forum_id | forum_title | forum_authors | forum_abstract | forum_keywords | forum_decision | forum_pdf_url | forum_url | venue | year | reviews
---|---|---|---|---|---|---|---|---|---|---
BJeY6sR9KX | Aligning Artificial Neural Networks to the Brain yields Shallow Recurrent Architectures | [
"Jonas Kubilius",
"Martin Schrimpf",
"Ha Hong",
"Najib J. Majaj",
"Rishi Rajalingham",
"Elias B. Issa",
"Kohitij Kar",
"Pouya Bashivan",
"Jonathan Prescott-Roy",
"Kailyn Schmidt",
"Aran Nayebi",
"Daniel Bear",
"Daniel L. K. Yamins",
"James J. DiCarlo"
] | Deep artificial neural networks with spatially repeated processing (a.k.a., deep convolutional ANNs) have been established as the best class of candidate models of visual processing in the primate ventral visual processing stream. Over the past five years, these ANNs have evolved from a simple feedforward eight-layer architecture in AlexNet to extremely deep and branching NASNet architectures, demonstrating increasingly better object categorization performance. Here we ask, as ANNs have continued to evolve in performance, are they also strong candidate models for the brain? To answer this question, we developed Brain-Score, a composite of neural and behavioral benchmarks for determining how brain-like a model is, together with an online platform where models can receive a Brain-Score and compare against other models.
Despite high scores, typical deep models from the machine learning community are often hard to map onto the brain's anatomy due to their vast number of layers and missing biologically-important connections, such as recurrence. To further map onto anatomy and validate our approach, we built CORnet-S: an ANN guided by Brain-Score with the anatomical constraints of compactness and recurrence. Although a shallow model with four anatomically mapped areas and recurrent connectivity, CORnet-S is a top model on Brain-Score and outperforms similarly compact models on ImageNet. Analyzing CORnet-S circuitry variants revealed recurrence as the main predictive factor of both Brain-Score and ImageNet top-1 performance.
| [
"Computational Neuroscience",
"Brain-Inspired",
"Neural Networks",
"Simplified Models",
"Recurrent Neural Networks",
"Computer Vision"
] | https://openreview.net/pdf?id=BJeY6sR9KX | https://openreview.net/forum?id=BJeY6sR9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJg7LJeIxE",
"BkxCrz7fk4",
"B1g8YyWMkE",
"HJlTN1bzJE",
"S1lN6e1RRX",
"Bkgywd7cRm",
"BkxQpPQc0Q",
"HJlPsv75C7",
"BygKIUQ9R7",
"SJgSTjDd6X",
"B1ewCw7HTQ",
"r1goI8dgTQ",
"BkgUZyJi27",
"Hyx7wwpchX",
"BklwxmF537"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545105226714,
1543807558044,
1543798654337,
1543798580972,
1543528636390,
1543284823498,
1543284667442,
1543284639278,
1543284304848,
1542122428915,
1541908430521,
1541600851392,
1541234430131,
1541228379410,
1541210863401
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper824/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper824/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper824/Authors"
],
[
"ICLR.cc/2019/Conference/Paper824/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper824/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper824/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This work provides two contributions: 1) Brain-Score, that quantifies how a given network's responses compare to responses from natural systems; 2) CORnet-S, an architecture trained to optimize Brain-Score, that performs well on Imagenet.\\nAs noted by all reviewers, this work is interesting and shows a promising approach to quantifying how brain-like an architecture is, with the limitations inherent to the fact that there is a lot about natural visual processing that we don't fully understand. However, the work here starts from the premise that being more similar to current metrics of brain processes is by itself a good thing -- without a better understanding of which features of brain processing are responsible for good performance and which are mere by-products, this premise is not one that would appeal to most of the ICLR audience. In fact, the best performing architectures on imagenet are not the best scoring for Brain-Score. Overall, this work is quite intriguing and well presented, but as pointed out by some reviewers, requires a \\\"leap of faith\\\" in matching signatures of brain processes that most of the ICLR audience is unlikely to be willing to take.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting take on quantifying similarity of networks to brain visual processing, unclear significance of that result for ICLR audiences\"}",
"{\"title\": \"concerns addressed\", \"comment\": \"Thank you for answering my comments, and for running the additional tests.\"}",
"{\"title\": \"additional questions?\", \"comment\": \"Hi,\\n\\nwe were wondering if our new analyses and comments were sufficient or whether you had additional remarks that we could address before the deadline?\"}",
"{\"title\": \"additional questions?\", \"comment\": \"Hi,\\n\\nwe were wondering if our new analyses and comments were sufficient or whether you had additional remarks that we could address before the deadline?\"}",
"{\"title\": \"Concerns addressed\", \"comment\": \"Thank you, my concerns were satisfactorily addressed.\"}",
"{\"title\": \"clarifications & generalization analyses\", \"comment\": \"Thank you for your review.\\n\\n1. Regarding CORnet optimization:\\n a. First, we\\u2019d like to clarify that the search for a good CORnet architecture was done by hand. We trained a few models with different architectures, evaluated their ImageNet and Brain-Score performance, and tried to improve architectural choices in the next iteration. While we tried to limit our knowledge of Brain-Scores during model building, you are rightfully pointing out that CORnet ought to be evaluated on an independent set.\\n\\n b. We therefore collected a new behavioral dataset that used new objects that we\\u2019ve not used before in any of our benchmarks. The correlation between the old and the new behavioral rankings was strong (.83).\\n\\n c. We also compared rankings on an independent neural dataset collected on the same images as before, and found a very strong correlation too (.93)\\n\\n d. To push models further, we also compared the rankings of neural predictivity on a very dissimilar dataset (a subset of MS COCO). The correlation was still robust (.76)\\n\\n e. The details of these analyses can be found in Section 4.2 and also in Appendix C. Overall, CORnet-S is among the top models for all these analyses.\\n\\n2. Regarding quantifying model generalization on other Machine Learning datasets: This is a good point, thus we strived to provide some measure of generalization in the revised manuscript (Fig. 2; Section 4.2). Following Kornblith et al. (2018), we compared model rankings when their classifiers were retrained for CIFAR-100. Here, we observed that Brain-Score is a good predictor of how well models will transfer to CIFAR-100 as fixed feature encoders (r=.69).\\n\\n3. Regarding the idea to optimize model search not using Brain-Score. This is an interesting idea, however, one that would take many more than just a few weeks to test:\\n a. 
As mentioned before, the search is not automated, so it might take a very long time (in the case of CORnet it took over a year)\\n\\n b. Training these models on ImageNet takes a few days and several GPUs at least, so given resource constraints, it would be challenging to perform this search.\\n\\n c. Given how correlated Brain-Score and ImageNet performance are, it is not unlikely that other datasets would lead to a similar circuitry. The key difference between optimizing for Brain-Score and optimizing for another dataset is that Brain-Score is not a proxy to what we want. Rather, it is the exact measure for how brain-like a given model is. Adding new recordings to Brain-Score that break existing models will thus further constrain these mechanistic hypotheses of the brain. By optimizing for Brain-Score we are making sure that we are optimizing for our ultimate goal \\u2014 a model of the human visual system.\"}",
"{\"title\": \"generalization analyses\", \"comment\": \"Thank you for your review!\\n\\n1. Contribution to ML:\\n\\n a. Following your comment, we tested how similar model ranking on Brain-Score is to the ranking on four independent and very different datasets (see Section 4.2 and Figure 3). We found a robust generalization of scores to other brain datasets (neural and behavioral) as well as to CIFAR-100. Using the models from this study as fixed feature encoders, Brain-Score is quite predictive of how well a model will perform on this ML dataset.\\n\\n b. Taking a step back, we emphasized in the revised paper that the main purpose of Brain-Score is to help build brain-like models. Those may or may not also be good machine learning models. For instance, the human visual system seems to be more robust to various image perturbations, so machine learning models could benefit from that. On the other hand, we explicitly demand models to make mistakes if humans make mistakes. This is typically orthogonal to typical ML goals of reaching maximal performance.\\n\\n2. Contribution to neuroscience:\\n a. In the revised manuscript, we attempted to clarify (Sections 4.2 and 5) that constraining models with more and more brain data will necessarily converge them to the inner workings of the brain. Already today with three limited datasets that currently underlie Brain-Score we observe a very high predictive power of neural responses in the intermediate layers of the primate visual system. These models have not been trained in any way to match these internal representations. We think this is a strong indication that the models are already good mechanistic hypotheses of the inner workings of the primate visual system. Adding data to Brain-Score that breaks existing models will help us to further refine these models in making them more brain-like.\\n\\n b. 
Perhaps this comment was meant to question whether, under Brain-Score\\u2019s guidance, we would ever find the circuitries that match those in the brain. Having no anatomical constraints in Brain-Score at the moment, it would be hard to expect those circuitries to emerge. As detailed in our CORnet-S circuitry analysis, many different circuitries appear to work sufficiently well at the moment. Perhaps this reflects the lack of constraints in Brain-Score at the moment. On the other hand, it remains unclear to what extent we want to model the visual system to begin with. Do we need spikes? Biochemical details? Or should we focus on functional properties only? With Brain-Score, we follow the latter, at least for now, and, given that models can predict neural and behavioral responses very accurately, we think they are already a good functional match which now needs to be further refined as described above.\\n\\n(continued in separate comment due to max characters)\"}",
"{\"title\": \"clarifications\", \"comment\": \"3. The importance of recurrence. We now performed a detailed analysis of what makes CORnet-S work and the amount of recurrence indeed turned out to be the main predictive factor (see Figure 5) of both Brain-Score and ImageNet top-1 performance. Among other factors that played a role, we identified the presence of a skip connection, the size of the bottleneck, and the number of convolutions within each model area (for ImageNet only) as the key contributing factors to CORnet-S\\u2019 performance.\\n\\n4. We found that there was a correlation (.65 for V4 and .87 for IT) which is strong enough to connect neurons to behavior but not sufficient for behavior alone to explain the entire neural population, warranting a composite set of benchmarks.\\n\\n5. Can the number of neurons explain differences in neural predictivity?\\n a. In Figure 6 we show that there does not appear to be a correlation between the number of features and Brain-Score. \\n\\n b. Also note that normally neural predictivity analyses are done by first PCA\\u2019ing features down to 1000 components and then building a regression map between neural responses and model responses. This is perhaps another good control showing that at least the number of features that go into regression is always matched across models.\\n\\n6. Fig. 1 gray dots: We now included an explanation in the caption that these values come from a class of simplified five-layer models that had various hyperparameters manipulated (values captured at different points during model training). The purpose of this test was to see the correlation between ImageNet and Brain-Score in lower performance regimes.\\n\\n7. Consider for instance two models with IT scores of .580 and .581. Our current approach would simply use these values for the final Brain-Score without re-weighting. 
We also considered an alternative approach to compute the Brain-Score where models would be ranked on each benchmark separately and then receive a mean \\u201cBrain-Rank\\u201d. However in that case, the above example with two very close values would lead to the two models receiving different ranks with the same distance as if the values were .580 and .620. This is just to say that we chose to preserve the distance in scores as opposed to a ranking approach. We clarified this in the paper.\\n\\n8. Comparison against June 2018 arXiv paper. We now included a new section comparing CORnet-S to other recurrent models (see Section 3.2), including the June 2018 one. Briefly, CORnet-S is an attempt to build a high performing model (both on ImageNet and Brain-Score) while keeping it as simple as possible. The June model focuses more on the anatomical necessity of recurrent and feedback connections without constraining the model size. As a result, CORnet-S is substantially simpler (shallower, simpler connectivity, takes much less GPU memory, and is mapped to brain areas) than the June 2018 one.\"}",
"{\"title\": \"more analyses\", \"comment\": \"Thank you again for your review!\\n\\n1. Comments on using temporal responses:\\n This is a good suggestion that we were also interested in. To get a sense of whether temporal information adds more discriminatory power on top of the existing benchmarks, we compared model predictivity at two response bins: 90-110 ms (early response) and 190-210 ms (late response), since, following the June 2018 arXiv paper, the main differences in model scores appeared to emerge at later time steps (>170 ms). The resulting model scores can be found in Figure 8. From our analysis, early responses seem to be well-predicted across most models. Differences emerge in the late responses; however, they do not appear to be different from the model scores on the mean 70-170 ms data. Similarly to the arXiv paper, we found that a feed-forward model already predicts temporal responses very well, and the recurrent models only out-perform the feed-forward model when the image is explicitly taken off (cf. arXiv paper, Fig. 5C). We hypothesize that the images shown for the current set of recordings might simply not warrant the functional use of recurrent connections. However, we are hopeful that future recordings focusing in on this temporal aspect (such as https://www.biorxiv.org/content/early/2018/06/26/354753) might help to further distinguish between models. So far, we have not yet observed this and thus chose not to include these scores. To contribute to community efforts in this domain, we are releasing temporal (20 ms bins) neural recordings from the data presented in this paper in the Brain-Score platform.\\n\\n2. Comments on performing more analyses:\\n\\n a. This is a great suggestion and we quantified the necessity of each component in the CORnet-S circuitry (see new Fig. 5 in the updated paper). We identified that having enough recurrent steps was the single most predictive factor. 
Having a large enough bottleneck and a skip connection were other important factors, especially for ImageNet. Other factors that contributed mostly to ImageNet performance were the number of convolutions within an area, the total number of areas in the model, and the use of Batch Normalization. Again, an even larger bottleneck than originally considered helped slightly for ImageNet performance, but not for Brain-Score. However, having one less layer on the block circuitry or one more area in the model did not seem to affect Brain-Score significantly.\\n\\n b. We also varied the length of the \\u201clocal\\u201d recurrent connection. Note that this recurrence is not exactly local as it goes two convolutional layers back. Reducing the length of this recurrent connection from two layers to one layer reduces model performance, and re-running features through the entire circuitry seems to be most beneficial. Considering that recurrence virtually increases the depth of the network, longer recurrent connections seem intuitive since they create the network with the largest virtual depth.\\n\\n c. It may also be interesting to add longer range feedback connections that would go across model areas, e.g., from V4 to V1. This is only meaningful when unrolling in time is done in a biological manner so that feedback can be properly integrated with feed-forward inputs. We tested several CORnet variants with biological time unrolling and feedback from V4 to V1 (but simpler block structure to fit the model into memory) but observed no real change in adding a feedback connection. Our interpretation is that feedback connections do not automatically help or hurt the model\\u2019s performance but a wider search might pinpoint specific types of connections that may help. However, this would be a topic for another study.\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Great, thank you for the quick clarification! All of these are great suggestions, and we are currently running analyses to quantify what parts are critical to the model.\"}",
"{\"title\": \"clarifying \\\"type of weight\\\"\", \"comment\": \"The question is generally asking about what specifically about the CORNET-S architecture makes it perform well\\non Brain-Score (and which for ImageNet) compared to other similar networks. The types of weights refers to feedback vs recurrent vs convolutional filters vs pooling (and connections between which layers are most critical).\", \"i_was_just_asking_for_some_form_of_ablation_study_or_analysis_similar_to_figure_4_in_https\": \"//arxiv.org/pdf/1807.00053v2.pdf to show what parts of your architecture you feel are critical for a good\\nmodel of visual processing in the brain.\"}",
"{\"title\": \"can you please clarify \\\"type of weight\\\"?\", \"comment\": \"Thank you for your review. Could you please clarify what you mean by the \\\"type of weight in the network\\\"?\"}",
"{\"title\": \"Not enough analysis\", \"review\": \"Please consider this rubric when writing your review:\\n1. Briefly establish your personal expertise in the field of the paper.\\n2. Concisely summarize the contributions of the paper.\\n3. Evaluate the quality and composition of the work.\\n4. Place the work in context of prior work, and evaluate this work's novelty.\\n5. Provide critique of each theorem or experiment that is relevant to your judgment of the paper's novelty and quality.\\n6. Provide a summary judgment if the work is significant and of interest to the community.\\n\\n1. I am a researcher working at the intersection of machine learning and\\nbiological vision. I have experience with neural network models and\\nvisual neurophysiology.\\n\\n2. This paper makes two contributions: 1) It develops Brain-Score - a\\ndataset and error metric for animal visual single-cell recordings. 2)\\nIt develops (and brain-scores) a new shallow(ish) recurrent network\\nthat performs well on ImageNet and scores highly on brain-score. \\n\\n3. The development of Brain-Score is a useful invention for the field. A nice\\naspect of Brain-Score is that responses in both V4 and IT as well as behavioral\\nresponses are provided. I think it could be more useful if the temporal dynamics (instead of the\\nmean number of spikes) was included. This would allow to compare temporal\\nresponses in order to compare \\\"brain-like\\\" matches.\\n\\n4. 
This general idea is somewhat similar to a June 2018 Arxiv paper\\n(Task-Driven Convolutional Recurrent Models of the Visual System)\", \"https\": \"//arxiv.org/abs/1807.00053\\nbut this is a novel contribution as it uses the Brain-Score dataset.\\n\\nOne limitation of this approach relative to the June 2018 ArXiv paper\\nis that the Brain-Score method is just representing the mean neural\\nresponse to each image - The Arxiv paper shows that different models\\ncan have different temporal responses that can also be used to decide\\nwhich is a closer match to the brain.\\n\\n5. More analysis of why CORNET-S is best among compact models would greatly\\nstrengthen this paper. What do the receptive fields look like? How do they compare\\nto the other models? What about the other high performing networks (e.g. DenseNet-169)?\\nHow sensitive are the results to each type of weight in the network? What about feedback connections\\n(instead of local recurrent connections)? \\n\\n6. This paper makes a significant contribution, in part due to the\\ndevelopment and open-sourcing of Brain-Score. The significance of the\\ncontribution of the CORnet-S architecture is limited by the\\nlack of analysis into what aspects make it better than other models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting attempt to bridge the gap between ML and neuroscience\", \"review\": \"In this interesting study, the authors propose a score (BrainScore) to (1) compare neural representations of an ANN trained on imagenet with primate neural activity in V4 and IT, and (2) test whether ANN and primate make the same mistakes on image classification. They also create a shallow recurrent neural network (Cornet) that performs well according to their score and also reasonably well on imagenet classification task given its shallow architecture.\\n\\nThe analyses are rigorous and the idea of such a score as a tool for guiding neuroscientists building models of the visual system is novel and interesting.\", \"major_drawbacks\": \"1. Uncertain contribution to ML: it remains unclear whether architectures guided by the brain score will indeed generalize better to other tasks, as the authors suggest.\\n\\n2. Uncertain contribution to neuroscience: it remains unclear whether finding the ANN resembling the real visual system most among a collection of models will inform us about the inner working of the brain.\", \"the_article_would_also_benefit_from_the_following_clarifications\": \"3. Are the recurrent connections helping performance of Cornet on imagenet and/or on BrainScore?\\n\\n4. Did you find a correlation between the neural predictivity score and behavioral predictivity score across networks tested? If yes, it would be interesting to mention.\\n\\n5. When comparing neural predictivity score across models, is a model with more neurons artificially advantaged by the simple fact that there is more likely a linear combination of neurons that map to primate neural activations? Is cross-validation enough to control for this potential bias?\\n\\n6. Fig1: what are the gray dots?\\n\\n7. \\u201cbut it also does not make any assumptions about significant differences in the scores, which would be present in ranking. \\u201c\\nWhat does this mean?\\n\\n8. 
How does Cornet compare to this other recent work: https://arxiv.org/abs/1807.00053 (June 20 2018) ?\", \"conclusion\": \"This study presents an interesting attempt at bridging the gap between machine learning and neuroscience. Although the impact that this score will have in both ML and Neuroscience fields remains uncertain, the work is sufficiently novel and interesting to be published at ICLR. I am fairly confident in my evaluation as I work at the intersection of deep learning and neuroscience.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"promising work, further tests needed\", \"review\": \"This is an interesting paper in which the authors propose a shallow neural network, the architecture of which was optimized to maximize brain score, and show that it outperforms other shallow networks at imagenet classification. The goal of this paper \\u2014 allowing brain data to improve our neural nets \\u2014 is great. The paper is well written (except for a comment on clarity right below), and presents an interesting take on solving this problem.\", \"it_is_a_little_unclear_how_the_authors_made_cornet_optimize_brain_score\": \"\\u201cHowever, note that CORnet-S was developed using Brain-Score as a guiding benchmark and although it was never directly used in model search or optimization, testing CORnet-S on Brain-Score is not a completely independent test.\\u201d Making these steps clearer is crucial for evaluating better what the model means. In the discussion \\u201cWe have tested hundreds of architectures before finding CORnet-S circuitry and thus it is possible that the proposed circuits could have a strong relation to biological implementations.\\u201d implies that the authors trained models with different architectures until the brain score was maximized after training. A hundred(s) times of training on 2760 + 2400 datapoints are probably plenty to overfit the brainscore datasets. The brain score is probably compromised after this, and it would be hard to make claims about the results on the brain modeling side. The authors acknowledge this limitation, but perhaps a better thing is to add an additional dataset, perhaps one from different animal recordings, or from human fMRI?\\n\\nArguably, the goal of the paper is to obtain a model that overpowers other simpler models, and not necessarily to make claims about the brain. The interesting part of the paper is that the shallow model does work better than other shallow models. 
The authors mention that brain score helps CORnet be better at generalizing to other datasets. Including these results would definitely strengthen the claim since both brain score and imagenet have been trained on hundreds of times so far. \\n\\nAnother way to show that brain score helps is to show it generalizes above or differently from optimizing other models. What would have happened if the authors stuck to a simple, shallow model and instead of optimizing brain score optimized performance (hundreds of times) on some selected image dataset (this selection dataset is separate from imagenet, but the actual training is done on imagenet) and then tested performance on imagenet? Is the effect due to the brain or to the independent testing on another dataset?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJlt6oA9Fm | Selective Convolutional Units: Improving CNNs via Channel Selectivity | [
"Jongheon Jeong",
"Jinwoo Shin"
] | Bottleneck structures with identity (e.g., residual) connections are now an emerging popular paradigm for designing deep convolutional neural networks (CNNs) for processing large-scale features efficiently. In this paper, we focus on the information-preserving nature of the identity connection and utilize this to enable a convolutional layer to have a new functionality of channel-selectivity, i.e., re-distributing its computations to important channels. In particular, we propose the Selective Convolutional Unit (SCU), a widely-applicable architectural unit that improves parameter efficiency of various modern CNNs with bottlenecks. During training, SCU gradually learns the channel-selectivity on-the-fly via the alternating usage of (a) pruning unimportant channels, and (b) rewiring the pruned parameters to important channels. The rewired parameters emphasize the target channel in a way that selectively enlarges the convolutional kernels corresponding to it. Our experimental results demonstrate that the SCU-based models without any postprocessing generally achieve both model compression and accuracy improvement compared to the baselines, consistently for all tested architectures. | [
"convolutional neural networks",
"channel-selectivity",
"channel re-wiring",
"bottleneck architectures",
"deep learning"
] | https://openreview.net/pdf?id=SJlt6oA9Fm | https://openreview.net/forum?id=SJlt6oA9Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryx_vctQl4",
"rkesW6WlyN",
"ryemIJH_0Q",
"SygKcAVOCQ",
"SkezmpVOCm",
"Skeoah4OCX",
"HJe_GbTYpQ",
"BkgdeU8phQ",
"r1e0SGTY2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544948319870,
1543671042892,
1543159627308,
1543159440858,
1543159066118,
1543158979383,
1542209807790,
1541395951631,
1541161542412
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper822/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper822/Authors"
],
[
"ICLR.cc/2019/Conference/Paper822/Authors"
],
[
"ICLR.cc/2019/Conference/Paper822/Authors"
],
[
"ICLR.cc/2019/Conference/Paper822/Authors"
],
[
"ICLR.cc/2019/Conference/Paper822/Authors"
],
[
"ICLR.cc/2019/Conference/Paper822/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper822/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper822/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposed the Selective Convolutional Unit (SCU) for improving the 1x1 convolutions used in the bottleneck of a ResNet block. The main idea is to remove channels of low \\u201cimportance\\u201d and replace them by other ones which are in a similar fashion found to be important. To this end the authors propose the so-called expected channel damage score (ECDS) which is used for channel selection. The authors also show the effectiveness of SCU on CIFAR-10, CIFAR-100 and Imagenet.\\n\\nThe major concerns from various reviewers are that the design seems over-complicated and that the experiments are not state-of-the-art. In response, the authors add some explanations on the design idea and new experiments of DenseNet-BC-190 on CIFAR10/100. But the reviewers\\u2019 major concerns remained and they did not change their ratings (6,5,5). Based on current results, the paper is proposed for borderline lean reject.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A new design for channel selection, yet over-complicated.\"}",
"{\"title\": \"After First Rebuttal/Revision\", \"comment\": \"Dear Reviewers and AC,\\n\\nWe hope that all of you checked our rebuttal/revision and we would be very happy to answer any remaining questions/concerns.\\n\\nThanks for your contribution to ICLR 2019,\\nAuthors\"}",
"{\"title\": \"Common response to all the reviewers\", \"comment\": \"We sincerely thank all the reviewers for their valuable comments and effort to improve our manuscript. Major revisions in the new draft are temporarily colored in \\u201cred\\u201d for the reviewers\\u2019 convenience. Below, we provide a brief summary of the major revisions:\\n\\n* Following the suggestion of AnonReviewer3, we have reduced the introductory part on the bottleneck structure in Section 2.1. Instead, the space is devoted to describing how SCU is optimized in Section 2.4. \\n\\n* Following the suggestion of AnonReviewer2/3/4, we have added more experimental results on a state-of-the-art level model (namely, DenseNet-BC-190) in Table 2.\\n\\n* Based on the comment of AnonReviewer2, we have provided more detailed intuitions and motivations for why we study the bottleneck structures in Section 2.1.\\n\\n* Section 2.2 is significantly revised to help readers better understand CD and NC.\\n\\n\\nIn what follows, we respond to several important concerns raised by multiple reviewers in common. \\n\\nQ1. The proposed method is overly complicated (AnonReviewer2/4).\\n- We fully understand your concern, and revised the draft (marked by \\u201cred\\u201d texts) significantly for better understanding. Regardless, the proposed method, SCU, is highly modularized and one can use it easily for any bottleneck CNN architecture (without any deep understanding of it). In particular, our PyTorch implementation of SCU works just like a standard convolutional layer, and one can convert an existing model into its SCU counterpart by simply modifying <10 lines of code. We plan to open our source code after the paper decision, and we believe that this will further help the readers to understand the details of our method.\\n\\nA way to understand our method better is to \\u201cseparate\\u201d the complexity coming from NC from the rest of our method.
This is because NC is a component for efficiency rather than our core idea. In other words, the complexity coming from NC can be replaced with other methods as long as they provide the efficiency. We revised the draft to emphasize this point (see the NC part of Section 2.2).\\n\\nQ2. The results are not the state-of-the-art (AnonReviewer2/3/4).\\n- We believe that the effectiveness of our method is not limited to the model size. In the revision, we additionally perform a set of experiments on DenseNet-BC-190 with mixup augmentation [1], which showed the state-of-the-art level performance on CIFAR-10/100. The new results show that SCU is still beneficial to use, in a sense that: (a) S+D shows significant reduction of model parameters, (b) there is a meaningful improvement in accuracy from S+D to S+D+R, which indicates that realloc meaningfully utilized the pruned parameters, and (c) S+D+R consistently improves both accuracy and compression compared to the bare baseline, e.g., for CIFAR-10, we achieved reductions in (compression, error) = (-53.1%, -1.10%). \\n\\nWe emphasize again that our goal is neither to improve accuracy nor pruning alone. It is for improving the network efficiency that allows some balancing between accuracy and compression on demand. We believe it is an important problem rarely studied in the literature and we provide an important and novel step toward it.\\n\\n[1] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Many thanks for your time and effort to review our paper. We respond to your questions and concerns one-by-one in what follows. In addition, please check out the common response we have posted together, which addresses several important concerns raised by multiple reviewers in common.\\n\\nQ1. \\u201cThe parameter flops reduced rate seems not very impressive.\\u201d \\n- As we mentioned in our common response, \\u201cimproving pruning efficiency\\u201d is not our major goal. Even though we use NC to induce more sparsity during training (as pruning efficiency depends on the training scheme), this is far from our key contributions, but closer to adopting Bayesian learning for better efficiency. Rather, we aim to explore a \\u201csafer way\\u201d of pruning and rewiring, without any post-processing after training. Namely, our goal is to achieve accuracy improvement and model compression simultaneously in a single pass of training. In this regard, our work is more about designing a new training scheme, complementary to any pruning works, for improving the network efficiency that allows some balancing between accuracy and compression on demand. \\n\\nQ2. \\u201cCan the SCU accelerate the forward process?\\u201d\\n- Although implementing SCU for maximal acceleration is outside our scope, we expect that our method will help accelerate networks at inference time. Recall that SCU has two additional layers (NC and CD) compared to the standard BN-ReLU-Conv. The complexity from the layers, however, can be eliminated for those who need efficient inference:\\n\\n (a) As mentioned in Section 2.4, the noises in NC can be replaced by their expected values for faster inference.
In fact, even the entire NC layer can be omitted by multiplying the expected values to the parameters in the former BN layer.\\n \\n (b) In the case that SCU is trained using only dealloc (for compression), CD contains no spatial shifting so that the role of CD is nothing but re-indexing channels.\\n\\nOverall, the only expense of using SCU is a channel-selection layer via tensor indexing operation, while the remaining layers can work in a much smaller dimension in return. Comparative evaluations of CPU inference time in the table below show that SCU further improves the efficiency of CondenseNet-182 model through the training process, while outperforming other competitive models. \\n\\nCPU* inference time (per image)\\n+-----------------------------------+-------------+--------------+----------+\\n| Model | Before | After | Error |\\n| | training | training | rates |\\n+-----------------------------------+-------------+--------------+----------+\\n| ResNeXt-29 | - | 471.2ms | 3.58% |\\n+-----------------------------------+-------------+--------------+----------+\\n| DenseNet-BC-250 (k=24) | - | 399.5ms | 3.64% |\\n+-----------------------------------+-------------+--------------+----------+\\n| CondenseNet-SCU-182 | 114.8ms |*52.5ms* | 3.63% |\\n| | | (-54.3%) | |\\n+-----------------------------------+-------------+--------------+----------+\\n*Intel Xeon E5-2630v4 @ 2.20GHz\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Many thanks for your time and effort to review our paper. We respond to your questions and concerns one-by-one in what follows. In addition, please check out the common response we have posted together, which addresses several important concerns raised by multiple reviewers in common.\\n\\nQ1. For your editorial suggestions\\n- Many thanks for your thoughtful editorial suggestions. We revised the draft following them, where major revisions are marked in \\u201cred\\u201d.\\n\\nQ2. \\u201cHow is the discrete \\\\pi optimized in the training process?\\u201d\\n- As we describe in Section 2.4, \\\\pi is not directly trained via SGD, but updated via dealloc and realloc operations during training. \\n\\nQ3. \\u201cThe proof of proposition 1 does not look correct to me.\\u201d\\n- We remark that Proposition 1 does not involve anything related to optimization with X. Proposition 1 can be applied regardless of whether the network is trained or not (even in the case that the network is randomly initialized). Given that the network is fixed, \\\\theta is completely independent of the distribution of X by design of NC, so that we can factorize the expectation.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Many thanks for your time and effort to review our paper. We respond to your questions and concerns one-by-one in what follows. In addition, please check out the common response we have posted together, that addresses several important concerns raised by multiple reviewers in common.\\n\\nQ1. \\\"I don't see clear motivation for re-using the same features.\\\"\\n- The motivation for the re-using is to give important features more parameters. As explained in Section 2.1, notice that a 1x1 convolution performs nothing but a \\\"pixel-wise linear transformation\\\" on feature dimension, so that its parameters can be represented by a N x N\\u2019 matrix, provided that N=(# input channels) and N\\u2019=(# output channels). This implies that a single input channel is processed by 1 x N\\u2019 parameters when an input comes into the layer, and therefore re-using a feature n times implies that the feature is processed by n x N\\u2019 parameters. \\n\\nQ2. \\\"I did not understand the usefulness of applying the spatial shifting.\\\"\\n- As explained in Section 2.2 in more details, spatial shifting is a trick to properly utilize the re-used parameters. Considering again that n copies of a feature occupies n x N\\u2019 parameters of the matrix (Q1-1 above), one may notice that the naive copy would not help on expressivity of the convolution, since it is basically a linear transformation. Even though SCU contains ReLU inside the structure, this kind of phenomenon does happen during training. By using spatial shifting, we now can utilize the n x N\\u2019 parameters for \\\"enlarging\\\" the convolution kernel specially for the feature. Ablation study on spatial shifting demonstrated in Figure 3a clearly shows its effectiveness. \\n\\nQ3. 
\\\"It is also not clear whether the proposed technique is applicable to only bottleneck layers.\\\"\\n- We expect that the proposed method is still valid for structures other than bottlenecks (as mentioned in Section 4). Nevertheless, we primarily focus on the bottleneck setting in the presence of an identity connection, because we expect this scenario is one of the best applications of channel-selectivity. In Section 2.1 of the revised draft, we provide more detailed intuitions and motivations for why we study such bottleneck layers.\\n\\nQ4. \\u201cThe gain in reducing the model parameters is not that great as the R parameters are only a small fraction of the total model parameters.\\u201d\\n- The fraction of the total parameters in bottlenecks is NOT always small, and several state-of-the-art models invest a very large portion of their parameters in bottlenecks, as follows:\\n\\n (a) The CondenseNet-SCU-182 model presented in Table 3 is a nice example of achieving high efficiency by exploiting its high fraction of bottlenecks. Initially, CondenseNet-SCU-182 has 6.29M parameters with 741M FLOPs in total before training the model. As reported in Table 3, these values can be reduced to 2.59M and 286M, respectively, and this reduction is mainly due to compression on the bottlenecks. In fact, this model invests 5.89M parameters only for bottlenecks, which is *93.7%* of the total parameters. \\n\\n (b) DenseNet-BC-190, newly added in the revised draft as a state-of-the-art model, also invests a lot of parameters in bottlenecks, namely 17.5M (as reported in Table 1) out of 25.6M. In general, DenseNet models heavily rely on the bottleneck structure for efficiency, and the overhead from the bottleneck itself becomes increasingly large as the model grows. \\n\\nAs the examples demonstrate, reducing overhead from bottlenecks has been one of the crucial barriers for designing a large-scale, yet efficient CNN model.\"}",
"{\"title\": \"interesting idea, but really hard to read\", \"review\": \"This paper proposes the Selective Convolutional Unit (SCU), which can replace the bottleneck in a ResNet block. The difference between SCU and the bottleneck is that SCU adds a Channel Distributor (CD) and a Noise Controller (NC) to reduce and replace the channels. The paper also proposes an Expected channel damage score (ECDS) to measure the importance of a channel, to decide whether to remove or replace it. The experiments show results on CIFAR-10/100 and ImageNet with different network architectures.\\nThe idea is interesting; however, the parameter flops reduced rate seems not very impressive. The SCU seems too complicated, so I want to know whether the SCU could accelerate the forward process on modern GPUs or mobile devices? \\nThe results of these networks seem not state-of-the-art; if the results can be improved, the SCU could be more convincing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"some concerns need to be clarified\", \"review\": \"This is an architecture design paper. It proposes a general structure called Selective Convolutional Unit that the authors claim to be useful for various CNN models. The SCU structure contains two major parts: CD and NC. CD for compressing/pruning channels and NC for multiplicative noise. The paper gives a measure, called expected channel damage score, on the change of the output for SCU. It also shows the effectiveness of SCU on CIFAR-10, CIFAR-100 and ImageNet.\", \"some_questions_and_concerns\": \"1. The paper spends too much space introducing the bottleneck structures and a whole lot of the details on the optimization of NC and CD are put in the appendix. I would suggest reducing the introductory section and putting a shorter version of Appendices A and B in the main text so that the readers know more about the architecture and how it is optimized. In particular, the description of NC is confusing since without looking at the appendix it is not clear how the prior p(\\\\theta) is used. \\n\\n2. The experiment shows improvement on DenseNet and ResNeXt, but the result is not the state-of-the-art. Wide-ResNet seems to get better accuracy on both CIFAR-10 and CIFAR-100 compared to the best accuracy reported in this paper. Also the number reported by the original DenseNet paper on ImageNet seems to be better (DenseNet-264 has an error rate of 22.15/20.80).\\n\\n3. In your CD design, the channel assignment \\\\pi is a discrete variable. How is it optimized in the training process?\\n\\n4. The proof of proposition 1 does not look correct to me. The optimization procedure makes use of the data X to determine your NC variable \\\\theta so \\\\theta depends on X. In this way you cannot factorize the expectation in the equation below (20) in your appendix.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"promising idea but over-complicated method.\", \"review\": \"The main contributions of the paper are a set of new layers for improving the 1x1 convolutions used in the bottleneck of a ResNet block. The main idea is to remove channels of low \\u201cimportance\\u201d and replace them by other ones which are in a similar fashion found to be important. To this end the authors propose the so-called expected channel damage score (ECDS) which is used for channel selection. The authors have shown in their paper that the new layers improve performance mainly on CIFAR, while there\\u2019s also an experiment on ImageNet.\\nIt looks to me that the proposed method is overly complicated. It is also described in a complicated manner. I don't see clear motivation for re-using the same features. Also I did not understand the usefulness of applying the spatial shifting of the so-called Channel Distributor. It is also not clear whether the proposed technique is applicable to only bottleneck layers.\\nThe results show some improvement, but not a great one, and over results that as far as I know are not state-of-the-art (to my knowledge the presented results on CIFAR are not state-of-the-art). The results on ImageNet also show decent but not great improvement. Moreover, the gain in reducing the model parameters is not that great as the R parameters are only a small fraction of the total model parameters. Overall, the paper presents some interesting ideas but the proposed approach seems over-complicated.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BJgK6iA5KX | AutoLoss: Learning Discrete Schedule for Alternate Optimization | [
"Haowen Xu",
"Hao Zhang",
"Zhiting Hu",
"Xiaodan Liang",
"Ruslan Salakhutdinov",
"Eric Xing"
] | Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters. Appropriately scheduling the optimization of a task objective or a set of parameters is usually crucial to the quality of convergence. In this paper, we present AutoLoss, a meta-learning framework that automatically learns and determines the optimization schedule. AutoLoss provides a generic way to represent and learn the discrete optimization schedule from metadata, and allows for a dynamic and data-driven schedule in ML problems that involve alternating updates of different parameters or from different loss objectives.
We apply AutoLoss on four ML tasks: d-ary quadratic regression, classification using a multi-layer perceptron (MLP), image generation using GANs, and multi-task neural machine translation (NMT). We show that the AutoLoss controller is able to capture the distribution of better optimization schedules that result in higher quality of convergence on all four tasks. The trained AutoLoss controller is generalizable -- it can guide and improve the learning of a new task model with different specifications, or on different datasets. | [
"Meta Learning",
"AutoML",
"Optimization Schedule"
] | https://openreview.net/pdf?id=BJgK6iA5KX | https://openreview.net/forum?id=BJgK6iA5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJgG8g-Kx4",
"SkxBNIaEeV",
"BkerpAhxlN",
"Bke9KJljRX",
"B1lSm_S9AX",
"SJl_vpqYRQ",
"rkgn_XcxRm",
"ByxRxkQFpX",
"BJlq-BGFa7",
"rylM2EzFpX",
"ByeGCYgKTX",
"BklC6-auT7",
"rked778JaQ",
"BkgdU-e16m",
"HklV1vNp2m"
],
"note_type": [
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545306185669,
1545029164593,
1544765117454,
1543335810023,
1543292957349,
1543249247780,
1542656883850,
1542168310186,
1542165761757,
1542165674391,
1542158793657,
1542144453557,
1541526304041,
1541501264458,
1541387996234
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"~Fei_Tian1"
],
[
"ICLR.cc/2019/Conference/Paper821/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper821/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/Authors"
],
[
"ICLR.cc/2019/Conference/Paper821/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper821/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper821/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for pointing us to your work and we will cite your paper in a future version\", \"comment\": \"Thank you for pointing us to your work [1], which studies a similar topic concurrently with us. Both works focus on designing methods for introducing dynamics into objectives/loss functions. Specifically, [1] tries to directly cast the objective function as a learnable neural network (learned by measuring the similarity between model prediction and ground-truth). By contrast, we focus on learning the update schedules (parameterized as NNs) in problems where multiple objectives and/or sets of parameters are involved. Our formulation allows for tackling alternate optimization problems such as (1) GANs, where multiple objectives have clear differences from each other and are combined in a minimax form; (2) multi-task learning, where each objective of interest is well-defined and prefixed but an update order is missing; (3) or even EM-based maximum likelihood estimation where some inference procedures involved (e.g. MCMC) aren't in the form of a gradient-based optimization -- in all these cases, the objective itself might be difficult to represent or approximate with neural networks. We will cite your paper in a future version and include the above discussion.\\n\\n[1] Wu, L., Tian, F., Xia, Y., Fan, Y., Qin, T., Jian-Huang, L., & Liu, T. Y. (2018). Learning to Teach with Dynamic Loss Functions. In Advances in Neural Information Processing Systems (pp. 6465-6476).\"}",
"{\"comment\": \"Dear the authors,\\n\\nThank you for referring to our ICLR'18 work \\\"Learning to Teach\\\" in your work. We have an extension of L2T in NeurIPS this year: \\\"Learning to Teach with Dynamic Loss Functions\\\" (https://papers.nips.cc/paper/7882-learning-to-teach-with-dynamic-loss-functions.pdf), which studies the automatic discovery of better objectives/loss functions adaptively in the optimization process, and therefore is quite related with your work. It'll be more comprehensive to position this one in your paper. Thanks.\\n\\nBest,\\nFei Tian\", \"title\": \"Thank you for referring to L2T\"}",
"{\"metareview\": \"The paper suggests using meta-learning to tune the optimization schedule of alternate optimization problems. All of the reviewers agree that the paper is worthy of publication at ICLR. The authors have engaged with the reviewers and improved the paper since the submission. I asked the authors to address the rest of the comments in the camera-ready version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}",
"{\"title\": \"Thanks for the suggestion!\", \"comment\": \"Thanks for the great suggestions again! We're working on generating new results using suggested metrics on the GAN and NMT experiments and will add the new results in the next version.\"}",
"{\"title\": \"Thanks for the comments!\", \"comment\": \"Thanks for the comments again! We have fixed this typo in the latest version.\"}",
"{\"title\": \"Thanks for clarifying the points, but stronger evaluation would be better.\", \"comment\": \"Thanks to the authors for addressing my comments. I\\u2019ve adjusted my score accordingly. I still think there are some weaknesses in terms of evaluation.\\n1. IS is not the only qualitative metric in GAN and DCGAN is not the state-of-the-art baseline. I would be curious to see how AutoLoss performs using some more recent GAN architectures. In addition to IS, the FID score is also a recent complementary metric to show the effectiveness. \\n2. I understand the response from comment 5, but reporting the metrics that the community cares about is also important. Sometimes, PPL is not directly correlated with BLEU or other indirect measures. Without reporting proper metrics, it is hard to know how the approach performed compared to Niehues & Cho 2017.\"}",
"{\"title\": \"Interesting\", \"comment\": \"Thank you for your detailed comments.\\n\\nThe addition of the appendix sections will greatly aid in reproducibility!\\n\\n@ Horizon bias: Interesting that you observe it in GAN but not in NMT.\", \"one_other_small_typo\": \"A.8. Double reference to Algorithm 1 in GAN section. You probably mean one to be Algorithm 2.\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"We thank all reviewers for giving valuable feedback to this paper. We have uploaded a revised manuscript in which we have incorporated the suggestions from the comments.\", \"we_want_to_highlight_the_following_revisions\": [\"We have added to Appendix A.1 the detailed algorithm how PPO is incorporated into AutoLoss.\", \"Add Appendix A.8 to disclose detailed hyperparameters to produce the presented results.\", \"Add Appendix A.9 to discuss the potential limitations of AutoLoss, as suggested by AnonReviewer3.\", \"We have updated Figure.4(b) to a scatter plot for clarity, suggested by AnonReviewer4.\", \"We have added several references suggested by AnonReviewer4 and revised several claims to be more accurate.\"]}",
"{\"title\": \"Response to AnonReviewer4 -- continued\", \"comment\": \">> Comment #8, #9\\nThanks for pointing us to these two works. In [1], the authors investigate several features and develop a controller that can adaptively adjust the learning rate of the ML problem at hand, similarly in a data-driven way. In [2], the authors propose to manually balance the training of G and D by monitoring how good G and D are, assessed by three quantities and realized by simple thresholding. By contrast, AutoLoss offers a more generic way to parametrize and learn the update schedule. Hence, AutoLoss fits into more problems (as we\\u2019ve shown in the paper).\\nWe have appropriately revised the two claims and cited them in the latest version.\\n\\n>> Comment #10\\nEmpirically, IS^2 or IS do not make much difference on the performance. The scaling term is a flexible parameter that controls the scale of the reward which we do not tune very much though.\\n\\n>> Comment #12\\nYes, in WGAN, it is preferable to train the critic till optimality. We have revised the statement for accuracy -- we observe in our experiments, for DCGANs with the vanilla GAN objective (JSD), more generator training than discriminator training generally performs better (but this may not be an effective hint for other GAN objectives as they behave very differently).\\n\\n>> Comment #13\\nWe have added Appendix A.8 to disclose all hyperparameters. All code and model weights used in this paper will be made available. 
\\n\\n>> Comment #14\\nWe\\u2019ve revised our statements to be more accurate: for all GANs and NMT experiments, we observe AutoLoss reaches better final convergence; for GAN 1:1 and GAN 1:9, AutoLoss trains faster; for NMT experiments, AutoLoss not only trains faster but also converges better.\\n\\nWe\\u2019d like to clarify that for all our GANs and NMT experiments, the stopping criterion of an experiment is either divergence or when we don\\u2019t observe improvement of convergence for 20 consecutive epochs. This is why in Fig.2, Fig.3(L) and Fig.4(c), it looks as if different methods are given different training times.\\n\\n>> Comment #15\\nWe have updated Figure.4(b) to a scatter plot, and fixed the mentioned typos in the current version.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thanks for the detailed and encouraging feedback! We reply to all comments below (relevant ones are put together):\\n\\n>> Comments #1, #11\\nWe mainly attribute the success of this simple training strategy to the simplicity of the model, the relatively low dimensionality of our input features, and the simplified action space (though all three suffice to obtain a good controller in the current settings). They make the training of the controller much easier compared to other RL tasks with higher dimensional features or larger output space.\\n\\nWe have added the detailed PPO-based training algorithm in Appendix A.1. While AutoLoss is amenable to different policy optimization algorithms, we empirically find PPO performs better on NMT, but REINFORCE performs better on GANs. As to the online setting, thanks for pointing us to the \\u201cshort-horizon bias\\u201d paper. We have indicated in the revision the existence of this bias -- this bias was observed on the GAN task -- overtraining G can increase IS in the short term, but may lead to divergence in the long term as G becomes too strong. On the other hand, we didn\\u2019t observe it noticeably harming the NMT task. We hypothesize the tradeoff is insignificant on NMT, as in our multi-task setting, slightly over-optimizing one task objective usually does not have an irreversible negative impact on the MT model (as long as the other objectives are optimized appropriately later on). \\n\\n>> Comments #2, #3\\nWe\\u2019d like to clarify that S=1 is consistent in the overhead section and Algorithm.1. S controls how many sequences to generate to perform a (batched) policy update (i.e. S is the batch size), and we set S=1 for all tasks. Only T differs across tasks, but we always update \\\\phi whenever a reward is generated.\\n\\nBack to comment #2: for regression and classification, we have experimented with larger S and found the improvement marginal.
As each reward is generated via an independent experiment, the correlations among gradients are unobvious. For large-scale tasks, we use memory replay to alleviate correlations in online settings (please see Algorithm 2 in Appendix A.1 in our revised version). \\nPerforming batched update with a larger S might help reduce correlations; However, a large S, as a major drawback, requires performing ST (S>>1) steps of task model training, in order to perform one step of controller update. This yields better per-step convergence, but longer overall training (wallclock) time for the controller to converge. There might exist sweet spots for S where one can achieve both good per-step convergence and short training time, but we skip the search of S and simply use S=1 as it performs well. \\nIt is worth noting that some recent literature uses a stochastic estimation of the policy gradient with batch size 1 as well, and report strong empirical results [1].\\n\\n[1] Efficient Neural Architecture Search via Parameter Sharing. ICML 2018\\n\\n>> Comment #4\\nWe observe the controller performance on all 4 tasks are insensitive to initialization. A good initialization (e.g. in NMT, equally assigning probabilities to each loss at the start of the training) indeed leads to faster learning, but most experiments with random initializations manage to converge to a good optima, thanks to \\\\epsilon-greedy sampling used in training.\\n\\n>> Comment #5\\nThey are the same -- there is a typo leading to confusion in the sentence \\u201c...in Figure 1 where we set different \\\\lambda in l_2 = \\\\lambda |\\\\Theta|_2...\\u201d; which should be \\u201c...in Figure 1 where we set different \\\\lambda in l_2 = \\\\lambda |\\\\Theta|_1...\\u201d. We have fixed it in the latest version.\\n\\n>> Comment #6\\nPlease see the last paragraph in page 5. For regression, classification and NMT, we split data into 5 partitions D_{train}^C, D_{val}^C, D_{train}^T, D_{val}^T, D_{test}. 
AutoLoss uses D_{train}^C and D_{val}^C to train the controller. Once trained, the controller guides the training of a new task model on another two partitions D_{train}^T, D_{val}^T. Trained task models are evaluated on D_{test}. Baseline methods use the union of D_{train}^C, D_{val}^C, D_{train}^T, D_{val}^T for training/validation. For GANs that do not need a validation or test set, we follow the same setting as in [1] for all methods.\\n\\n[1] Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. ICLR 2016.\\n\\n>> Comment #7\\nThanks for pointing this out -- we apologize for misusing \\u201cexploding or vanishing gradients\\u201d and have revised the paper to be accurate. We simply intended to clip the reward to reduce variance, and found it effectively improved training.\"}",
"{\"title\": \"Thanks for the comments but some of your criticisms are invalid\", \"comment\": \"We have fixed the footnote and capitalization problems. Below are replies to other comments.\\n\\n>> Comment #1\\nWe agree vanilla REINFORCE can exhibit high variance. However, as we have elaborated in the text below Eq.2, to reduce the variance and stabilize the training, we have made the following adaptations referring to previous works [1,2]:\\n- Subtract a moving average B (defined in the text) from the reward\\n- Clip the final reward to a given range\\nWe empirically found the two techniques significantly stabilize the controller training.\\nMoreover, AutoLoss is not restricted to REINFORCE, but open to any off-the-shelf policy optimization method, e.g. for large-scale tasks such as NMT, we introduce PPO to replace REINFORCE, and adjust the reward generation scheme accordingly (see the paragraph \\u201cDiscussion\\u201d). We\\u2019ve also revised Appendix A.1 to cover details of how PPO is incorporated. Empirically, with random parameter initialization most experiments manage to converge and give fairly good controllers. \\n\\nAlmost all main results are averaged over multiple runs as explicitly indicated in the main text and the table or figure captions (e.g. see captions of Table.1 and Fig.2). See Fig.2 and Fig.3(R) where vertical bars indicate variances. We have also updated Table.1 to show the variance. \\n\\nWe will release all code and trained models for reproducibility. \\n\\n>> Comment #2\\nWe have provided substantial analysis and visualizations on what AutoLoss has learned in our *initial submission*. Below, we summarize them for your reference:\\n\\n- d-ary regression and MLP classification\\n*See sec 5.1, the 3rd paragraph in P6 for analysis, and Table.1 for comparisons to handcrafted schedules*: we observe AutoLoss optimizes L1 whenever needed during the optimization.
By contrast, linear combination objectives optimize both at each step while handcrafted schedules (e.g. S1-S3) optimize L1 strictly following the given schedule, ignoring the optimization status. We believe AutoLoss manages to detect the potential risk of overfitting using designed features, and combat it by optimizing L1 only when necessary.\\n- GANs\\nPer our observation, AutoLoss gives more flexible schedules than manually designed ones. It can determine when to optimize G or D by being aware of the current optimization status (e.g. how G and D are balanced) using its parametric controller.\\n- NMT\\n*See sec 5.1, the 3rd paragraph in P7 and Fig.3(M)*: we have explicitly visualized in Fig.3(M) the softmax output of a learned controller and explain in text: \\u201c...the controller meta-learns to up-weight the target NMT objective at later phase\\u2026resemble the \\u201cfine-tuning the target task\\u201d strategy...\\u201d.\\n\\n>> Comment #3\\nWe experimented with S>1 and found the improvement marginal. However, a large S requires more task model training steps to perform one PG (or PPO) update, meaning longer overall wallclock time for the controller to converge. We hence use S=1 as it performs satisfactorily. Note that some recent meta-learning literature uses policy gradient with batchsize 1, and reports strong empirical results [3].\\n\\n>> Comment #4\\nWe\\u2019d like to clarify that we have *not* claimed that \\u201cAutoLoss can resolve mode collapse in GANs\\u201d. AutoLoss improves the performance of GANs by enabling an adaptive optimization schedule rather than a pre-fixed one. Our point is better and faster convergence of the model training. In the GAN experiments we *qualitatively* observed the generated images are of satisfactory quality and exhibit no mode collapse. But we never claimed we aim to or can resolve mode collapse.\\n\\n>> Comment #5\\nWe respectfully disagree with this comment. 
The NMT experiments aim to verify that AutoLoss can guide the multi-task optimization toward faster and better convergence on the target task, i.e. our interest is to see how the optimization goes instead of how the MT performs. Held-out PPL is the direct indicator of the quality of convergence, while BLEU evaluates the MT performance. Hence we believe PPL suffices as a metric to evaluate the performance of AutoLoss.\\n\\n>> Comment #6\\nWe acknowledge that there may exist DCGAN implementations that achieve higher IS on CIFAR-10, but note the following facts:\\n- The link verifies in a table that the best official IS (reported in literature) is 6.16 (the number we report).\\n- The self-implemented DCGAN 1:1 baseline used in our paper (see Fig.4(c)) achieves an IS=6.7, higher than 6.16.\\n- Still, AutoLoss-guided DCGAN achieves IS=7, higher than the 6.16 reported in literature, our own implementation, and the result from your link.\\n\\nThanks again for mentioning spectral norm. However, these techniques are *completely orthogonal* to the scope of this paper, where we focus on whether AutoLoss can improve the convergence instead of resolving mode collapse. \\n[1] Device Placement Optimization with Reinforcement Learning. ICML\\u201917\\n[2] Neural Optimizer Search with Reinforcement Learning. ICML\\u201917\\n[3] Efficient Neural Architecture Search via Parameter Sharing. ICML\\u201918\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the valuable and encouraging feedback! Below, please see our replies.\\n\\n>> What are the key limitations of AutoLoss? Did we observe some undesirable behavior of the learned optimization schedule, especially when transfer between different datasets or different models ? More discussions on these questions can be very helpful to further understand the proposed method. \\n\\nThese are indeed good questions. We list several limitations we discovered during the development of AutoLoss:\\n- Bounded transferability\\nWe observe AutoLoss has bounded transferability -- while we successfully transfer a controller across different CNNs, we can hardly transfer a controller trained for CNNs to RNNs. This is slightly different from some related AutoML works, such as in [1], where auto-learned neural optimizers are able to produce decent results on even different families of neural networks. We hypothesize that the optimization behaviors or trajectories of CNNs and RNNs are very different, hence the function mappings from status features to actions are different. We leave it as future work to study where the clear boundary is.\\n- Design white-box features to capture optimization status\\nAnother limitation of AutoLoss is the necessity of designing the feature vector X, which might require some prior knowledge of the task of interest, such as being aware of a rough range of the possible values of validation metrics, etc. In fact, we initially experimented with directly feeding blackbox features (e.g. raw vectors of parameters, gradients, momentum, etc.) into the controller, but found that they empirically contributed little to the prediction, and sometimes hindered transferability (as different models have their parameter or gradient values at different scales).\\n- Non-differentiable optimization\\nMeta-learning discrete schedules involves non-differentiable optimization, which is by nature difficult. 
Therefore, a lot of techniques in addition to vanilla REINFORCE are required to stabilize the training. Please also see our answer to the next question for more details.\\nAs potential future work, we will seek continuous representations of the update schedules and end-to-end training methodologies, as has arisen in recent works [2].\\n\\nWe have added the above discussion to the latest version as Appendix A.9.\\n\\n>> As the problem is formulated as an RL problem, which is well-known for its difficulty in training, did we encounter similar issues? More details in the implementation can be very helpful for reproducibility. \\n>> Any plan for open source?\\n\\nWe acknowledge the difficulties of training controllers using vanilla REINFORCE. During our development of the training algorithm (see Eq.2, the \\u201cdiscussion\\u201d section in Sec.4, and Appendix A.1), we found the vanilla form of the REINFORCE algorithm leads to unstable training. We therefore have made many improvements and adaptations by either referring to existing literature, or depending on the specific tasks. They include:\\n- Subtract from the reward a baseline term, which is a moving average (see section 3, Eq.2)\\n- Reward clipping (see section 3, under Eq.2)\\n- Use different values of T for different tasks (see \\u201cdiscussion\\u201d in section 4)\\n- Use improved training algorithms (e.g. PPO) for more challenging tasks, and slightly adjust reward generation schemes (see \\u201cdiscussion\\u201d in section 4, and Appendix A.1).\\n\\nWe have also revised the submission to disclose more details on how we make these improvements. We will make all code and models trained in this paper available for reproducibility.\\n\\n[1] Neural optimizer search with reinforcement learning. ICML 2017.\\n[2] DARTS: Differentiable Architecture Search. Arxiv 1806.09055.\"}",
"{\"title\": \"Interesting idea. Clear paper.\", \"review\": \"Summary: This paper proposes a meta-learning solution for problems involving optimizing multiple loss values. They use a simple (small MLP), discrete, stochastic controller to control applications of updates among a finite number of different update procedures. This controller is a function of heuristic features derived from the optimization problem, and is optimized using policy gradient either exactly in toy settings or in an online / truncated manner on larger problems. They present results on 4 settings: quadratic regression, MLP classification, GAN, and multi-task NMT. They show promising performance on a number of tasks as well as show the controller's ability to generalize to novel tasks.\\n\\nThis is an interesting method and tackles an impactful problem. The setup and formulation (using PG to meta-optimize a hyper-parameter controller) is not extremely novel (there has been similar work learning hyper-parameter controllers), but the structure, the problem domain, and applications are. The experimental results are thorough, and provide compelling proof that this method works, as well as an exploration of why the method works (analyzing output softmax). Additionally the \\\"transfer to different models\\\" experiment is compelling.\", \"comments_vaguely_in_order_of_importance\": \"1. I am a little surprised that this training strategy works. In the online setting for larger scale problems, your gradients are highly correlated and highly biased. As far as I can tell, you are performing something akin to truncated backprop through time with policy gradients. The bias introduced via this truncation has been studied in great depth in [3] and shown to be harmful. As of now, the greedy nature of the algorithm is hidden across a number of sections (not introduced when presenting the main algorithm). Some comment as to this bias -- or even a suggestion that it might exist -- would be useful. 
As of now, it is implied that the gradient estimator is unbiased.\\n\\n2. Second, even ignoring this bias, the resulting gradients are heavily correlated. Algorithm 1 shows no sign of performing batched updates on \\\\phi or anything to remove these correlations. Despite these concerns, your results seem solid. Nevertheless, further understanding as to this would be useful.\\n\\n3. The structure of the meta-training loop was unclear to me. Algorithm 1 states S=1 for all tasks, while in the body -- the overhead section -- you suggest multiple trainings are required (S>1?).\\n\\n4. If the appendix is correct and learning is done entirely online, I believe the initialization of the meta-parameters would matter greatly -- if the default task performed poorly with a uniform distribution for sampling losses, performance would be horrible. This seems like a limitation of the method if this is the case.\\n\\n5. Clarity: The first half of this paper was easy to follow and clear. The experimental section had a couple of areas that left me confused. In particular:\\n5.1/Figure 1: I think there is an overloaded use of lambda? My understanding is that, as written, lambda is both used in the grid search (Table 1) to find the best loss l_1 and then used in a second location, as a modification of l_2, completely separate from the grid search?\\n\\n6. Validation data / test sets: Throughout this work, it is unclear what / how validation is performed. It seems you perform controller optimization (optimizing phi) on the validation set loss, while also reporting scores on this validation set. This should most likely instead be a 3rd dataset. You have 3 datasets' worth of data for the regression task (it is still unclear, however, what is being used for evaluation), but it doesn't look like this is addressed in the larger scale experiments at all. 
Given the low meta-parameter count of the controller, I don't think this represents a huge risk, and baselines also suffer from this issue (hyper-parameter search on the validation set) so I expect results to be similar. \\n\\n7. Page 4: \\\"When ever applicable, the final reward $$ is clipped to a given range to avoid exploding or vanishing gradients\\\". It is unclear to me how this will avoid these. In particular, the \\\"exploding\\\" will come from the \\\\nabla log p term, not from the reward (unless you have reason to believe the rewards will grow exponentially). Additionally, it is unclear how you will have vanishing rewards given the structure of the learned controller. This clipping will also introduce bias (this is not discussed) and will probably lower variance. This is a trade-off made in a number of RL papers, so it seems reasonable, but not for this reason.\\n\\n8. \\\"Beyond fixed schedules, automatically adjusting the training of G and D remains untacked\\\" -- this is not 100% true. While not a published paper, some early GAN work [2] does contain a dynamic schedule, but you are correct that this family of methods is not commonplace in modern GAN research.\\n\\n9. Related work: While not exactly the same setting, I think [1] is worth looking at. This is quite similar, giving me pause at this comment: \\\"first framework that tries to learn the optimization schedule in a data-driven way\\\". Like this work, they also learn a controller over hyper-parameters (in their case, the learning rate), with RL, using hand-designed features.\\n\\n10. There seem to be a fair number of heuristic choices throughout. Why is IS squared in the reward for GAN training, for example? Why is the scaling term required on all rewards? Having some guiding idea or theory for these choices, or a rationale, would be appreciated.\\n\\n11. Why is PPO introduced? In Algorithm 1, it is unclear how PPO would fit in. More details or an alternative algorithm in the appendix would be useful. 
Why wasn't PPO used on all larger scale models? Does the training / performance of the meta-optimizer (policy gradient vs PPO) matter? I would expect it would. This detail is not discussed in this paper, and some details -- such as the learning rate for the meta-optimizer -- I was unable to find.\\n\\n12. \\\"It is worth noting that all GAN K:1 baselines perform worse than the rest and are skipped in Figure 2, echoing statements (Arjovsky, Gulrajani, Deng) that more updates of G than D might be preferable in GAN training.\\\" I disagree with this statement. The WGAN framework is built upon a loss that can be optimized, and should be optimized, until convergence (the discriminator loss is non-saturating) -- not the reverse (more G steps than D steps) as suggested here. Arjovsky does discuss issues with training D to convergence, but I don't believe there is any exploration into multiple G steps per D step as a solution.\\n\\n13. Reproducibility seems like it would be hard. There are a few parameters (meta-learning rates, meta-optimizers) that I could not find, for example, and there is a lot of complexity.\", \"14\": \"Claims in the paper seem a little bold / overstated. The inception gain is marginal relative to previous methods, and training is slower than other baselines. This is also true of the NMT section -- there, the best baseline model is not even given equal training time! There are highly positive points here, such as requiring less hyperparameter search / fewer model evaluations to find performant models.\\n\\n15. Figure 4a. Consider reformatting the data (maybe a histogram of differences? Or a scatter plot). The current representation is difficult to read / parse.\", \"typos\": \"page 2, \\\"objective term. on GANs, the AutoLoss: Capital o is needed.\", \"page_3\": \"In the Parameter Learning heading, the period is not bolded.\\n\\n[1] Learning step size controllers for robust neural network training. Christian Daniel et al.\\n[2] http://torch.ch/blog/2015/11/13/gan.html\\n[3] Understanding Short-Horizon Bias in Stochastic Meta-Optimization, Wu et al.\\n\\nGiven the positives, and in spite of the negatives, I would recommend accepting this paper as it discusses an interesting and novel approach to controlling multiple loss values.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"official review\", \"review\": \"The authors proposed an AutoLoss controller that can learn to take actions of updating different parameters and using different loss functions.\\n\\nPros\\n1. Propose a unified framework for different loss objectives and parameters.\\n2. An interesting idea in meta learning for learning loss objectives/schedules.\", \"cons\": \"1. The formulation uses REINFORCE, which is often known for its high variance. Are the results averaged across different runs? Can you show the variance? It is hard to understand the results without discussing it. The sample complexity should also be higher than for traditional approaches.\\n2. It is hard to understand what the model has learned compared to a hand-crafted schedule. Is there any analysis other than the results alone?\\n3. Why do you set S=1 in the experiments? What\\u2019s the importance of S?\\n4. I think it is quite surprising that AutoLoss can resolve mode collapse in GANs. I think more analysis is needed to support this claim. \\n5. The evaluation metric of multi-task MT is quite weird. Normally people report BLEU, whereas the authors use PPL. \\n6. According to https://github.com/pfnet-research/chainer-gan-lib, I think the best reported DCGAN result is not 6.16 on CIFAR-10, and people still found that other tricks such as spectral norm are needed to prevent mode collapse.\", \"minor\": \"1. The usage of footnote 2 is incorrect.\\n2. In references, some words should be capitalized properly, such as gan->GAN.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice work. Will the work be open source?\", \"review\": \"This paper addresses a novel variant of AutoML, to automatically learn and generate optimization schedules for iterative alternate optimization problems. The problem is formulated as an RL problem, and comprehensive experiments on four varied applications have demonstrated that the optimization schedule produced can guide the task model to achieve better quality of convergence and better sample efficiency, and that the trained controller is transferable between datasets and models. Overall, the writing is quite clear, the problem is interesting and important, and the results are promising.\", \"some_suggestions\": \"1. What are the key limitations of AutoLoss? Did we observe some undesirable behavior of the learned optimization schedule, especially when transferring between different datasets or different models? More discussion of these questions can be very helpful to further understand the proposed method. \\n\\n2. As the problem is formulated as an RL problem, which is well-known for its difficulty in training, did we encounter similar issues? More details on the implementation can be very helpful for reproducibility. \\n\\n3. Any plan for open source?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SygK6sA5tX | Graph Classification with Geometric Scattering | [
"Feng Gao",
"Guy Wolf",
"Matthew Hirn"
] | One of the most notable contributions of deep learning is the application of convolutional neural networks (ConvNets) to structured signal classification, and in particular image classification. Beyond their impressive performances in supervised learning, the structure of such networks inspired the development of deep filter banks referred to as scattering transforms. These transforms apply a cascade of wavelet transforms and complex modulus operators to extract features that are invariant to group operations and stable to deformations. Furthermore, ConvNets inspired recent advances in geometric deep learning, which aim to generalize these networks to graph data by applying notions from graph signal processing to learn deep graph filter cascades. We further advance these lines of research by proposing a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. We demonstrate the utility of features extracted with this designed deep filter bank in graph classification of biochemistry and social network data (incl. state of the art results in the latter case), and in data exploration, where they enable inference of EC exchange preferences in enzyme evolution. | [
"geometric deep learning",
"graph neural network",
"graph classification",
"scattering"
] | https://openreview.net/pdf?id=SygK6sA5tX | https://openreview.net/forum?id=SygK6sA5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syly99tGlN",
"ryxeDc7zy4",
"H1eadygcRQ",
"B1gnX1xqC7",
"rJxjKP150m",
"BygaT4Jq0Q",
"SkejcEy50m",
"rJek5Xl9n7",
"rJxzUquw2Q",
"BJxeBoLX3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544882822833,
1543809624210,
1543270260821,
1543270179945,
1543268227285,
1543267524535,
1543267474533,
1541174151515,
1541012042134,
1540741944037
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper820/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper820/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper820/Authors"
],
[
"ICLR.cc/2019/Conference/Paper820/Authors"
],
[
"ICLR.cc/2019/Conference/Paper820/Authors"
],
[
"ICLR.cc/2019/Conference/Paper820/Authors"
],
[
"ICLR.cc/2019/Conference/Paper820/Authors"
],
[
"ICLR.cc/2019/Conference/Paper820/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper820/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper820/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"AR1 is concerned about the overlap of this paper with Gama et al., 2018 as well as the lack of theoretical analysis and poor results on the REDDIT-5k and REDDIT-B datasets. AR2 raises the same concerns (lack of clear-cut novelty over Zou & Lerman, 2018, and Gama et al., 2018). AR3 also points to the same issue regarding the lack of theoretical results. The authors admit that Zou and Lerman, 2018, and Gama, 2018, focus on stability results while this submission offers empirical evaluations.\\n\\nUnfortunately, reviewers did not find these arguments convincing. Thus, at this point, the paper cannot be accepted for publication in ICLR. The AC strongly encourages the authors to develop their theoretical 'edge' over this crowded market of GCNs and scattering approaches.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Lack of theoretical novelty voiced as one of main issues.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for the extensive changes of the paper! I appreciate the work that you did, but I am still not convinced about the novelty of the approach as well as its practical benefits. The 'edge' over existing methods does not appear to be sufficiently large to me.\\n\\nWith the new data sets, the difference over existing work is not so large (as far as I understand, it is within the standard error that you calculate); for the REDDIT-5k data set, a previous approach that is geared towards using topological information (https://papers.nips.cc/paper/6761-deep-learning-with-topological-signatures.pdf) outperforms your proposed method. For the REDDIT-B data set, the WL-OA kernel, as you correctly cite it, is on a par with your approach considering its smaller standard deviation.\\n\\nI think this paper would benefit from a large-scale investigation to _really_ drive the point home about the benefits of the methods.\"}",
"{\"title\": \"Part II of our response\", \"comment\": \"Regarding experimental setup/results: we note that in the revised version we added classification with RBF kernel SVM, which outperforms all other methods on two individual datasets (REDDIT B and REDDIT 5K), in addition to outperforming all other methods on average on social network data. This also reduces the dependence of the presentation on individual classifiers, and indicates the scattering feature space is sufficiently rich to enable state of the art classification even without the aggregation of classification accuracy across datasets, which, as we explain in the paper, is intended to assess the universality of our features.\\n\\nRegarding the stability of the transform vis-a-vis the parameter choices, we thought this was a good idea, but amended it slightly. We instead looked at the stability of the classification results in terms of the intrinsic dimension of the scattering coefficients, i.e., if we compute the PCA projection of the scattering coefficients and only keep a number of dimensions that is necessary to capture 90% of the variance of the scattering coefficients, how does this affect the classification performance? We see that, first, according to this measure of dimensionality, the geometric scattering coefficients give intrinsically low dimensional descriptions of the data. It is likely that other graph CNNs do the same, but we again emphasize that the graph convolution filters in these networks are task dependent, whereas the geometric scattering wavelet filters are task independent and more suitable for exploratory data analysis. Furthermore, using only the PCA projections (capturing 90% of the variance of the geometric scattering coefficients), we carried out an SVM classification and found the classification rate does not drastically decrease relative to an SVM classification on the full set of geometric scattering coefficients. 
In particular, the geometric scattering coefficients aggregate useful information into the primary dimensions of variability, and indeed these dimensions describe the majority of class differences.\\n\\nTo further establish the utility of geometric scattering as a unique feature extraction method beyond graph classification, we added new experiments that look into data exploration with it, focusing on the ENZYMES dataset. Here, we indeed show that scattering features enable inference of enzyme commission class exchanges in the evolution of enzymes, which emerge in an unsupervised manner from our scattering feature space, and are validated against established knowledge from [Cuesta et al., 2015]. We note that the revised version of the paper also contains improvements in the classification performances with the addition of SVM based classification, which also serves to demonstrate the independence of the geometric scattering transform itself from specific classifiers, as we now show it with three classifiers (SVM, fully-connected layers, or logistic regression).\\n\\nRegarding the other (more minor) comments: we did our best to address them as part of this revision. \\n\\nWe hope that with these additions we were able to sufficiently improve the quality of the presented results to warrant an update to the reviewer's score.\"}",
"{\"title\": \"Thank you for the detailed and helpful comments! Let us try to respond point by point.\", \"comment\": \"Thank you for the detailed and helpful comments! Let us try to respond point by point here and in the subsequent (part II) post.\\n\\nRegarding the differences and similarities with [Gama, et al, 2018], we agree that the construction presented here is similar to [Gama, et al, 2018], and as the reviewer suggested, we added a new subsection (in section 3) to discuss the relation between our work and that one, since we consider them complementary to each other. We point out that while [Gama, et al, 2018] prove some very nice theoretical properties of their diffusion scattering transform, their numerical experiments do not give much indication as to whether this theory is practically relevant to graph learning tasks (supervised or unsupervised). In this paper we have shown that, indeed, it potentially is - especially with new results that we now added, both for graph classification (using SVM over scattering features - achieving state of the art results on REDDIT B and REDDIT 5K, and in aggregate over social networks) and for data exploration, as we describe below. Furthermore, both [Zou and Lerman, 2018] and [Gama, et al, 2018] focus primarily on stability results, i.e., if two graphs are similar (e.g., of the same class), will the resulting scattering coefficients also be similar? Under certain theoretical frameworks, they prove positive statements along these lines. Some of the new experiments we have added (see below), though, aim at shedding light on the converse of this question, namely: if two graphs are dissimilar (e.g., in different classes), are the geometric scattering coefficients sufficiently different to separate the classes? 
While we don\\u2019t prove any theoretical results along these lines, new numerical experiments that we added in the revised version of the paper are sufficiently positive that they open up this question for further numerical and theoretical work. Therefore, as we mention above, we view this paper as complementary to the work of [Gama, et al, 2018] (as well as [Zou and Lerman, 2018]), as it both fills in missing numerical validation and potentially opens up a new path for theoretical investigation. We also note that even in the classical case, the stability of the scattering transform is established with rigorous mathematical analysis, while the capacity of scattering features to obtain rich representations of signals is established by applications and numerical experiments. We follow a similar approach here.\\n\\nFor the capsule graph neural networks of [Verma and Zhang, 2018], we would argue that the capsule part is not the primary theoretical addition; indeed, capsule in the context presented in our paper and in [Verma and Zhang, 2018] is just another name for statistical moments (and in fact in the updated version of the paper, we are switching to this terminology, while still linking to [Verma and Zhang, 2018]), which people have been computing and using in various contexts for a very long time. It is therefore the features themselves that constitute the most important difference, and here indeed there are several differences. Specifically, [Verma and Zhang, 2018] like other graph CNN methods, learn the graph convolutional filters in a supervised, task driven fashion. Furthermore, they use filters with a fixed scale, which in fact they acknowledge as a shortcoming, but since they are learned they refrain from training larger (multiscale) filters. Rather, they define global input features to their graph convolutional neural network. 
On the other hand, the geometric scattering transform uses wavelet graph filters that adapt to the graph structure, but not the classification task, and they are multiscale.\", \"regarding_the_computational_complexity\": \"the transform consists of a sequence of standard matrix multiplications, absolute value operators, and summations. We point out that since the graph wavelet filters are not learned, the training time is completely determined by the time needed to train the fully connected layers or the SVM, which is not much and we believe is well understood independently of the proposed method here. In particular, this means that, unlike other geometric deep learning methods, the graph convolution matrix multiplications do not need to be repeated numerous times in the training process, but rather are a one time computational cost that is carried out before training. Further, in the added experiments (Sec. 4.2), we also indicate that the scattering features can in fact be used for dimensionality reduction by embedding graph data into a low dimensional Euclidean space without losing much in terms of classification accuracy compared to the full scattering feature space.\\n\\n--- continued in the following post ---\"}",
"{\"title\": \"Thank you for the detailed and helpful comments! Let us try to respond point by point.\", \"comment\": \"Thank you for the detailed and helpful comments! Let us try to respond point by point.\\n\\nRegarding the lack of theoretical results, as we mention in the paper, we focus here on practical and applicable aspects of geometric scattering as an extension and generalization of the Euclidean scattering transform. In particular, we aim to establish the capacity of our proposed scattering features to provide a rich representation of graph data. We note that even in the classical Euclidean case, theoretical results are mostly available only for stability properties (e.g., Lipschitz stability to groups of deformations), while the capacity of the scattering transform to capture rich representations of signals is established via a variety of applications and numerical experiments. Therefore, we follow the same approach here. To better clarify our focus, we added Section 3.2 that discusses these stability and capacity aspects, while positioning our work as complementary to [Zou and Lerman, 2018] and [Gama, et al, 2018] that focused on theoretical results (indeed, both prove very nice stability results, which we expect to also reflect on our construction), but lack thorough numerical experiments. Therefore, we regard a strong part of our contribution as complementary to them, as here we both fill in missing numerical validation and potentially open up a new path for theoretical investigation into the capacity of geometric (or graph) scattering, especially with new results added in the revised version of the paper demonstrating the application of geometric scattering to data exploration. 
\\n\\nRegarding parameter choices, the scattering transform essentially has two configurable parameters: the maximum scale J, which is chosen automatically here based on the diameter of the graphs, and the number of moments Q, which was chosen based on cross validation tuning, per standard practices in tuning hyperparameters in supervised learning tasks. We note that other hyperparameters that relate to the neural network classifier were also chosen in a similar manner, but are not the focus of this paper. Further, in the revised version we added classification with RBF kernel SVM, which achieves state of the art results on social network data (in particular, REDDIT B and REDDIT 5K, but also in aggregate). Therefore, our revised results reduce the dependence of the presented results on hyperparameter tuning, as we do not rely solely on multiple fully connected layers (with associated tuning challenges) to establish the performance of our method on graph classification.\\n\\nRegarding training methodology, we believe there may be a misunderstanding here. As stated and explained in Appendix E (in the revised version), we use standard 10-fold cross validation for classification experiments, following the standard practice in other works on graph classification, which allows for reliable comparison of our results to them. As part of this procedure, each fold in fact trains the classifier on 90% (not 10%) of the data, and tests on the remaining 10%.\", \"regarding_the_p2_comment\": \"we agree that some correlations may theoretically be lost by not mixing features prior to the application of geometric scattering, and perhaps this might be an interesting idea for future work (e.g., in conjunction with random projections or other mixing strategies). However, here we focus on exploring the properties of the scattering transform itself, with emphasis on its capacity to provide rich representations of graphs, even without special preprocessing such as mixing features.
Indeed, with the addition of new results, both in classification (e.g., SVM on scattering features obtaining state of the art results on social networks data) and data exploration (revealing relations between enzyme classes), we believe we provide sufficient indication of the viability of geometric scattering even without early feature mixing.\", \"regarding_the_p4_comments\": \"(1) We revised the caption of Fig. 1 to explain we indeed apply the filter matrices to Diracs on the graph. (2) We agree with the terminology suggestion and revised to use \\u201cmoments\\u201d rather than capsule. (3) When we mention normalized moments, we mean in the standard statistical sense (also referred to sometimes as \\u201cstandardized moments\\u201d), which are also detailed immediately after (mean, variance, skew, and kurtosis). We clarified in the revision that these are \\u201cnormalized (i.e., standardized) moments\\u201d\"}",
"{\"title\": \"Part II of our response - now addressing (a)\", \"comment\": \"For (a), we agree that the construction presented here is similar to those of [Zou and Lerman, 2018] and [Gama et al, 2018], particularly the latter. We point out that while [Zou and Lerman, 2018] and [Gama et al, 2018] prove some very nice theoretical properties of their versions of graph scattering transforms, their numerical experiments do not give much indication as to whether this theory is relevant to practical graph learning tasks. In this paper we have shown that, indeed, they potentially are. Furthermore, both [Zou and Lerman, 2018] and [Gama, et al, 2018] focus primarily on stability results, i.e., if two graphs are similar (e.g., of the same class), will the resulting scattering coefficients also be similar? Under certain theoretical frameworks, they prove positive statements along these lines. Some of the new experiments we have added in, though, aim at shedding light on the converse to this question, namely: if two graphs are dissimilar (e.g., in different classes), are the geometric scattering coefficients sufficiently different to separate the classes? While we don\\u2019t prove any theoretical results along these lines, the new numerical experiments added into the revised version of the paper are sufficiently positive that they open up this question for further numerical and theoretical work. A discussion along these lines was added as the new Section 3.2 to differentiate these stability vs capacity properties and distinguish our focus from the one in these two related works.\\n\\nWe remark that even in the classical case, theoretical results on scattering transforms are mostly available only for stability properties, while the capacity of the scattering transform to capture rich representations of signals is established via a variety of applications and numerical experiments.
We would thus view this paper as complementary to the works of [Zou and Lerman, 2018] and [Gama, et al, 2018], as it both fills in missing numerical validation and potentially opens up a new path for theoretical investigation.\\n\\nWe hope that with these additions we were able to sufficiently improve the quality of the presented results to warrant an update to the reviewer's score.\"}",
"{\"title\": \"Thank you for the helpful comments! Let us respond first to (b) and then (a).\", \"comment\": \"Thank you for the helpful comments! Let us respond first to (b) here, and then to (a) in a subsequent post.\\n\\nFor (b), we have augmented the numerical experiments in the submitted version of the paper with the following additional experiments. \\n\\nIn the revised Section 4.1, rather than using fully connected layers at the backend, we used an SVM classifier with RBF kernel (hyperparameters chosen via cross validation), which improved some of the classification results by a small, but potentially significant, amount. In particular, the geometric scattering followed by SVM achieved state of the art results on two data sets, REDDIT B and REDDIT 5K, compared to the other methods presented in the paper. Further, we now also achieve state of the art results in aggregate on social network data.\\n\\nWe have also investigated geometric scattering in the context of data exploration of biochemistry data, added in the new Section 4.2:\\n\\n- First, we examined the intrinsic dimension of the geometric scattering coefficients by using a simple PCA explained variance test, i.e., by counting the number of principal components needed to capture 90% of the variance. We see that, according to this measure of dimensionality, the geometric scattering coefficients give intrinsically low dimensional descriptions of the data. It is likely that other graph CNNs do the same, but we emphasize that the graph convolution filters in these networks are task dependent, whereas the geometric scattering wavelet filters are task independent and more suitable for data exploration.\\n\\n- Next, using only the PCA projections from the previous bullet (capturing 90% of the variance of the geometric scattering coefficients), we carried out the SVM classification and found the classification rate does not drastically decrease.
In particular, the geometric scattering coefficients aggregate useful information into the primary dimensions of variability, and indeed these dimensions describe the majority of class differences.\\n\\n- We have added a new experiment indicating the degree to which the geometric scattering coefficients separate classes within the datasets, which we hypothesize is the result of systematic structural differences in the graphs, and captures relations between them. We focused here on the ENZYMES dataset, where we show that this hypothesis is at least partially validated, and the geometric scattering coefficients are able to reasonably separate the classes into lower dimensional subspaces of the full feature space, even without any training via supervised learning. \\n\\n- Moreover, we show that the relations between subspaces of enzyme classes (labeled by enzyme commission numbers) here can in fact be used to reveal exchange preferences between classes during enzyme evolution, which we validate by comparing to established knowledge from [Cuesta et al., 2015].\\n\\nWe believe these new results provide a significant improvement, and indicate the added value and contribution of the proposed geometric scattering in graph data analysis. We also remark that the construction itself is not that complicated (at least compared to other geometric deep learning methods). The algorithm itself is merely a graph convolution with multiscale graph wavelet filters, followed by an absolute value nonlinearity, which can be repeated. We emphasize there is no training of these filters, so the training time is completely determined by the time needed to train the fully connected layers or the SVM, which is not much. It additionally eliminates the need to pay considerable attention to the computational complexity of the graph convolution, since it does not need to be repeated numerous times in the training process, but rather is a one time computational cost.
Of course for very large graphs, though, certain considerations should still be taken into account, but this is true for other methods as well.\"}",
"{\"title\": \"Interesting construction but limited novelty\", \"review\": \"The authors propose an advance in geometric deep learning based on a geometric scattering transform using graph wavelets defined in terms of random walks on the graph. The paper is well written, easy to understand also for a not-so-tech audience but nevertheless precise in all the mathematical details.\\nIntro and references are satisfactory, and also the experimental section is sufficiently convincing. However, there are two big issues undermining the overall structure of the manuscript: \\na) the theoretical novelty w.r.t. (Zou & Lerman, 2018) and (Gama, 2018) is partial and rather technical, so the originality of the present manuscript is limited\\nb) the improvement w.r.t. other published methods is rather small, so the performance gain is only partially justified by the quite complex theoretical construction.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper and ideas, a bit low on results maybe\", \"review\": \"This paper generalizes the scattering transform to graphs. It defines wavelets and scattering coefficients on graph signals. The experimental section describes their use in classification tasks with comparisons with recent methods. It seems scattering performs less well than SOTA methods, but has the advantage of not requiring any training, so it is potentially a good candidate for low-data-regime applications. Interesting and original paper and ideas being developed, but might be a tiny bit weak in terms of results, both theoretical and experimental?\\n\\nThere are not many theoretical results (mostly definitions and hints that some of the results from the Euclidean case might generalize without formal investigation).\\n\\nRegarding the results, in particular Table 3, given that you use particular hyperparameters J and Q for each dataset, this is arguably a bit of architectural overfitting? Results would be more convincing IMO if obtained with a single set of hyperparameters. What was the procedure to come up with those parameters?\\n\\nRegarding the methodology for training the classifier, I am not familiar with these datasets, but using just 1/10 of the data to train the classifier seems a bit extreme? \\nHow about training each on a 90% random subset of the training set and averaging? Or just the whole training subset? That would still be fine in the sense that none of the classifiers would have seen the test set?\\n\\np2 '~it naturally extends to multiple signals by concatenating their scattering features~'\", \"p4_figure_1\": \"Not very clear what those visualizations are. \\\\Psi_j is supposedly an n x n matrix, so is this \\\\Psi_j applied to two different Diracs on the graph? It would be good to clarify exactly what is being plotted in the legend.\\n\\nseems to be the biggest limitation of the proposed approach.
By not mixing different features early, one might lose the high-frequency correlations between different signals defined on a single graph.\\n\\nP4. IMO 'capsule' is not such a great name / already used in ML by Hinton's capsules etc... Why not simply 'moments' or 'statistics'?\\n\\n'We can replace (3) with normalized moments of x' ... how exactly do you normalize?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A good idea, but the delineation to other work needs improvement\", \"review\": [\"# Summary of the paper\", \"Inspired by the success of deep filter banks, this paper presents a designed deep filter bank for graphs that is based on random walks. More precisely, the technique uses lazy random walks, expressed in terms of the graph Laplacian, and re-frames this in terms of graph signal processing. Similarly to wavelets, graph node features are calculated at different scales and subsequently summed in order to remain invariant under permutations. Several experiments on graph data sets demonstrate the performance of the new technique.\", \"# Review\", \"This paper is written very well and explains its method with high clarity. The principal issues I see are as follows:\", \"The originality of the contributions is not clear\", \"Missing theoretical discussion\", \"The experimental setup is terse and slightly confusing\", \"Concerning the originality of the paper, the differences to Gama et al., 'Diffusion Scattering Transforms on Graphs' are not made clear. Cursory reading of this publication shows a large degree of similarity. Both of the papers make use of diffusion geometry, but Gama et al. _also_ define a multi-scale filter bank, similar to Eq. 4 and 5. The paper needs to position itself more clearly vis-\\u00e0-vis this other publication. Is the present approach to be seen more as an application of the theory that was developed in the paper by Gama et al.? What are the key similarities and differences? In terms of space, this could be added to Section 3.2, which could be rephrased as a generic 'Differences to other methods' section and has to be slightly condensed in any case (see my suggestions below). Another publication by Zou & Lerman, 'Graph Convolutional Neural Networks via Scattering', is also cited as an inspiration, but here the differences are larger in my understanding and do not necessitate further justification. 
Last, the publication 'Graph Capsule Convolutional Neural Networks' by Verma & Zhang is also cited for the definition of 'scattering capsules'. Again, cursory reading of the publication shows that this approach is similar to the presented one; the only difference being which features are used for the definition of capsules. I recommend referring to the invariants as 'capsules' and linking it back to Verma & Zhang so that the provenance of the terminology is clear.\", \"Concerning the theoretical part of the paper, I miss a discussion of the complexity of the approach. Such a discussion does not have to be long, but in particular since the paper mentions the applicability of scattering transforms for transfer learning (and also remarks about the universality of them in Section 4), some space should be devoted to theoretical considerations (memory complexity, runtime complexity). This would strengthen the paper a lot, in particular in light of the complexity of other approaches! Furthermore, an additional experiment about the stability of scattering transforms appears warranted. While I applaud the experimental description in the paper (number of scales, how the maximum scale is chosen, ...), an additional proof or experiment in the appendix should deal with the stability. Let's assume that for extremely large graphs, I am content with 'almost-but-not-quite-as-good' classification performance. Is it possible to achieve this by limiting the number of scales? How much do the results depend on the 'right' choice here?\", \"Concerning the experimental setup, I think that the way (average) accuracies are reported at present is slightly misleading. The paper even remarks about this in footnote 2. While I understand the need to demonstrate the universality of these features, I think that the current setup is not optimal for this.
I would recommend (in addition to reporting accuracies) a transfer learning setup rather in which the beneficial properties of the new method can be better explored. More precisely, the claim from Section 4, 4th paragraph ('Since the scattering transform...') needs to be further explored. This appears to be a unique feature of the new method. The current experimental setup does not exploit it. As a side-note, I realize that this might sound like a standard request for 'show more experiments', but I think the paper would be more impactful if it contained one scenario in which its benefits over other approaches are clear.\", \"# Suggestions for improvement\", \"The paper flows extremely well and it is clear that care has been taken to ensure that everything can be understood. I liked the discussion of invariance properties in particular. There are only a few minor things that can be improved:\", \"'covariant' and 'equivariant', while common in (graph) signal processing, could be briefly explained to increase accessibility and impact\", \"'order' and 'layer' are not used consistently: in the caption of Figure 2a, the term 'order' is used, but for Eq. 4 and 5, for example, the term 'layer' is employed. Since 'layer' is more reminiscent of a DNN, I would suggest to use 'order' throughout the paper, because it meshes better with the way the scattering invariants are defined.\", \"the notation $Sx$ is slightly overloaded; in Figure 2a, for example, it is not clear at first that the individual cascades are supposed to form a *set*; this is only explained at the end of Section 3.1; to make matters more consistent, the figure should be updated and the combination of individual cascades should be made clear\", \"In Eq. 5, the bars of the absolute value are not set correctly; the absolute value should cover $\\\\psi_j x(v_i)$ and not $(v_i)$ itself.\", \"minor 'gripe': $\\\\psi^{(J)}$ is defined as a set in Eq. 
2, but it is treated as a matrix or an operator (and also referred to as such); this should be more consistent\", \"The discussion of the aggregation of multiple statistics in Section 3.2 appears to be somewhat redundant in light of the discussion for Eq. 4 and Eq. 5 in the preceding section\", \"in the appendix, more details about the training of the FCN should be added; all other parts of the experiments are described in sufficient detail, but the training process requires additional information about learning rates etc.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HyMuaiAqY7 | Deli-Fisher GAN: Stable and Efficient Image Generation With Structured Latent Generative Space | [
"Boli Fang",
"Chuck Jia",
"Miao Jiang",
"Dhawal Chaturvedi"
] | Generative Adversarial Networks (GANs) are powerful tools for realistic image generation. However, a major drawback of GANs is that they are especially hard to train, often requiring large amounts of data and long training time. In this paper we propose the Deli-Fisher GAN, a GAN that generates photo-realistic images by enforcing structure on the latent generative space using similar approaches in \cite{deligan}. The structure of the latent space we consider in this paper is modeled as a mixture of Gaussians, whose parameters are learned in the training process. Furthermore, to improve stability and efficiency, we use the Fisher Integral Probability Metric as the divergence measure in our GAN model, instead of the Jensen-Shannon divergence. We show by experiments that the Deli-Fisher GAN performs better than DCGAN, WGAN, and the Fisher GAN as measured by inception score. | [
"Generative Adversarial Networks",
"Structured Latent Space",
"Stable Training"
] | https://openreview.net/pdf?id=HyMuaiAqY7 | https://openreview.net/forum?id=HyMuaiAqY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgUztnNkV",
"ByxlwZX80m",
"Byxo1A_wa7",
"S1xar3L4p7",
"ryesmzVRh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543977229764,
1543020888315,
1542061539170,
1541856325215,
1541452323316
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper819/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper819/Authors"
],
[
"ICLR.cc/2019/Conference/Paper819/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper819/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper819/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper combines two recently proposed ideas for GAN training: Fisher integral probability metrics, and the Deli-GAN. As the reviewers have pointed out, the writing is somewhat haphazard, and it's hard to identify the key contributions, why the proposed method is expected to help, and so on. The experiments are rather minimal: a single experiment comparing Inception scores to previous models on CIFAR; Inception scores are not a great measure, and the experiments don't yield much insight into where the improvement comes from. No author response was given. I don't think this paper is ready for publication in ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"not ready for publication at ICLR\"}",
"{\"title\": \"Acknowledgement for reviewers and possible future plans\", \"comment\": \"Thank you so much for your pertinent reviews and suggestions. We do realize the problems in the writing (time was a bit rushed before submission) and will make an effort to improve the experiments for submission to future venues. We will also try to modify the structure of the paper and add comparisons to the existing models.\"}",
"{\"title\": \"Trivial extension of previous paper; weak experiments.\", \"review\": \"The paper proposes to combine two ideas from previous publications, Fisher-GAN and Deli-GAN, i.e., using a mixture noise (Deli) with the Fisher IPM metric for training a GAN.\\n\\nThe extension of the previous work is trivial and the combination of the two ideas lacks any motivation. The experimental results are also weak. It is certainly below the bar of acceptance.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Straightforward combination of prior works with no theory and weak experiments\", \"review\": \"This paper is a straightforward combination of two previous works, Deli-GAN and Fisher GAN. Deli-GAN has a mixture prior distribution in the latent space, while Fisher GAN uses the Fisher IPM instead of JSD as the objective.\\nInception score on CIFAR-10 is used to empirically measure quality.\", \"cons\": \"The exposition of the ideas is lacking. What's wrong with Deli-GAN? What is this paper trying to accomplish by incorporating the Fisher metric?\\nThere is no theoretical justification, while the empirical results are sparse and unconvincing.\\nWriting quality could be improved throughout the paper in terms of both structure and language.\\n\\nIn summary, this paper is not of the quality that should be accepted by ICLR.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper presents a small modification to a previous GAN framework, and does not seem to be complete\", \"review\": [\"Summary\", \"This paper presents a minor improvement over the previous Deli-GAN framework (Gurumurthy et al CVPR'17). Specifically, the work proposes to use the Fisher Integral Probability Metric as the divergence measure in the GAN model, instead of the Jensen-Shannon divergence. It shows limited (seemingly positive) results using this new distance measure rather than traditional ones. Beyond that, I didn't see any other contribution from this paper.\", \"Suggestions\", \"The paper is poorly written and seems to be a rushed submission to ICLR. For example:\", \"a lot of grammatical errors throughout the paper\", \"only 6.5 pages out of 8 pages are utilized\", \"the introduction is not convincing -- what problem are you going to address? any summary of your methodology? why is it expected to outperform existing frameworks? what distinguishes your work from existing works? and what are your main results? I cannot conclude after reading the intro.\", \"the results are very minor and not convincing. It seems the authors conducted a very limited set of experiments and concluded that the proposed Deli-Fisher GAN is better. If you claim that the proposed framework can generate better images, at least the framework should be compared to the latest state-of-the-art GANs (e.g. spectral GANs, etc.)\", \"The writing is not polished.\", \"Overall, the paper is far from ready to be submitted to ICLR, let alone accepted. I would recommend the authors conduct more experiments and comparisons and do a better job before submitting it to future conferences.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
]
} |
|
Hyxu6oAqYX | An Energy-Based Framework for Arbitrary Label Noise Correction | [
"Jaspreet Sahota",
"Divya Shanmugam",
"Janahan Ramanan",
"Sepehr Eghbali",
"Marcus Brubaker"
] | We propose an energy-based framework for correcting mislabelled training examples in the context of binary classification. While existing work addresses random and class-dependent label noise, we focus on feature dependent label noise, which is ubiquitous in real-world data and difficult to model. Two elements distinguish our approach from others: 1) instead of relying on the original feature space, we employ an autoencoder to learn a discriminative representation and 2) we introduce an energy-based formalism for the label correction problem. We prove that a discriminative representation can be learned by training a generative model using a loss function comprised of the difference of energies corresponding to each class. The learned energy value for each training instance is compared to the original training labels and contradictions between energy assignment and training label are used to correct labels. We validate our method across eight datasets, spanning synthetic and realistic settings, and demonstrate the technique's state-of-the-art label correction performance. Furthermore, we derive analytical expressions to show the effect of label noise on the gradients of empirical risk. | [
"label noise",
"feature dependent noise",
"label correction",
"unsupervised machine learning",
"semi-supervised machine learning"
] | https://openreview.net/pdf?id=Hyxu6oAqYX | https://openreview.net/forum?id=Hyxu6oAqYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJlYUPF-gV",
"Hyg1sDgfR7",
"BklviCdgRX",
"SkllgyZWaX",
"S1eT2iLan7",
"Bkegv0DOh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544816464684,
1542748054558,
1542651550876,
1541635815714,
1541397429230,
1541074520185
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper818/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper818/Authors"
],
[
"ICLR.cc/2019/Conference/Paper818/Authors"
],
[
"ICLR.cc/2019/Conference/Paper818/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper818/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper818/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors present an algorithm for label noise correction when the label error is a function of the input features.\\n\\nStrengths\\n- Well motivated problem and a well written paper.\\n\\nWeaknesses\\n- The reviewers raised concerns about theoretical guarantees on generalization; it is not clear why energy based auto-encoder / contrastive divergence would be a good measure of label accuracy especially when the feature distribution has high variance, and when there are not enough clean examples to model this distribution correctly.\\n- Evaluations are all on toy-like tasks with small training sets, which makes it harder to gauge how well the techniques work for real-world tasks.\\n- It\\u2019s not clear how well the algorithm can be extended to multi-class problems. The authors suggested 1-vs-all, but have no experiments or results to support the claim.\\n\\nThe authors tried to address some of the concerns raised by the reviewers in the rebuttal, e.g., how to address unavailability of correctly labeled data to train an auto-encoder. But other concerns remain. Therefore, the recommendation is to reject the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"not enough theoretical guarantees, evaluations are insufficient\"}",
"{\"title\": \"Authors' response\", \"comment\": \"The authors thank the reviewer for recognizing that the problem under consideration is well motivated and that the paper is well-written and presents good experimental results.\", \"reviewer\": \"\\u201cIt would be good to vary the noise parameters and show how robust the proposed approach is in dealing with different levels of noise.\\u201d\\n\\nOur experimentation with different noise parameters may be obscured by our presentation - we experiment with different feature-dependent noise models. In each of the tables, the \\u2018col\\u2019 variable represents the feature the label noise is dependent on. Table 1 represents experiments under the linear feature dependent noise model. Table 2 represents experiments under the quadratic feature dependent noise model. We vary the noise parameters as indicated on the second column of these tables. The different noise models and parameters tested in these experiments merit further explanation. \\n\\nThe authors agree with the reviewer that this would be an interesting experiment; however, due to space restrictions, the authors can use the existing experiments to comment on the general expected trend resulting from increasing noise. This will be added to the next version of the paper.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"The authors thank the reviewer for recognizing that the presented results are encouraging and that the paper summarizes label noise in an acceptable way. We also agree with the reviewer that appropriate assumptions are made in setting up the problem of label noise correction.\\n\\nTo address some of the reviewer's remarks, we would like to highlight that the existing method can use a small dataset that has clean labels to facilitate the label correction process. However, as shown in tables 1, 2 and 3 (see column AE learned), one does not need to start with clean labels. Using the method of Ding et al. (2018), clean labels can be inferred on a subset of the data before the bias correction step is applied.\", \"reviewer\": \"\\\"The experiments are on toy/small-scale datasets with controlled label noise (but the way to control the noise is not clear). To show the effectiveness of the proposed methods, experiments need to be done on larger-scale datasets with truly realistic unknown noise, Establishing state-of-the-art classification accuracy using a large-scale dataset with noisy labels can serve as strong support for this paper.\\\"\\n\\nTowards this end, we include the Arrhythmia dataset results (Table 1, last row) and the MNIST dataset with label noise as studied by Ren et al. (2018). The Arrhythmia dataset consists of 50000 samples and the problem posed - of algorithmically assigned labels - remains relevant to real world tasks such as web annotation. This dataset contains real label noise as discussed in the paper. Next, we compare our method with another state-of-the-art noise robust algorithm (Ren et al. (2018)) on a relatively large dataset: MNIST.\"}",
"{\"title\": \"Need more theoretical guarantee\", \"review\": \"This paper addresses the Type III label noise correction problem, in which the labeling noise depends on the features. They assume that we can obtain a small amount of cleanly labeled data, and use an energy-based semi-supervised learning approach to bootstrap the relabeling process.\", \"pros\": [\"Problem is well-motivated with a reasonably good overview of this research area.\", \"Paper is generally well-written with enough details to follow and good experimental result discussion.\"], \"cons\": [\"The energy-based approach based on contrastive divergence is pretty straightforwardly defined, but it would make the paper much stronger if the authors could provide more analysis of this approach and/or a theoretical guarantee on generalization.\", \"It is not obvious to me how to extend the proposed approach to multi-class problems.\", \"It would be beneficial to test the approach on more real-world problems on top of the toy-data-like binary classification problems.\"], \"minor_clarification_questions\": [\"What amount of cleanly labeled data is required for the proposed approach to work? 
The authors have some pre-selected percentage in experiments, but it is non-trivial to establish that for different applications.\", \"Related to the previous comment, how much clean data was used in the AE (known) columns in all experiments?\", \"In Fig 2, between the two subgraphs, why does the left one show positive thetas while the right one shows negative thetas?\", \"Were the hyperparameters a & b chosen from cross-validation or from the std of the E terms in all experiment results?\", \"In Table 1, for the Breast Cancer dataset, how can AE (known) be better than the upper-bound LR-C?\", \"It would be good to vary the noise parameters and show how robust the proposed approach is in dealing with different levels of noise.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Need improvements\", \"review\": \"Need improvements\\n\\n[Summary]\\n\\nThis paper addresses the problem of correcting noisy labels for binary classification. It assumes the existence of fully clean data, trains an energy-based autoencoder using a contrastive learning objective, and uses the estimated energy to determine if a training label is corrupted or not. \\n\\n[Pros]\\n\\n1.\\tThe paper summarizes different types of label noise in a sensible way. And, it is reasonable to bootstrap the learning process with a small fully clean dataset.\\n2.\\tThe proposed method shows encouraging results under controlled noise.\\n\\n[Cons]\\n\\n1.\\tIt is not well-motivated why a contrastive objective or an energy-based autoencoder can be a good solution for label correction. In particular, the connections are not established between the discriminative feature learned by an energy-based model and the label correctness. The proposed method looks more like a binary classifier with a better-regularized structure, but still, it is unclear why an energy-based autoencoder is a good choice. \\n2.\\tThe proposed method is limited to binary classification, and there is no obvious way to extend it to multiple classes. \\n3.\\tThe experiments are on toy/small-scale datasets with controlled label noise (but the way to control the noise is not clear). To show the effectiveness of the proposed methods, experiments need to be done on larger-scale datasets with truly realistic unknown noise. Establishing state-of-the-art classification accuracy using a large-scale dataset with noisy labels can serve as strong support for this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good intuition with weak theoretical support\", \"review\": \"This submission proposes an energy-based method to correct mislabelled examples. Intuitively, the authors claim that contradictions between energy and noisy labels can be used to identify label noise. To make the idea reliable, the authors propose to compute the energy by using learned (commonly shared) features. Experiment results look good. The presentation is also clear. My concerns are as follows:\\n\\n(1) By learning discriminative features and then correcting the label noise, the authors have implicitly assumed that the label noise strongly depends on the discriminative features. This assumption may be strong, as most labels are provided according to the original instance (features). In the experiment section, it is very unclear how the noise is generated, e.g., how to select x_i according to variance? How to set the threshold? What is \\\"col\\\" in Tables 1 and 2? What are the overall label noise rates? Note that if the threshold for the variance is large, it means that the noise is only added to the discriminative features, making the experiment too ad-hoc. \\n\\n(2) The theory of why the residual energy can be used to identify label noise is elusive. How to set the threshold for identifying label noise with a theoretical guarantee is also unclear. Two recent papers on learning with instance-dependent label noise are surprisingly missing, e.g., Menon, Aditya Krishna, Brendan Van Rooyen, and Nagarajan Natarajan. \\\"Learning from binary labels with instance-dependent corruption.\\\" arXiv preprint arXiv:1605.00751 (2016), and Cheng, Jiacheng, et al. \\\"Learning with Bounded Instance- and Label-dependent Label Noise.\\\" arXiv preprint arXiv:1709.03768 (2017). The latter one proposes algorithms to identify label noise with theoretical guarantees. 
The authors should compare the proposed method with them.\\n\\n(3) There are no methods provided for choosing the values of the hyperparameters. Most of them are empirically set, which is not convincing.\\n\\n(4) The authors reported that, with discriminative features learned by employing noisy data, the proposed method also provides good performance. It would be interesting to see how corrected labels will recursively help better learn the discriminative features. Illustrative figures are preferred.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
rJed6j0cKX | Analyzing Inverse Problems with Invertible Neural Networks | [
"Lynton Ardizzone",
"Jakob Kruse",
"Carsten Rother",
"Ullrich Köthe"
] | For many applications, in particular in natural science, the task is to
determine hidden system parameters from a set of measurements. Often,
the forward process from parameter- to measurement-space is well-defined,
whereas the inverse problem is ambiguous: multiple parameter sets can
result in the same measurement. To fully characterize this ambiguity, the full
posterior parameter distribution, conditioned on an observed measurement,
has to be determined. We argue that a particular class of neural networks
is well suited for this task – so-called Invertible Neural Networks (INNs).
Unlike classical neural networks, which attempt to solve the ambiguous
inverse problem directly, INNs focus on learning the forward process, using
additional latent output variables to capture the information otherwise
lost. Due to invertibility, a model of the corresponding inverse process is
learned implicitly. Given a specific measurement and the distribution of
the latent variables, the inverse pass of the INN provides the full posterior
over parameter space. We prove theoretically and verify experimentally, on
artificial data and real-world problems from medicine and astrophysics, that
INNs are a powerful analysis tool to find multi-modalities in parameter space,
uncover parameter correlations, and identify unrecoverable parameters. | [
"Inverse problems",
"Neural Networks",
"Uncertainty",
"Invertible Neural Networks"
] | https://openreview.net/pdf?id=rJed6j0cKX | https://openreview.net/forum?id=rJed6j0cKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyeJoLVexN",
"HylH7ZIu0X",
"Byg4H46S0Q",
"H1l9qhhSAm",
"Skx7PaEXCX",
"r1xEx_DbA7",
"HkehGGd36Q",
"H1gn4WuhTX",
"H1ltbW_naQ",
"H1gIl-_n6m",
"H1gxtiJq6m",
"ryeDzcZ867",
"SylKBy6Eam",
"rygRUCn4am",
"HJxI8nnV6m",
"SkgZ2oh4aX",
"HJgl48AlTX",
"rJeG6QIc3X",
"rJxS7urch7",
"BJgZjv4c37",
"Hkx3D2TaoX",
"H1xOO9K9sX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544730263010,
1543164189322,
1542997052274,
1542995089974,
1542831450521,
1542711276169,
1542386196241,
1542385971927,
1542385921344,
1542385902409,
1542220663951,
1541966350647,
1541881664603,
1541881430184,
1541880910379,
1541880744898,
1541625383880,
1541198777658,
1541195804865,
1541191577211,
1540377700448,
1540164207554
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper817/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer1"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper817/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper817/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a framework for using invertible neural networks to study inverse problems, e.g., recovering hidden states or parameters of a system from measurements. This is an important and well-motivated topic, and the solution proposed is novel although somewhat incremental. The paper is generally well written. Some theoretical analysis is provided, giving conditions under which the proposed approach recovers the true posterior. Empirically, the approach is tested on synthetic data and real-world problems from medicine and astronomy, where it is shown to compare favorably to ABC and conditional VAEs. Adding additional baselines (Bayesian MCMC and Stein methods) would be good. There are some potential issues regarding MMD scalability to high-dimensional spaces, but overall the paper makes a solid contribution and all the reviewers agree it should be accepted for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper\"}",
"{\"title\": \"Update\", \"comment\": \"I have raised my rating by one point due to the additional experiments and increased clarity of the revised manuscript.\"}",
"{\"title\": \"Thanks for the interesting discussion.\", \"comment\": \"This clarifies most of my points.\"}",
"{\"title\": \"Advantages of INNs\", \"comment\": \"> \\u201cit seems you suggest there is no inherent advantage of your method compared to related approaches\\u201d\\n\\nOur previous comment, \\u201cDifferences [between INNs and unrestricted architectures] are subtle\\u201d, does not refer to INNs applied to inverse problems, but only to the differences in expressive power between these architectures in general.\\nWe were hereby explicitly addressing your comment \\u201cI was, in fact, referring to practical limitations, related to insufficient expressiveness of the model\\u201d.\\n\\n> \\u201cCan you please either confirm or explain in what sense the proposed INNs have a fundamental theoretical advantage over competing conditional generative models with respect to learning high quality (i.e. asymptotically correct) posteriors?\\u201d\", \"we_see_the_following_fundamental_advantages_of_our_inn_based_method\": [\"One can learn the forward process and get the inverse for free (in contrast to e.g. cGAN).\", \"Posteriors are not restricted to a particular parametric form (in contrast to classical variational methods).\", \"Posteriors can be efficiently computed (in contrast to e.g. ABC).\", \"Training converges to the true solution (in contrast to dropout inference).\", \"One can efficiently calculate the Jacobian of the mapping (which we do not currently take advantage of).\", \"To the best of our knowledge, there are no established approaches with the same properties, beyond the ones discussed in the paper, where INNs are superior.\", \"Your question whether there are alternative ways to achieve the same goal, and which method works best, is very interesting and will be the focus of another paper after publication of our present results. Our discussions with you helped us identify promising candidates for such comparisons, but we do not consider these alternatives as established methods for our problem setting, so that confident conclusions cannot yet be drawn. 
We as a community are just starting to learn how to make best use of INNs, and their trade-offs relative to traditional networks need to be investigated further. Overall, all experiments performed to date were highly encouraging.\"]}",
"{\"title\": \"Question\", \"comment\": \"Thank you very much for adding the IAF baseline.\\n\\nI have another question about the scope of the method. Can you elaborate on the following and related statements, it seems you suggest there is no inherent advantage of your method compared to related approaches:\\n\\n\\\"Contrary to intuitive expectations, we (and others) found, that the expressive power of INNs relative to unconstrained networks of comparable size, is not substantially reduced. Differences are subtle, and looking at single experiments in isolation may be misleading. ...\\\"\\n\\nCan you please either confirm or explain in what sense the proposed INNs have a fundamental theoretical advantage over competing conditional generative models with respect to learning high quality (i.e. asymptotically correct) posteriors?\"}",
"{\"comment\": \"Great thanks\", \"title\": \"Thanks\"}",
"{\"title\": \"Revised Version\", \"comment\": \"We have uploaded a revised version of the paper, and added your paper to the related work section.\\nThank you for the suggestion.\"}",
"{\"title\": \"Revised Version\", \"comment\": \"We have uploaded a revised version of the paper; thank you again for your suggestions.\\nThe changes and additions are highlighted in red font for convenience.\\nPlease also note that by adding these changes, our page count increased by half a page beyond the recommended 8 pages.\\nIf this presents a problem, we can attempt to shorten the paper accordingly.\"}",
"{\"title\": \"Revised Version\", \"comment\": \"We have uploaded a revised version of the paper; thank you again for your suggestions.\\nThe changes and additions are highlighted in red font for convenience.\\nPlease also note that by adding these changes, our page count increased by half a page beyond the recommended 8 pages.\\nIf this presents a problem, we can attempt to shorten the paper accordingly.\"}",
"{\"title\": \"Revised Version\", \"comment\": \"We have uploaded a revised version of the paper; thank you again for your suggestions.\\nThe changes and additions are highlighted in red font for convenience.\\nPlease also note that by adding these changes, our page count increased by half a page beyond the recommended 8 pages.\\nIf this presents a problem, we can attempt to shorten the paper accordingly.\"}",
"{\"title\": \"IAF Baseline\", \"comment\": \"As you suggested, we incorporated the IAF from [1] into our cVAE. That is, we inserted the IAF subnetwork between the existing encoder and decoder, but didn\\u2019t use the more complex decoder from [1], as it did not improve results and destabilized the training.\\nIntroducing IAF improved results measurably over plain cVAE, at the cost of a larger network. Now, the performance is on par with the INN on the 8 Gaussian mode experiment, but a noticeable gap remains for the inverse kinematics and medical experiments.\\nQualitatively, cVAE-IAF exhibits the same shortcomings as the cVAE, but with reduced magnitude.\", \"the_measurements_from_table_1_for_the_cvae_iaf_model_are_as_follows\": \"\", \"calibration_error\": \"1.40%\", \"map_error_s_o2\": \"0.050 \\u00b1 0.002\", \"map_error_all\": \"0.74 \\u00b1 0.03\", \"map_resimulation_error\": \"0.313 \\u00b1 0.008\\n\\nSampled posteriors for each experiment, comparing INN, cVAE and cVAE-IAF:\", \"https\": \"//i.imgur.com/s2PECtl.jpg\\n\\nWe will upload the revised paper later this week.\\n\\nContrary to intuitive expectations, we (and others) found, that the expressive power of INNs relative to unconstrained networks of comparable size, is not substantially reduced. Differences are subtle, and looking at single experiments in isolation may be misleading.\", \"definitive_statements_should_be_based_on_systematic_comparisons_along_various_degrees_of_freedom\": \"- INNs (trained bi-directionally) vs. auto-encoders (trained for cycle consistency), each with several subtypes and network sizes\\n- different unsupervised losses (adversarial, MMD, maximum likelihood, information theoretical)\\n- different applications and problem sizes\\n\\nIdeally, the experiments should include more traditional Bayesian methods for the prediction of posteriors as well, e.g. 
accelerated MCMC and Stein point sampling.\\nIt will also be interesting to investigate if novel training or prediction schemes enabled by the INNs\\u2019 tractable Jacobians can compensate for their potentially reduced expressive power.\\n\\nWe are currently setting up such experiments and will report about our findings in a future paper. In the present paper, we would like to keep the focus on demonstrating that high-quality posteriors can be learned with bi-directional training as facilitated by INNs.\\n\\n[1] Kingma, Diederik P., et al. \\\"Improved variational inference with inverse autoregressive flow.\\\" Advances in Neural Information Processing Systems. 2016.\"}",
"{\"title\": \"Thank you for the detailed answers\", \"comment\": \"Thank you for clarifying some misconceptions from my side w.r.t. the astronomy experiment and the resulting misinterpretation of your statements about the posterior distribution.\\n\\nI was, in fact, referring to practical limitations, related to insufficient expressiveness of the model, that may not make it powerful enough to transform arbitrary densities into the factorized normal space. This is also why a better baseline would be a model that is closer to state-of-the-art density models, like IAF or other normalizing flow extensions to vanilla VAEs. \\n\\nIn summary, you are trying to improve conditional density estimation and it is not clear why your proposed method should be the method of choice for this if not compared properly to other state-of-the-art conditional density estimation approaches.\\n\\nCan you please provide your perspective on this and would you be able to add an additional experiment with a more competitive baseline? \\n\\nIt would also be great if you upload a revision to incorporate all the changes you mentioned, so I can better judge the current state and clarity of the manuscript.\"}",
"{\"title\": \"Re: Invertible network with observations for posterior probability of complex input distributions with a theoretical valid bidirectional training scheme.\", \"comment\": \"Thank you very much for your time, and your constructive comments, we are looking forward to further discussions!\\nWe answer your questions and concerns in the following. \\n\\n> \\\"The advantage of INN is not crystal clear to me versus other generative methods such as GAN and VAE.\\\"\\n\\nIt is indeed possible to adapt other network types to the task of predicting conditional posteriors. We are currently setting up experiments for detailed analysis of the respective advantages and disadvantages and will report about these results in a future paper. In the present paper, we focus on demonstrating that high-quality posteriors can actually be learned using bi-directional training as facilitated by INNs.\\n\\nConcerning the comments/questions:\\n1.\\n> \\\"could the authors elaborate on the comparison against cGAN\\\"\\n\\ncGAN generators are at an inherent disadvantage relative to INNs, because they never see ground-truth pairs (x,y) directly -- they are only informed about them indirectly via discriminator gradients. This is not a problem for simple relationships, e.g. between images x and attributes y, and cGANs work very well there. However, it makes learning of complicated forward processes much harder and may cause the resulting posteriors to be inaccurate. Moreover, INNs are forced to embed every training point x somewhere in the latent space, whereas cGAN generators may fail to allocate latent space for some x, because this is never explicitly penalized. This can lead to mode collapse and insufficient diversity.\\n\\n> \\\"Can cGAN be used to estimate the density of X (posterior or not)?\\\"\\n\\ncGANs can in principle do this by choosing a generator architecture with tractable Jacobian (using e.g. 
coupling layers or autoregressive flow), but we are not aware of published results about this possibility.\\n\\n2.\\n> \\\"For the bidirectional training, did the ratios of the losses (L_z, L_y, L_x) have to be changed, or the iterations of forward/backward trainings have to be changed (e.g., 1 forward, 1 backward vs. 2 forward, 1 backward)?\\\"\\n\\nYes, the weights of the losses are considered as hyperparameters, because the magnitude of MMD-based losses depends on the chosen kernel function. Hyperparameter optimization suggested an up-weighting of MMD-based losses by a factor of 5, to give them approximately equal impact as the supervised loss.\\nFor the iterations, we accumulated gradients over one forward and one inverse network execution before each parameter update. We also tried alternating parameter updates after each forward and backward pass, which resulted in equal accuracy, but was a bit slower. We did not experiment with other ratios than 1:1.\\n\\n3. \\n> \\\"Is this to effectively increase the intermediate network dimensions?\\\"\", \"this_is_precisely_the_reason\": \"It improves the representational power of the INN, as mentioned in Sec. 3.2 and discussed in our response to reviewer 1.\\nAt present, we find this is only necessary for the toy problem in Fig. 
2.\\n\\n> \\\"It seems that there needs some way to enforce them to be zero to ensure that the propagation happens only among the entries belonging to the variables of interests (x, y and z).\\\"\\n\\nThis is correct.\", \"we_explicitly_prevent_information_from_being_hidden_in_the_padding_dimensions_in_the_following_way\": \"A squared loss ensures that the amplitudes are close to zero.\\nIn an additional inverse training pass, we overwrite the padding dimensions with noise of the same amplitude, and minimize their effect via a reconstruction loss.\\nWe will add this to the relevant paragraph in the paper.\\n\\n4.\\n> \\\"I am curious if this model could succeed on higher dimensional data\\\"\\n\\nWorks such as [1, 2, 3] (also cited in our paper) have shown that the coupling layer architecture in general works well with images. These works use maximum likelihood training, i.e. exploit the tractable Jacobians to maximize the likelihood of the data embedding in latent space. To scale-up our approach, we may need to replace MMD loss with maximum likelihood as well, and first experiments with this show promising results, see\", \"https\": \"//i.imgur.com/ft09Pk9.png .\\n\\n[1] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv:1605.08803, 2016.\\n[2] Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv:1807.03039, 2018\\n[3] Schirrmeister, Robin Tibor, et al. \\\"Generative Reversible Networks.\\\" arXiv:1806.01610, 2018\"}",
"{\"title\": \"Re: Constraining models to enable approximate posterior inference\", \"comment\": \"Thank you very much for your time, and your constructive comments, we are looking forward to further discussions!\\nWe answer your questions and concerns in the following. \\n\\n> \\\"However, I do not understand how are the *discrete* output y is handled.\\\"\\n\\nFor this toy problem, we represent labels y by standard one-hot encoding, and we directly regress one-hot vectors using squared loss instead of softmax. This allows us to input one-hot vectors into the inverted network to generate conditional x-samples.\\n\\n> \\\"I\\u2019m not sure I understand what we are supposed to learn from the astrophysics experiments.\\\"\\n\\nWe included this experiment to demonstrate that we are able to find multi-modal posteriors in a second real-world setting relevant to natural science.\\n\\n> \\\"INN outperforms other methods [...] over some metrics such as the error on parameters recovery (Table 1), calibration error, and does indeed have a approximate posterior which seems to correspond to the ABC solution better\\\"\\n\\nWe indeed consider the calibration errors (reported in Sec. 4.2 (\\u201cQuantitative results\\u201d) and Appendix Sec. 6) the most meaningful of these comparisons, because they directly measure the quality of the estimated posterior distributions, and INNs have a clear lead here.\\nWe will add these numbers to Table 1 to emphasize their importance.\\n\\n> \\\"However, the real-world experiments are not necessarily the easiest to read.\\\"\\n\\nWe understand, although we tried our best to condense the complicated nature of these applications. For the astrophysics setting, we provide more information in the appendix, Sec. 
5, and for the medical application we refer to [1] for full details.\\n\\n[1] Wirkert et al.: Robust near real-time estimation of physiological parameters from megapixel multispectral images with inverse monte carlo and random forest regression. International Journal of Computer Assisted Radiology and Surgery, 2016.\\n(https://link.springer.com/article/10.1007/s11548-016-1376-5 )\"}",
"{\"title\": \"Re: An inspiring idea with weaknesses on theoretical and experimental side (Part 2)\", \"comment\": \"> \\\"The authors claim that specifying a prior/posterior distribution in density modeling is complicated and typically the chosen distributions are too simplistic. This argument is, of course, valid, but they also have the same problem and specify z to be factorial Gaussian. So the same \\\"hen-and-egg\\\" problem applies here.\\\"\\n\\nWe respectfully disagree with this statement.\\nWe only argue that restrictions to the posterior p(x|y) are problematic. In contrast, restricting the latent distribution p(z) to a Gaussian poses no serious limitation, thanks to a theorem in [1]: This paper proves under mild assumptions that any distribution over vectors u can be nonlinearly transformed into a distribution over vectors v, whose elements v_i are independently uniformly distributed in [0,1]^m (\\u201cnonlinear independent component analysis\\u201d). \\nThe uniform distribution can easily be transformed to a Gaussian (or any other desired prior) with standard transformations.\\nTherefore, as long as the neural network is powerful enough and assumptions are fulfilled, it can always realize the transformation from Gaussian p(z) to any arbitrary p(x|y) at any desired accuracy. \\nNote that these properties are not specific to our INN setup, but apply to all models of \\u201cnormalizing flow\\u201d-type.\\n\\n[1] A. Hyv\\u00e4rinen and P. Pajunen. Nonlinear Independent Component Analysis: Existence and Uniqueness results. Neural Networks 12(3): 429--439, 1999.\\n(https://www.cs.helsinki.fi/u/ahyvarin/papers/NN99.pdf )\\n\\n> \\\"MMD does not easily scale to high-dimensional problems, this is not a problem here as all artificial problems considered are very low-dimensional. 
But when applying the proposed algorithm in realistic settings, one will likely need extensions of MMD, like used in MMD GANs, which would introduce min/max games on both sides of the network.\\\"\\n\\nOur paper intentionally includes two real-world examples in order to demonstrate that there are plenty of low-dimensional applications, which will directly profit from our MMD-based solution. \\nScaling MMD to high dimensions is indeed not easy, and other losses (maximum likelihood, adversarial) may be superior.\\nThe following figure shows preliminary results of a forthcoming paper on this subject, where we train using maximum likelihood in conjunction with a supervised classification loss, to enable conditional generation by INNs:\", \"https\": \"//i.imgur.com/ft09Pk9.png\\n\\n> \\\"- Some basic citations on normalizing flows seem to be missing, e.g. [2,3].\\\"\\n\\nThank you for pointing these out. It is fascinating to see that some key ideas were already invented 25 years ago. We will add these references.\\n\\n> \\\"- How does one guarantee that padded regions are actually zero on output when padding input with zeros? Small variance in those dimensions could potentially code important information. Is this considered as part of y or z? \\\"\", \"we_explicitly_prevent_information_from_being_hidden_in_the_padding_dimensions_in_the_following_way\": \"* A squared loss ensures that the amplitudes are close to zero.\\n* In an additional inverse training pass, we overwrite the padding dimensions with noise of the same amplitude, and minimize their effect via a reconstruction loss.\\nNote that zero padding of the input is only necessary for the toy problem in Fig. 
2, because the width of the resulting network would be too small otherwise.\\nWe consider the padding part of y, as it has a supervised loss.\\nWe will add this to the relevant paragraph in the paper.\\n\\n> \\\"- The authors require the existence of inverse and set this equal to bijectivity, but injectivity would be sufficient.\\\"\\n\\nWe think that bijectivity is required for bi-directional training to be well-defined.\\nSince the coupling architecture is bijective by construction, the distinction has no practical implications for our method.\\n\\n> \\\"- The authors mention that z is conditioned on y, but in their notation, the conditional density p(z|y) never shows up explicitly. It should be made clear, that p(z)=p(z|y) is a consequence of their additional MMD penalty and only holds at convergence.\\\"\\n\\nYou are right, we will make this clear in our revised text.\\n\\n> \\\"[...] I am a bit worried that if the proposed model is making some strange mistakes on artificial toy-data, how well it will perform on challenging realistic problems.\\\"\\n\\nWe feel that this statement might be due to the misunderstandings discussed in the answers above.\\nThere is no indication, quantitatively or otherwise, that our model is behaving incorrectly or unexpectedly in any of the experiments.\\nIf this does not answer your concerns, we will be happy to provide further clarifications and additional data.\"}",
"{\"title\": \"Re: An inspiring idea with weaknesses on theoretical and experimental side\", \"comment\": \"Thank you very much for your time, and your constructive comments, we are looking forward to further discussions!\\nWe answer your questions and concerns in the following. \\nNote that we split the response into two comments, due to the 5000 character limit.\\n\\n> \\\"The inverse kinematics experiment shows that the posterior collapses from large uncertainty to almost a point for the right-most joint. This seems like a negative result to me.\\\"\\n\\nThis comment made us realize that the description/illustration of experiment 2 may not have been clear enough.\\nThe rightmost circle marker is not a joint, but the end effector (\\u2018hand\\u2019) of the arm.\\nThe conditioning variable y is the position of this hand.\\nTherefore, having the hand located on or near the gray cross is the desired outcome of the experiment, not a failure.\\nThe thick contour line does not represent the posterior p(x|y), but indicates the re-simulation error: It is the 97%-confidence region of the model\\u2019s end-point distribution p(y|y_target) = integral p(y|x) p(x|y_target) dx and should be as small as possible (ideally, a delta(y - y_target) is desired).\\nThe ABC result (leftmost panel) is essentially the ground truth posterior. \\nWe will replace Fig. 3 with the following improved illustration, to clarify the setup and show what the arm\\u2019s degrees of freedom are:\", \"https\": \"//i.imgur.com/nNMdwPA.png\\n\\n> \\\"The medical experiment also seems rather limited, because if I understand correctly the tissue data is artificial and the proposed INN only outperforms competitors (despite ABC) on two out of three measurements. 
\\\"\", \"concerning_the_artificial_nature_of_the_medical_experiment\": \"Medical researchers must resort to simulation, because so far there is no way to create real training data from living tissue.\\nThese simulations are sufficiently realistic that they are currently used in clinical trials during actual surgery, albeit only with point estimate methods. \\nThe medical scientists consider our approach a major leap forward, because our full posteriors allow them to quantify uncertainty reliably and efficiently for the first time, especially regarding possible ambiguities arising from multi-modal posteriors.\", \"concerning_the_performance_measures\": \"To compare posteriors, the calibration errors reported in Sec. 4.2 (\\u201cQuantitative results\\u201d) and Appendix Sec. 6 are the most meaningful performance metrics, and the INN has a clear lead here.\\nWe will add these numbers to Table 1 to emphasize their importance.\\nThe numbers in the current Table 1 refer to MAP estimate accuracy, where alternative methods may be competitive, even if their estimated posteriors or uncertainties are inferior.\\n\\n> \\\"In the astronomy experiment figure 4 shows strong correlations between some of the z variables, the authors claim that this is a feature of their method, but I argue that they should not be present if training with the factorial prior was successful. It would be good to show the correlation between y and z variables as well if they show high dependencies, learning was not very successful.\\\"\\n\\nThere seems to be a misunderstanding, the paper does not show the correlation matrix of the latent z variables. \\nInstead, the matrices in Figs. 4 and 5 (right) show the correlation of the x-variables for some fixed y.\\nIt is a distinguishing feature of our method that we can uncover correlations in the posterior p(x|y), which are not visible in the marginals p(x_i|y) or a mean-field approximation.\\nWe verify correctness of the correlations in Fig. 
4 via comparison to (expensive) ABC.\\n\\n> \\\"The authors also seem to suggest that they are the first to train flow-based models in forward and inverse direction, but this has already been done in the flow-GAN paper [1]. \\\"\\n\\nThank you for pointing out that their \\u2018hybrid\\u2019 strategy is equivalent to bi-directional training. We will change the related work and Sec. 3.3, to properly appreciate their pioneering contributions. Note that we did not make any claims to be the first to use bi-directional training.\"}",
"{\"comment\": \"Hi,\\n\\ninteresting paper.\\n\\nJust wanted a small reference from our work, that also uses a loss at both ends of the network, albeit only heuristically motivated:\\nTraining generative reversible networks, ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models, https://arxiv.org/abs/1806.01610\\n\\nMaybe you can find it interesting since you also use a loss at both ends of the network.\\n\\nBest,\\nRobin\", \"title\": \"Related work using loss at both ends of invertible network\"}",
"{\"title\": \"An inspiring idea with weaknesses on theoretical and experimental side\", \"review\": \"1) Summary\\n\\nThe authors propose to use invertible networks to solve ambiguous inverse problems. This is done by training one group of Real-NVP output variables supervised while training the other group via maximum likelihood under a Gaussian prior as done in the standard Real-NVP. Further, the authors suggest to not only train the forward model, but also the inverse model with an MMD critic, similar to previous works that used a more flexible GAN critic [1].\\n\\n2) Clarity\\n\\nThe paper is easy to understand and the main idea is well-motivated. \\n\\n3) Significance\\n\\nThe main contribution of this work is of conceptual nature and illustrates how invertible networks are a promising framework for many inverse problems. I really like the main idea and think it is inspiring. However, the experiments and technical contributions are rather limited. \\n\\nTheoretical / ML contribution: \\n\\nUsing an MMD to factorize groups of latent variables is well-known and combining flow-based maximum likelihood training in the forward model with GAN-like objectives in the inverse model has been done before as well.\", \"experimental_contribution\": \"I am not fully convinced by the experiments. \\nThe inverse kinematics experiment shows that the posterior collapses from large uncertainty to almost a point for the right-most joint. This seems like a negative result to me. \\nThe medical experiment also seems rather limited, because if I understand correctly the tissue data is artificial and the proposed INN only outperforms competitors (despite ABC) on two out of three measurements. Further, the authors should have explained the experimental setup of the tissue experiment better, as it is not a standard task in the field. 
\\nIn the astronomy experiment figure 4 shows strong correlations between some of the z variables, the authors claim that this is a feature of their method, but I argue that they should not be present if training with the factorial prior was successful. It would be good to show the correlation between y and z variables as well if they show high dependencies, learning was not very successful. Simply eyeballing the shape of the posterior is not enough to conclude independence. \\n\\nIn summary, even though interesting, the significance of the experimental results is hard to judge and I am a bit worried that if the proposed model is making some strange mistakes on artificial toy-data, how well it will perform on challenging realistic problems. \\n\\n4) Main Concerns\\n\\nThe authors claim that specifying a prior/posterior distribution in density modeling is complicated and typically the chosen distributions are too simplistic. This argument is, of course, valid, but they also have the same problem and specify z to be factorial Gaussian. So the same \\\"hen-and-egg\\\" problem applies here.\\n\\nThe authors also seem to suggest that they are the first to train flow-based models in forward and inverse direction, but this has already been done in the flow-GAN paper [1].\\n\\nMMD does not easily scale to high-dimensional problems, this is not a problem here as all artificial problems considered are very low-dimensional. But when applying the proposed algorithm in realistic settings, one will likely need extensions of MMD, like used in MMD GANs, which would introduce min/max games on both sides of the network. This will likely be hard to train and constitutes a fundamental limitation of the approach that needs to be discussed.\\n\\n5) Minor Concerns\\n\\n- Some basic citations on normalizing flows seem to be missing, e.g. [2,3].\\n- How does one guarantee that padded regions are actually zero on output when padding input with zeros? 
Small variance in those dimensions could potentially code important information. Is this considered as part of y or z?\\n- The authors require the existence of inverse and set this equal to bijectivity, but injectivity would be sufficient.\\n- The authors mention that z is conditioned on y, but in their notation, the conditional density p(z|y) never shows up explicitly. It should be made clear, that p(z)=p(z|y) is a consequence of their additional MMD penalty and only holds at convergence.\\n\\n[1] Grover et al., \\\"Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models\\\"\\n[2] Tabak and Turner, \\\"Density estimation by dual ascent of the log-likelihood\\\"\\n[3] Deco and Brauer, \\\"Nonlinear higher-order statistical decorrelation by volume-conserving neural architectures\\\"\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Invertible network with observations for posterior probability of complex input distributions with a theoretical valid bidirectional training scheme.\", \"review\": \"While the invertible model structure itself is essentially the same as Real-NVP, the use of observation variables in the framework with theoretically sound bidirectional training for safe use of the seemingly na\\u00efve inclusion of y (i.e., y and z can be independent). Its abilities to model the posterior distributions of the inputs are supported by both quantitative and qualitative experiments. The demonstration on practical examples is a plus.\\n\\nThe advantage of INN, however, is not crystal clear to me versus other generative methods such as GAN and VAE. This is an interesting paper overall, so I am looking forward for further discussions.\", \"pros\": \"1.\\tExtensive analyses of the possibility of modeling posterior distributions with an INN have been shown. Detailed experiment setups are provided in the appendix.\\n\\n2.\\tThe theoretical guarantee (with some assumptions) of the true posterior might be beneficial in practice for relatively low-dimensional or less complex tasks.\\n\\nComments/Questions:\\n1.\\tFrom the generative model point of view, could the authors elaborate on the comparison against cGAN (aside from the descriptions in Appendix 2)? It is quoted \\u201ccGAN\\u2026often lack satisfactory diversity in practice\\u201d. Also, can cGAN be used estimate the density of X (posterior or not)?\\n\\n2.\\tFor the bidirectional training, did the ratios of the losses (L_z, L_y, L_x) have to be changed, or the iterations of forward/backward trainings have to be changed (e.g., 1 forward, 1 backward vs. 2 forward, 1 backward)? This question comes from my observation that the nature of the losses, especially for L_y vs. L_y,L_x (i.e., SL vs. 
USL) seems to be different.\\n\\n3.\\t\\u201cwe find it advantageous to pad both the in- and output of the network with equal number of zeros\\u201d: Is this to effectively increase the intermediate network dimensions? Also, does this imply that for both the forward and inverse processes those zero-padded entries always come out to be zero? It seems that there needs to be some way to enforce them to be zero, to ensure that the propagation happens only among the entries belonging to the variables of interest (x, y and z).\\n\\n4.\\tIt seems that most of the experiments are done on relatively low-dimensional data. This is not necessarily a drawback; I am curious whether this model could succeed on higher-dimensional data (e.g., images), especially with the observation y.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Constraining models to enable approximate posterior inference\", \"review\": \"The authors propose in this paper an approach for learning models with tractable approximate posterior inference. The paper is well motivated (fast and accurate posterior inference) and the construction of the solutions (invertible architecture, appending vectors to input and output, choice of cost function) well described. From my understanding, it seems this method is also to be compatible with other methods of approximate Bayesian Computation (ABC).\", \"concerning_the_experimental_section\": [\"The Mixture of Gaussians experiment is a good illustration of how the choice of cost functions influences the solution. However, I do not understand how are the *discrete* output y is handled. Is it indeed a discrete output (problem with lack of differentiability)? Softwax probability? Other modelling choice?\", \"The inverse kinematics is an interesting illustration of the potential advantage of this method over conditional VAE and how close it is to ABC which can be reasonably computed for this problem.\", \"For the medical application, INN outperforms other methods (except sometimes for ABC, which is far more expensive, or direct predictor, which doesn\\u2019t provide uncertainty estimates) over some metrics such as the error on parameters recovery (Table 1), calibration error, and does indeed have a approximate posterior which seems to correspond to the ABC solution better. I\\u2019m not sure I understand what we are supposed to learn from the astrophysics experiments.\", \"The method proposed and the general problem it aims at tackling seem interesting enough, the toy experiments demonstrates well the advantage of the method. 
However, the real-world experiments are not necessarily the easiest to read.\"], \"edit\": \"the concerns were mostly addressed in the revision.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Re: Several questions about claims, text clarifications\", \"comment\": [\"Thank you for your interest and your comment. We address your questions in order:\", \"We are not sure what is meant by this question. We simply concatenate y and z into a single vector, and compute the derivatives with respect to this.\", \"We use u and v to generically denote the in- and output of each coupling block. For instance, u = x for the first coupling block, and v = [y,z] for the last.\", \"This is correct. However, as illustrated in the image below, each coupling block consists of two affine transformations. The first of these has an upper triangular Jacobian, and the second has a lower triangular Jacobian. The argument concerning the triangular Jacobians applies to each affine transformation separately. A more in-depth look at the Jacobians of affine coupling layers can be found in Dinh et al. (https://openreview.net/forum?id=HkpbnH9lx Sec. 3.2 and 3.3).\"], \"schematic_illustration_of_coupling_block\": \"\", \"https\": [\"//i.imgur.com/XdccxeA.png\", \"As far as we know, we are the first to apply loss functions on both ends of the same network. Our ablations in Fig. 2 and Table 1 show that the method works best when making full use of that. On the practical side, we perform a parameter update once gradients from all loss terms have been accumulated -- an approach also known from GAN training. In our experiments, we found that alternating forward and inverse parameter updates did not affect training results, but increased training time by ~5%.\", \"L_z is defined as the MMD between the network outputs q(y, z), and the target distribution p(y, z). In our case, y and z are explicitly independent in the target distribution: p(y, z) = p(y)p(z). When the MMD converges to zero, q is necessarily equal to p, therefore the y and z outputs are asymptotically independent. 
At present, we do not explicitly differentiate between residual dependency of y and z, and other types of mismatch between the distributions in the case of non-zero loss.\", \"The network architecture depends on two problem characteristics: Problem dimensionality dictates the width of the layers, and the complexity of the forward process we wish to learn determines the required depth. We did a coarse grid search to roughly determine the smallest network needed for each application. We will supply ablation studies showing the effect of a larger or smaller number of coupling layers for each of our applications in the following days.\", \"This is true, the influence of L_x is felt on finite training sets. We meant to say that it plays a smaller role in Table 1 than e.g. in Fig. 2. We will correct our wording in the relevant sections.\"]}",
"{\"comment\": [\"Hi,\", \"the work has few interesting extensions to invertible networks. However, there are some questions raised when studying it:\", \"In eq. 3 the authors express the determinant of the jacobian, but it is not clear what partitioning they\\u2019ve done to y, z or to x space.\", \"The variable u seems self-defined, what is the relationship with the x, y, z in the previous section?\", \"In eq 4, v2 is a function of v1 (which depends on u1) so how come the partial derivative dv2 / du1 = 0? (i.e. how come we end up in a triangular jacobian)\", \"The forward and backward iterations (sec. 3.3) are not mentioned in similar works. Could the authors share their experience and or some experimental results of how those help?\", \"The authors mention that Lz enforces y and z to be independent. Is there any proof of that? Or did you measure it somehow in the test results?\", \"An ablation study justifying all the implementation choices would help. For instance about different architectures of their model, e.g. it seems quite confusing how many invertible blocks are required for similar dimensionality problems. How were those discovered by the authors?\", \"Also, the authors mention that Lx contributes marginally, but Table 1 shows that without Lx, the results are worse than all the external compared methods.\"], \"title\": \"Several questions about claims, text clarifications\"}"
]
} |
|
H1gupiC5KQ | The wisdom of the crowd: reliable deep reinforcement learning through ensembles of Q-functions | [
"Daniel Elliott",
"Charles Anderson"
] | Reinforcement learning agents learn by exploring the environment and then exploiting what they have learned.
This frees the human trainers from having to know the preferred action or intrinsic value of each encountered state.
The cost of this freedom is that reinforcement learning is slower and more unstable than supervised learning.
We explore the possibility that ensemble methods can remedy these shortcomings and do so by investigating a novel technique which harnesses the wisdom of the crowds by bagging Q-function approximator estimates.
Our results show that the proposed approach improves performance on all three tasks and with all reinforcement learning approaches attempted.
We are able to demonstrate that this is a direct result of the increased stability of the action portion of the state-action-value function used by Q-learning to select actions and by policy gradient methods to train the policy.
| [
"reinforcement learning",
"ensembles",
"deep learning",
"neural network"
] | https://openreview.net/pdf?id=H1gupiC5KQ | https://openreview.net/forum?id=H1gupiC5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgEj32geV",
"rylFGfkfTX",
"rygPQvCc37",
"rJxyZpEH37"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544764571706,
1541693968866,
1541232414864,
1540865270882
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper815/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper815/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper815/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper815/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper suggests using an ensemble of Q functions for Q-learning. This idea is related to bootstrapped DQN and more recent work on distributional RL and quantile regression in RL. Given the similarity, a comparison against these approaches (or a subset of those) is necessary. The experiments are limited to very simple environment (e.g. swing-up and cart-pole). The paper in its current form does not pass the bar for acceptance at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The paper can be improved\"}",
"{\"title\": \"Not enough novelty\", \"review\": \"This paper proposes a cute idea as suggesting ensembles of Q-function approximations rather than a singular DQN.\\n\\nHowever, at the core of it, this boils down to previously studied methods in the literature, one of which also is not cited here: \\n\\n@inproceedings{osband2016deep,\\n title={Deep exploration via bootstrapped DQN},\\n author={Osband, Ian and Blundell, Charles and Pritzel, Alexander and Van Roy, Benjamin},\\n booktitle={Advances in neural information processing systems},\\n pages={4026--4034},\\n year={2016}\\n}\\n\\nExperiments provided in this paper compares with only the weak baseline of single DQN, however, it fails to compare other similar ideas in the literature such as the above paper. Hence, this paper lacks enough novelty for publication, and it is not clear from the experiments that the specific method proposed in this paper is better than others in the SOTA.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea while the experiments are not enough.\", \"review\": \"This paper proposes the deep reinforcement learning with ensembles of Q-functions. Its main idea is updating multiple Q-functions, instead of one, with independently sampled experience replay memory, then take the action selected by the ensemble. Experimental results demonstrate that the proposed method can achieve better performance than non-ensemble one under the same training steps, and the decision space can also be stabilized.\\n\\nThis paper is well-written. The main ideas and claims are clearly expressed. Using ensembles of Q-function can naturally reduce the variance of decisions, so it can speed up the training procedure for certain tasks. This idea is simple and works well. The main contribution is it provides a way to reduce the number of interactions with the environment. My main concern about the paper is the time cost. Since the method requires updating multiple Q-functions, it may cost much more time for each RL time step, so I\\u2019m not sure whether the ensemble method can outperform the non-ensemble one within the same time period. This problem is important for practical usage. However, the authors didn\\u2019t show these results in the paper.\", \"minor_things\": \"+The main idea is described too sketchily. I think more examples, such as in section 8.1, should be put in the main text.\\n+Page6 Line2, duplicated \\u2018the\\u2019.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"review report\", \"review\": \"This paper introduces an ensemble version of Deep RL by bagging Q-function approximation estimates. In the experiments, the performance of the proposed work is compared to the baseline, single DQN. In spite of the contribution, this paper has a critical issue.\\n\\nIt has been extensively studied in the literature that ensemble DQN could lead to better performance than a single DQN. See the seminal work by Osband et al. (2016). The authors did not cite this paper, not to say a long list of recent works who have cited this seminal work. This indicates that the authors fail to conduct a serious literature review. In addition, more comprehensive experiments are required to compare the proposed work with the state-of-the-art ensemble DQN methods.\\n\\nOsband et al. (2016), Deep Exploration via Bootstrapped DQN. NIPS.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1gOpsCctm | Learning Finite State Representations of Recurrent Policy Networks | [
"Anurag Koul",
"Alan Fern",
"Sam Greydanus"
] | Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present results of this approach on synthetic environments and six Atari games. The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability. | [
"recurrent neural networks",
"finite state machine",
"quantization",
"interpretability",
"autoencoder",
"moore machine",
"reinforcement learning",
"imitation learning",
"representation",
"Atari",
"Tomita"
] | https://openreview.net/pdf?id=S1gOpsCctm | https://openreview.net/forum?id=S1gOpsCctm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1eQgdnxx4",
"Skx4XkYBA7",
"Bkl17B4Yp7",
"rkeFP2BDp7",
"S1xiusCLT7",
"B1lSGQhLaQ",
"HyxpmBrBTX",
"HkgFJXeEp7",
"H1ea2Me4pX",
"r1g9dMeNa7",
"Skg3Cr8_3m",
"HJgliFSL3Q",
"BJglR0elhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544763370807,
1542979356023,
1542173975068,
1542048864577,
1542019954566,
1542009612767,
1541915941027,
1541829345389,
1541829300553,
1541829234173,
1541068244476,
1540934039783,
1540521672148
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper814/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper814/Authors"
],
[
"ICLR.cc/2019/Conference/Paper814/Authors"
],
[
"ICLR.cc/2019/Conference/Paper814/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper814/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper814/Authors"
],
[
"ICLR.cc/2019/Conference/Paper814/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper814/Authors"
],
[
"ICLR.cc/2019/Conference/Paper814/Authors"
],
[
"ICLR.cc/2019/Conference/Paper814/Authors"
],
[
"ICLR.cc/2019/Conference/Paper814/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper814/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper814/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresses the problem of interpreting recurrent neural networks by quantizing their states an mapping them onto a Moore Machine. The paper presents some interesting results on reinforcement learning and other tasks. I believe the experiments could have been more informative if the proposed technique was compared against a simple quantization baseline (e.g. based on k-means) so that one can get a better understanding of the difficulty of these task.\\n\\nThis paper is clearly above the acceptance threshold at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}",
"{\"title\": \"Changes log\", \"comment\": \"Thanks to all reviewers for their comments and suggestions.\", \"we_have_made_slight_changes_in_the_paper\": \"[Reviewer 1] - Cited Rule Extraction Work in Related Work Section. \\n\\t[Reviewer 1] - Added DQN, A3C scores in Table 3 for policy performance comparison with trained Atari policies.\\n\\t[Reviewer 2] - Corrected \\\"Grammar\\\" spelling in Appendix.\"}",
"{\"title\": \"Response to \\\"More Comments\\\"\", \"comment\": \"Regarding:\\n\\n\\\"It's interesting that training the quantization is faster than the original model, I didn't expect it to be so.\\\"\\n\\nThere are two likely reasons for this. \\n\\nFirst, supervised learning is generally easier than reinforcement learning. For example, training an RNN for Atari games can take several hours with lots of parallelism, or many hours without parallelism. Rather, given a trained RNN, the training of quantized autoencoders (QBNs) is effectively a supervised problem (minimize reconstruction error) and then fine-tuning is also a supervised problem (mimic the original RNN). \\n\\nSecond, learning an RNN from scratch for challenging problems, whether it is supervised learning or reinforcement learning, is significantly more challenging than just learning the quantized autoencoders (which avoids BP through time) and then fine-tuning (starts from a good place in parameter space). \\n\\nRegarding, \\n\\n\\\"This could reveal if the RNN remembers more than there is to remember or alternatively if it ignores certain parts of the true state space which might be mostly unnecessary.\\\"\\n\\nIn Atari, the learned MMs are certainly remembering much less than the full state of the game. The extreme case is Pong, where, as described in the paper, the MM does not use any real memory (see \\\"Understanding Memory Use\\\" on pg. 9). Table 3 also shows that most of the time the number of MM states is less than 10 and at most 33 in our experiments. So this would translate to about 5 bits of memory compared to the 128 bit Atari RAM. \\n\\nRegarding, \\n\\n\\\"design a simple POMDP where the optimal strategy requires counting a real-valued number\\\"\\n\\nFor problems where significant counting, or more generally, value accumulation is required, then Moore Machines are probably not the best type of representation to extract. 
In such cases, models such as Petri Nets might be more appropriate. \\n\\nIn general, there are a variety of qualitatively different ways that memory can be used in a recurrent system. We expect that there will be an interesting line of research focused on understanding the primary usage classes and developing corresponding extraction approaches (e.g. Petri Nets). It may be the case, that the QBN-insertion approach here can be a schema for such developments (<your favorite structure>-insertion). \\n\\nRegarding, \\n\\n\\\"make me change my score from 6 to 7 is if the authors can convince me that their method could be used to inform hyperparameter search for RNNs\\\"\\n\\nNaturally, we would love to say something useful here, but are not exactly sure what the reviewer has in mind. \\n\\nOur extraction approach is applied after an RNN has been learned. So in that sense, it would not directly inform hyperparameter search for RNNs if applied directly. \\n\\nWe have observed in some cases that when a trained RNN R1 does not generalize as well as another trained RNN R2, that the MM extracted for R2 is more compact than that for R1. For example, see Grammar #5 in Table 2. When using 8 versus 16 memory bits we get 96 versus 100 percent accuracy. The corresponding minimized MMs have 115 versus 4 states. \\n\\nThis provides a bit of evidence that given two RNNs that perform similarly on validation data, we might prefer to use the one that results in a more compact MM. This makes some intuitive sense, but is at best a hypothesis at this point.\"}",
"{\"title\": \"More comments\", \"comment\": \"It's interesting that training the quantization is faster than the original model, I didn't expect it to be so.\\n\\nIt is true that solving Atari games from RAM might require resources you do not have. Alternatively, you could simply run your already trained agents, collect RAM from those trajectories, and then perform a similar analysis from the RAM states instead of the quantized hidden states. This could reveal if the RNN remembers more than there is to remember or alternatively if it ignores certain parts of the true state space which might be mostly unnecessary.\\n\\nI think one simple experiment that could be done w.r.t. a remark by Reviewer3 is to design a simple POMDP where the optimal strategy requires counting a real-valued number. In such a case I assume your method would fail, but if it doesn't, that might be even more interesting.\\n\\nIn that respect, one thing that would make me change my score from 6 to 7 is if the authors can convince me that their method could be used to inform hyperparameter search for RNNs. Is this something you have observed empirically while building experiments for this paper?\"}",
"{\"title\": \"QBN and fine-tuning are effective ways to make the method scalable\", \"comment\": \"Thanks for pointing out the effectiveness of QBN insertion and fine-tuning. I realize there is a bigger contribution than I thought before. I will change my score from 6 to 7.\"}",
"{\"title\": \"Quantizing RNNs for complex games like Atari is not as simple as the reviewer suggests\", \"comment\": \"The reviewer states:\\n \\n\\\"The classical binary FSMs can be easily adapted to the multi-class classification version. \\u2026.. There is no essential difference between this paper and the large number of literatures on extracting FSMs from RNNs.\\\"\\n \\nThe reviewer's criticism is that there are simple extensions to prior FSM learning techniques that could be used to achieve our results, even the Atari results.\\n \\nThis is a vague hypothesis that requires a careful argument. We give a technical analysis of the hypothesis at the end of the response by detailing our unsuccessful experience with \\u201csimple extensions\\u201d. First we give a higher level response. \\n \\nDiscounting our contributions by requiring comparison to unspecified and non-existent extensions is unfair. Note that 2 of the 3 references suggested by the reviewer [2,3] do not involve extracting FSMs from RNNs (see technical response). \\n \\nSuch extensions are not as straightforward as the reviewer implies. This is a very different from not comparing to a clearly specified off-the-shelf approach or a small change to such an approach. \\n \\nFinally, we hope the reviewers recognize the contribution of demonstrating finite-state extraction for problems as complex as Atari. We were very surprised by the success on Atari, that the state spaces were so small, and that we could determine that some policies did not use memory and that some policies only used memory.\\n \\n---------------------------------------\", \"detailed_technical_response\": \"---------------------------------------\\n \\nWe considered and tried a variety of approaches, starting with \\\"simple\\\" extensions. We would have been happy for any of these to work and would have written about the result. However, failures to get such approaches to work led to our proposed QBN-insertion approach. 
\\n \\nLet us examine the simple extensions that the reviewer might be considering. We'll divide the discussion into three parts. \\n \\n1) *Reviewer Suggested References [2] and [3]*\\n \\n[2,3] are not about extracting FSMs from RNNs. [2] starts with an FSM and compiles it into an RNN as prior knowledge for bootstrapping. [3] trains RNNs on FSM languages and analyzes their ability to do this. They do not give a method for extracting FSMs. \\n \\n2) *Reviewer Suggested Reference [1]: post-gradient descent clustering/discretization*\\n \\n[1] falls in the class of approaches in \\u201crelated work\\u201d (para 2). It trains an RNN, clusters the internal states (discretization or k-means), then connects states based on empirical data. The resulting FSM is disconnected from the original RNN. We have only seen these approaches applied to relatively simple problems with a small number of discrete inputs. More importantly, when the approach does not give an accurate FSM, there is no easy way to fine-tune because it is disconnected from the original network. As our tables show, fine-tuning was essential to achieve good performance on more difficult problems. \\n \\nA simple extension, for problems such as Atari, is to cluster the continuous representation of the input (e.g. output features of a CNN). This was the first approach that we tried, and we were unable to get good results for all but the smallest problems, despite serious attempts. Results were better than random for larger problems, but there is no way to further improve via fine-tuning. Thus, we are skeptical that there is a simple extension.\\n \\nQBN-insertion gives a method for getting the required clusterings in a way that can be directly embedded in the RNN for fine-tuning. \\n \\n3) *(Zeng et al., 1993): training binary RNNs from scratch*\\n \\nThis work defines an RNN with discretized memory and trains from scratch. It has not been applied to large problems or to RL. 
\\n \\nA simple extension would also discretize the input (e.g. by discretizing the output features of a CNN). This was our second attempt. As mentioned in the paper, we did not get it to work for large problems. Training from scratch for Atari was unsuccessful after days. Apparently the discretized nodes make it difficult for RL training signals to effectively propagate. We experimented with some more recent techniques for learning with discrete units, without success. Thus, we are skeptical that there is a simple extension. The failures are mentioned in the paper, but we will make the results more prominent. \\n \\nThese failures led us to the QBN-insertion approach followed by fine-tuning.\"}",
"{\"title\": \"There is no essential difference between Moore Machines and Finite State Machines\", \"comment\": \"The authors mentioned that FSMs are different from Moore Machines (MMs), since Moore Machines must output an action/symbol at each time step, rather than just accepting/rejecting entire strings as is the case for FSMs. In my opinion, there is no essential difference between MMs and FSMs. The main reason is that in FSMs, the accepting/rejecting state reflects a binary classification scenario while the actions/symbols output in MMs reflect a multi-class classification scenario. The classical binary FSMs can be easily adapted to the multi-class classification version.\\n\\nBesides, the authors stress that the inputs to MMs are complex objects including images or real numbers while FSMs can only learn from a discrete alphabet. This is not an issue since in this paper the authors first use a CNN to encode the complex input into a simple form that MMs can accept, which is similar to the input that FSMs can accept.\\n\\nThe key idea of this paper is to discretize the hidden states so that similar hidden states can be grouped together to form a state representing an action. The main contribution is that the authors bring an external CNN to encode the complex input into a form that MMs can accept, use a new technique called QBN to do discretization or clustering, and apply this idea to reinforcement learning tasks. There is no essential difference between this paper and the large number of literatures on extracting FSMs from RNNs. Thus, comparisons between them are feasible with some adaptations of the classical ones.\"}",
"{\"title\": \"Response to Review\", \"comment\": \"Thanks for the comments. Below we pull quotes from the review followed by responses.\\n\\n\\\"the authors could make the related works more complete by incorporating these literatures I mentioned.\\\"\", \"re\": \"As we mentioned in the related work, there is no prior work that we are aware of that attempts to learn to transform RNNs into Moore Machines. We would be happy to get pointers to related work that we can compare with.\\n\\nWe included a discussion of work on learning FSMs in the related work, because those techniques are related to our problem. But NONE of the approaches that we are aware of can be applied to our problems without significant innovation. This is due to two reasons: 1) Our inputs are complex objects (images or real numbers) compared to FSM learning where the inputs are from a discrete alphabet, and 2) FSMs are different from Moore Machines, since Moore Machines must output an action/symbol at each time step, rather than just accepting/rejecting entire strings as is the case for FSMs. So FSM approaches are not directly applicable. \\n\\nFor the Grammar Learning benchmarks, prior FSM methods can apply (since actions are just accept/reject). However, here we achieve nearly perfect performance, so a comparison would not shed additional light.\"}",
"{\"title\": \"Response to Review\", \"comment\": \"Thanks for the comments. Below we pull quotes from the review followed by responses.\\n\\n\\\"One downside of this paper is that it promises an exciting method to analyse the inner workings of RNNs, but then postpones this analysis to later work.\\\"\", \"re\": \"This is an interesting idea and it does seem possible. The \\\"bottleneck insertion\\\" approach is quite general and can be plugged into a network at any point where quantization seems useful to introduce.\"}",
"{\"title\": \"Response to Review\", \"comment\": \"Thanks for your time and comments. Below we pull quotes from the review followed by responses.\\n\\n\\\"miss to give a position with respect to the work dedicated to extract rules \\u2026. example https://arxiv.org/abs/1702.02540\\\"\", \"re\": \"The third paragraph on page 9 gives 2 examples from Atari where the approach results in a decrease in performance after discretization. We give our best explanation for why this happens there. We agree that it will be interesting future work to design classes of artificial problems, where different complexity parameters can be modified for testing the limits of our approach.\"}",
"{\"title\": \"Interesting in terms of interpretability but unclear practical advantage wrt state of the art\", \"review\": \"Approximation of RNNs is a hot and important topic in terms of interpretability and control of nets. The related work section is good but in my opinion miss to give a position with respect to the work dedicated to extract rules from a net, which is also a way to \\\"interpret\\\" RNNs - as an example https://arxiv.org/abs/1702.02540 from ICLR'17.\", \"pros\": [\"important practical topic\", \"The paper includes a variety of ideas/tricks which seem to bring performance, such as the 3-stage procedure and the gradient backpropagation over quantization.\", \"Makes \\\"interpretable\\\" observations of some not-so-easy-to-understand nets on Atari games\", \"Reaches state-of-the-art performance on an artificial set of tasks\"], \"cons\": [\"The impact of each step is not always assessed by an experiment (especially the ones introduced in section 4.1)\", \"The method is never benchmarked against another one, neither in terms of performance of the approximation nor in terms of interpretability (though other techniques are cited in the paper). I understand that this is because it pursues the two goals at the same time, but I'd be interested in this tradeoff being investigated further.\", \"Performance on Atari games is usually reported in terms of % wrt human performance, which helps understanding where we stand. It would be good also to discuss the performance of the RNN on the game wrt other nets. As an example, in this paper on Space Invaders the performance of the RNN is slightly better than human but very far from the state of the art yielded by prioritized duelling, which is almost 10x higher in terms of score. 
While on Breakout they are very good (see https://arxiv.org/pdf/1806.06923.pdf to have a recent list of scores on Atari).\", \"I'd be interested in having an artificial task where the proposed algorithm does not succeed (and ideally some discussion on what makes the structure recoverable or not).\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes a method to learn a quantization of both observations and hidden states in an RNN. Its findings suggest that many problems can be reduced to relatively simple Moore Machines, even for complex environments such as Atari games.\\n\\nThe method works by pretraining an RNN to learn a policy (e.g. through the A3C algorithm), and then training pairs of encoder/decoder networks with a quantizing forward pass and a straight-through backpropagation. The learned quantizations can then be used to build a Moore Machine, which itself can be reduced with FSM reduction algorithms, yielding a discrete, symbolic approximation of the inner workings of RNNs, that could in principle be interpreted more easily than latent embedding spaces.\\n\\nOne downside of this paper is that it promises an exciting method to analyse the inner workings of RNNs, but then postpones this analysis to later work. Understandably, the synthetic experiments take some space and show that the proposed method works as expected when the problem is amenable to discretization; maybe some parts of this could be in the appendix?\\n\\nAnother downside is that there is little indication of the computational implications of the method. The method was evaluated on a fairly small set of hyperparameters, and there is no indication of how long the optimization and finetuning takes. Presumably, minimizing a Moore Machine has been studied for decades, but how long does minimizing the 1000s of states in Atari games take? A second or an hour?\\n\\nThe paper is fairly well written and easy to understand. The method seems well grounded, although I'm not familiar enough with the quantization literature to detect if something important is missing. I think this is a great tool that hopefully will be used to try to understand the memory mechanisms of RNNs. 
\\n\\nI think the proposed method (and the fact that it works in simple cases) warrants acceptance, but I think more experimental work would make this a great contribution. Since there is no reason for quantization to improve performance if it is done after training, more emphasis should be put on the interpretability of the discretization; yet it is lacking in the current work. Some Atari games are known to require various amounts of memory; this could be analysed. Some other Atari games are known to be hard to solve; what happens to the RNN when the agent fails to achieve an optimal policy might also show up in the subsequent discretization and be interesting to analyse.\", \"comments\": [\"In Atari, you can have access to the RAM and from it, using exactly the same mechanisms and maybe a bit of tabular MDPs, you should be able to recover the optimal MM.\", \"It is good that the authors report their failure to train MMNs from scratch; IMO this says something about the straight-through estimators' limits. Measuring how sensitive these things are to changes in their target distribution and comparing to previous uses of ST in quantization works could be interesting.\", \"in Section 8 (appendix) \\\"Grammer\\\" should be \\\"Grammar\\\"\", \"All the (PO)MDPs that you analyse arguably have finite state spaces, and you set the ALE to be deterministic. What happens in continuous stochastic environments?\", \"Do you think a similar technique could be used to recover a (possibly stochastic) MDP instead of a Moore Machine? It would be interesting to see MDP reduction methods applied to a learned MDP.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting work that needs more comparisons with the most relevant works\", \"review\": \"RNNs are difficult to explain, understand and analyze due to the continuous-valued memory vectors and observation features they use. Thus, this paper attempts to extract a finite representation from RNNs so as to better interpret or understand RNNs. They introduce a new technique called Quantized Bottleneck Insertion to extract Moore Machines (MM). The extracted MM can be analyzed to improve the understanding of memory use and general behavior of the policies. The experiments on synthetic datasets and six Atari games validate the effectiveness of the proposal.\", \"here_are_my_detailed_comments\": \"Interpreting or understanding RNNs is a very interesting and important topic since RNNs and their variants like LSTM, GRU are widely used in different domains such as reinforcement learning, sentiment analysis, stock market prediction, natural language processing, etc. The more understandable RNNs are, the more trustworthy they become. In this paper, the authors try to extract a more interpretable representation of RNNs, namely Moore Machines (MM). An MM is actually a classical finite state automaton. The authors mention that (Zeng et al., 1993) is the most similar work to theirs. In fact a series of works have been proposed to extract finite state automata, similar to (Zeng et al., 1993), such as [1], [2], [3], etc. I think the authors could make the related works more complete by incorporating these literatures I mentioned.\\n \\nBesides, I think this work is a good application of the idea of extraction of RNNs to reinforcement learning since no works have introduced this idea into this domain as far as I know. The authors use the autoencoder named QBN to quantize the space of hidden states. This is a good way of clustering or quantizing the space of hidden states since it can be tuned to make the final performance better. 
The authors also incorporate the minimization of MM to show the possibility of shrinking memory, which can also make the extracted MM more interpretable. As a result, the policy represented by the MM is intuitive and vivid.\\n \\nNevertheless, there is an obvious weak point in this paper. Specifically, the authors claim that the main contribution of this paper is to introduce an approach for transforming RNNs to finite state representations. But I do not see any comparisons between the proposed method and other related methods, such as the method proposed by (Zeng et al., 1993), to show the effectiveness or improvement of the proposed method. I suggest the authors incorporate comparisons to make the results more convincing.\\n \\n[1] C. W. Omlin and C. L. Giles, \\\"Extraction of rules from discrete-time recurrent neural networks,\\\" Neural Networks, vol. 9, no. 1, pp. 41\\u201352, 1996.\\n[2] C. W. Omlin and C. L. Giles, \\\"Constructing deterministic finite-state automata in recurrent neural networks,\\\" Journal of the ACM, vol. 43, no. 6, pp. 937-972, 1996.\\n[3] A. Cleeremans, D. Servan-Schreiber, and J. L. McClelland. \\\"Finite state automata and simple recurrent networks.\\\" Neural computation, vol. 1, no. 3, pp. 372-381, 1989.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SJ4vTjRqtQ | Dynamic Planning Networks | [
"Norman L. Tasfi",
"Miriam Capretz"
] | We introduce Dynamic Planning Networks (DPN), a novel architecture for deep reinforcement learning, that combines model-based and model-free aspects for online planning. Our architecture learns to dynamically construct plans using a learned state-transition model by selecting and traversing between simulated states and actions to maximize valuable information before acting. In contrast to model-free methods, model-based planning lets the agent efficiently test action hypotheses without performing costly trial-and-error in the environment. DPN learns to efficiently form plans by expanding a single action-conditional state transition at a time instead of exhaustively evaluating each action, reducing the required number of state-transitions during planning by up to 96%. We observe various emergent planning patterns used to solve environments, including classical search methods such as breadth-first and depth-first search. Learning To Plan shows improved data efficiency, performance, and generalization to new and unseen domains in comparison to several baselines. | [
"reinforcement learning",
"planning",
"deep learning"
] | https://openreview.net/pdf?id=SJ4vTjRqtQ | https://openreview.net/forum?id=SJ4vTjRqtQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xMm35n14",
"S1ecNr53kN",
"HJxxDN2h07",
"ByxHmN3nAm",
"B1gs36av0m",
"SygCmp6wA7",
"r1e5UKTwC7",
"Syx2OuTP0X",
"BkeuG_G0hX",
"SygUDjunhX",
"HygvTCvLn7",
"Hyx9HvDG2X",
"S1lAPB0g2m"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1544494105682,
1544492337521,
1543451736068,
1543451677509,
1543130546689,
1543130405935,
1543129425825,
1543129203622,
1541445647766,
1541339998015,
1540943550578,
1540679489564,
1540576613965
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper813/Authors"
],
[
"ICLR.cc/2019/Conference/Paper813/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper813/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper813/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper813/Authors"
],
[
"ICLR.cc/2019/Conference/Paper813/Authors"
],
[
"ICLR.cc/2019/Conference/Paper813/Authors"
],
[
"ICLR.cc/2019/Conference/Paper813/Authors"
],
[
"ICLR.cc/2019/Conference/Paper813/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper813/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper813/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper813/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper813/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Thanks\", \"comment\": \"Thank you for your response and score change. We will work on adding a longer training period to our final version to further address your concerns. It will include 3 seeds of 4e7 iterations on the Push domain.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your response. Some of my concerns are addressed. I still wonder about the performance after more training steps (4e7). From the figure, even after 4e7 training steps, most of the baseline methods have not converged. I agree the proposed method has improved sample efficiency drastically, but there is no proof that the asymptotic performance is acceptable. If you can address this in the final version, I will recommend acceptance. I adjusted my score accordingly.\"}",
"{\"title\": \"Response to authors (2/2)\", \"comment\": \"> We also feel that our work simplifies the architectural complexity for dynamic tree-based planning. We require fewer components, no memory, and allow end-to-end training.\\n\\nWell, DPN has (1) an outer agent (trained via Q learning), (2) a recurrent inner agent (trained with policy gradient), and (3) a dynamics model (trained via supervision). IBP has (1) a manager (trained with policy gradient), (2) a controller+memory (trained with policy gradient), and (3) a dynamics model (trained via supervision). Having a recurrent IA is a bit simpler than the controller+memory setup but roughly equivalent in terms of the general idea.\\n\\nThough, in sketching out these differences explicitly I am realizing that there is an important difference in terms of architectural choice that I didn't fully appreciate before. Specifically, in both DPN and IBP there are three choices that have to be made: (1) which external actions to take in the real environment, (2) which states to search from, and (3) which internal actions to take during search. DPN has separate policies for internal and external actions (with the internal policy simultaneously deciding which actions to try and from which states), while IBP has one policy for both internal and external actions but a separate policy for deciding which states to search from. It's not obvious to me which of these is a priori better---they both have their advantages and disadvantages (e.g., DPN is able to learn an internal policy which is more exploratory than IBP; but IBP more naturally handles dynamic numbers of planning steps than DPN). While this makes me feel slightly more favorable towards the novelty of the approach, it does make me wish there were a clearer comparison to tease apart these differences.\"}",
"{\"title\": \"Response to authors (1/2)\", \"comment\": \"Thank you for the detailed response. I appreciate the additional experiments and clarifications and agree they do improve the paper. However, I still do not feel like I understand all of the architectural choices or their implications. Therefore, I am unfortunately not inclined to change my score.\\n\\n> We agree that this would help improve the clarity of the paper and have included an ablation study of the targets used by the IA. We examined the results over 3 variants: original, Q (same as OA), and KL.\\n\\nThanks, it is very interesting to see that the KL loss in and of itself is potentially useful.\\n\\n> To clarify the IA does not retain hidden state and is reset between OA action steps. The current state the OA is in always acts as the root. We have added a sentence making this explicit in the paper.\\n\\nPerhaps my question wasn't quite clear---I was referring to the strategy taken by the IA within one OA step. That is, the IA currently has the option of taking an action either from {root, parent, current}. What if this set were different, as in it could only take an action from one of {root, current}? This strategy is more similar to the MCTS search strategy, which either keeps expanding the current node until a leaf is reached or restarts from the root. Similarly, what if the IA could only take an action from {current}? This strategy would be equivalent to performing a single rollout. Or what about just from {root}, which is equivalent to doing several 1-step look aheads? I think these comparisons are important to show in the paper because if the {root, parent, current} strategy is not in fact better than {current} then it suggests the agent hasn't really learned a particularly useful planning strategy (or the environment is not interesting enough). 
If the {root, parent, current} strategy is not better than {root, current} then it limits the novelty over prior work.\\n\\n> We removed this statement as we did not see a clean way to empirically validate it.\\n\\nOk.\\n\\n> Thank you for pointing out the missing work. We have added each of the suggested works to our paper.\\n\\nThanks!\\n\\n> We have included an ablation study that teases apart what the proposed target contributes. We came to an interesting conclusion: the KL component helps stabilize the Q component. Additionally, we added further explanation behind the target after its definition.\\n\\nThanks for adding the additional clarification. However, I'm unfortunately still not sure I understand the motivation behind the KL component. KL is a measure of the distance (or information gain) between two distributions, but h_t^O is not a distribution---all the environments evaluated are deterministic and fully-observable so there is no need to have a distribution over states. Because of this, it's not clear to me why the transition from one state representation to another (h_t^O --> h_{t+1}^O) should be measured via information gain? It seems to me what the KL term is doing is encouraging the agent to search states that are maximally different from the current state under the agent's representation of the world rather than being about information gain. As such, it would seem that, say, maximizing some other distance metric (e.g. L2) between h_t^O and h_{t+1}^O would work equally well. \\n\\n> The intention of darkened images is to show only the planning path in relation to the obscured images. 
If the planning path (white) and agent (red) are difficult to see we will adjust.\\n\\nIt is very hard for me to see the agent especially, and the darker parts of the planning path are also quite difficult to see.\\n\\n> We have expanded the depth of this citation.\\n\\nThanks.\\n\\n> We feel the difficulty of our environments are significantly harder and our work is a step forward in scaling up dynamic tree-based planning architectures.\\n\\nI agree the environments in the present paper are more challenging (though the gridworld environment in Pascanu et al. was I believe more for probing the generalization behavior of the planning system, not for pushing the limits of difficulty). But evaluating a similar architecture to one that already exists on a slightly harder problem is unfortunately not on its own that compelling.\\n\\n> Our work trains all components end-to-end instead of treating each part as a distinct sub graph.\\n\\nEnd-to-end training is not always better, so I'm not really convinced by this statement without empirical support to back it up. Also, given the strong architectural assumptions and the fact that the policy gradient and entropy maximization are only computed with respect to IA, the present approach is technically not end-to-end either.\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for taking the time to review our paper. We appreciate the feedback you have given. We have provided a detailed response below.\\n\\n> The dynamics model used to plan is given and fully observable. That means a pure Monte-Carlo tree search can achieve very high accuracy.\\n==================================\\nYes, we agree that the environment model can be given and is fully observable. Can you elaborate on why this is considered a con? \\n\\n> In the figure 6, AtreeC can also have good performance after 4e7 steps, even better than the proposed method. I am wondering what would happen if 4e7 steps were applied to the proposed method.\\n==================================\\nWe were tightly constrained by compute and doing several runs over many seeds at this length was unfeasible. As for the overall performance (the roughly estimated mean difference is under 10%), we believe an important axis to consider here is the reduction in state-transitions (up to 96%) that gives roughly the same performance. Additionally, if we look at the performance of our model after 20e6 steps, we see that ATreeC-1 achieves the same level of performance only after ~30e6 steps, taking 33% additional steps.\\n\\n> One argument from the paper is that their method is computationally efficient. However, this should be presented in a more realistic test environment.\\n==================================\\nYes, but this is in terms of the state-transitions required, which were reduced drastically. The environments we tested are relevant to related work (Push: ATreeC+TreeQN & ~I2A) and Gridworld (Pascanu & many others).\\n\\n\\n> In the push and gridworld environment, 84 steps of planning wouldn't be too bad. \\n==================================\\nThis is unfeasible as 84 steps of planning would be incredibly slow (the gradient updates). 
Additionally, as shown in the ablation of the planning lengths we see diminishing returns in the Push environment after 3 steps of planning.\\n\\n\\n> So a demonstration of the effectiveness from the proposed method on a visually complex game would be great.\\n==================================\\nWe agree that a visually complex environment would be helpful but feel this fits into future work relating to scaling the method up. Additionally, we chose both the Push and Gridworld environments as we could iterate quickly and have reasonable training periods.\"}",
"{\"title\": \"Thank you for your comments\", \"comment\": \"Thank you for your thoughtful feedback and suggestions for improvement in our paper. We have addressed your points and incorporated your corrections in the paper. We have provided a detailed response below.\\n\\n> Though I think the readers could understand better the intuition better if the authors can expand the explanation further. any reference on the idea? \\n===========================\\n- We have included a short discussion of the target below the equation in the paper. \\n- We have provided a section in the supplemental material with a motivating example.\\n- No, we do not have a reference; the idea evolved from examining how an agent/human might plan ahead and questioning what the core process might be.\\n\\n>Has the authors try to only set the utility as Q^ or as D_KL only as controls?\\n===========================\\nWe have included another experiment in the paper that performs an ablation of the IA target and compares between Q * D_KL, Q, and D_KL. We found an interesting result: the KL component helps reduce variance during training.\\n\\n> I wonder if this can be even further extended.\\n===========================\\nWe had considered including dynamic planning steps, similar to Graves [1], but we decided to limit the scope of the paper to the core contributions proposed. The other suggestions are quite interesting but we feel the implementation might be difficult and wanted to keep the scope focused. In particular the dynamic set size would be difficult to implement.\\n\\nWe do agree that discussing future work would be helpful and added a short discussion.\\n\\n[1] - https://arxiv.org/abs/1603.08983\\n\\n\\n> 3. \\\"Push is similar to Sokoban used by Weber et. al. (2017) with comparably difficulty\\\". ===========================\\nThe Sokoban environment provides dense rewards similar to Push (per step, box onto/off goal, level completion). 
This can be found in the Appendix D.1 of the I2A paper.\\n\\n> So stating that L2P learn Push in an order of magnitude less steps in Push compared to I2A learn Sokoban seems a chicken to egg comparison to me.\\n===========================\\nIn both the Push and Sokoban environments the agent can push boxes into a position that it cannot recover from, reducing the score and stopping it from solving the level. Some examples of irreversible moves: pushing 4 boxes together in a square or pushing a box against an edge. You are correct that in Push the obstacles are soft, allowing both the agent and boxes to move over them, and that Push does not contain single floating obstacles like Sokoban.\\n\\nTherefore, we feel the environments are still comparable in difficulty. \\n\\n> 4. Is it possible to run I2A as a baseline in the two environment you tried?\\n===========================\\nWe decided to omit I2As as we had issues replicating their model and the computational requirements were prohibitive given the hardware available while working on this paper.\\n\\n\\n> Re: poor performance of model-free baselines. Can the authors comment more on why this is the case?\\n===========================\\nWe believe that the model-free baselines have a hard time with the non-stationary environment (we generate a new map every episode) and the agent does not learn to do much besides how to avoid obstacles (perhaps due to the sparsely distributed goals).\\n\\n- We performed sanity checks after we saw the poor performance of the model-free baselines by adjusting the number of unique grids for the model-free baselines. We found the model-free baselines all performed well for 1-100 unique maps but performance quickly degraded for a higher number of unique maps. The model-free baselines might have issues handling the huge variance in goal placements, obstacle formations (single boxes, hallways, zig-zag patterns etc. are all possible). 
Furthermore, we believe that our model performs better because it has a state-transition model that lets it test movement hypotheses, and this model captures common structure across all map permutations.\\n\\n- We believe that the model-free agents only learn to navigate around the environment, avoiding obstacles, before a poor action (deliberate or eps-greedy) causes the episode to terminate. This is why we see the resulting scores for the model-free baselines to be roughly -1 plus a small number: the agent gets -0.01 per step and -1 for hitting an obstacle (or going off the map).\\n\\nTo help the model-free baselines we increased both training time and exploration budget: we doubled the number of training steps from 20 million to 40 million and increased the exploration duration by 4x (4 million to 16 million frames). Neither avenue increased their scores. \\n\\nWe added comments to our paper explaining the attempted training variations used for the model-free baselines.\\n\\n\\n>6. a few possible typos... + caption colors\\n===========================\\nGood catches, we have fixed them. Thank you!\"}",
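The reward structure described in the reply above (-0.01 per step, -1 with episode termination for hitting an obstacle or leaving the map) can be sketched as follows. The function name, grid encoding (1 marks an obstacle), and the positive goal bonus are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the Gridworld reward discussed in the reply above:
# -0.01 per step, -1 (terminal) for hitting an obstacle or going off the map.
# The grid encoding and goal bonus are assumptions for illustration.

def step_reward(grid, pos, reached_goal, goal_bonus=1.0):
    """Return (reward, done) for a single agent step."""
    rows, cols = len(grid), len(grid[0])
    r, c = pos
    off_map = not (0 <= r < rows and 0 <= c < cols)
    hit_obstacle = (not off_map) and grid[r][c] == 1
    if off_map or hit_obstacle:
        return -1.0, True   # terminal penalty
    reward = -0.01          # small per-step cost
    if reached_goal:
        reward += goal_bonus
    return reward, False
```

Under this structure, an agent that only wanders until a bad move ends the episode accumulates roughly -1 plus a small per-step residue, matching the scores the authors describe for the model-free baselines.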
"{\"title\": \"Thank you for your comments\", \"comment\": \"We would like to thank the reviewer for their helpful and insightful comments. Our responses to the specific concerns follow. We hope that the changes have improved the paper.\\n\\n> I strongly suggest including in a revision a number of ablation experiments to tease these details apart.\\n================================\\nWe agree that this would help improve the clarity of the paper and have included an ablation study of the targets used by the IA. We examined the results over 3 variants: original, Q (same as OA), and KL.\\n\\n> Does the agent achieve worse performance if it has to restart its imaginations from the root of the tree each time, as is more analogous to MCTS and other previous model-based approaches?\\n================================\\nTo clarify the IA does not retain hidden state and is reset between OA action steps. The current state the OA is in always acts as the root. We have added a sentence making this explicit in the paper.\\n\\n\\n> Additionally, there are a few places in the paper where unjustified statements are made. For example, in Section 5.3, the paper states that \\u201cwe hypothesize that focusing, by repeatedly visiting the same state, the IA ensures that the POI is remembered in its hidden state such that the OA can act accordingly, given this information\\u201d.\\n================================\\nWe removed this statement as we did not see a clean way to empirically validate it.\\n\\n\\n> The literature review is missing some related work, particularly from the realm of model-based continuous control. \\n================================\\nThank you for pointing out the missing work. We have added each of the suggested works to our paper.\\n\\n> However, I had a hard time understanding the choice of the inner objective (Equation 1). 
The paper states that this equation defines the \\u201cvalue of information\\u201d and defines it as the product of the KL from the OA\\u2019s hidden state prior to the transition to after the transition, multiplied by the Q value estimated by the OA and the action probability of the IA. This is very mysterious to me. Why is this a good objective? Why does the KL term of the hidden state of the OA have anything to do with the value of information? Given that the difference in objective of the IA is one of the main contributions of the paper, this choice needs to be justified, explained, and examined.\\n================================\\nWe have included an ablation study that teases apart what the proposed target contributes. We came to an interesting conclusion: the KL component helps stabilize the Q component. Additionally, we added further explanation behind the target after its definition.\\n\\n> The colors in the caption of Figure 5 do not match the colors in the figure.\\n================================\\nFixed.\\n\\n> The colors in Figure 7 are very dark and it is hard to make out what is actually happening in the figure.\\n================================\\nThe intention of darkened images is to show only the planning path in relation to the obscured images. If the planning path (white) and agent (red) are difficult to see we will adjust.\\n\\n> The idea of constructing an imagination tree state-by-state is not particularly novel, and was previously explored by Pascanu et al. (2017). I think this paper deserves more discussion and comparison than it is given in the present work.\\n================================\\nWe have expanded the depth of this citation.\\n\\n> The biggest differences from Pascanu et al. are that the present work uses a separate objective for the inner agent, and allows taking a step backwards and returning to the previous state (whereas Pascanu et al. only allowed imagining from the current imagined state or from the root). 
So, the overall the paper has some new ideas, but is not highly novel compared to previous work. \\n================================\\nWe agree that the general idea of dynamic tree-based planning is shared between our work and Pascanu et al. but feel that the following differences are significant:\\n- We feel our environments are significantly harder and our work is a step forward in scaling up dynamic tree-based planning architectures. For example, within the Gridworld environment our work uses 16x16 Gridworlds that are randomly generated between episodes, ensuring the agent is not able to memorize grid layouts. In contrast, Pascanu et al. use a 7x7 gridworld environment with only 4 variants and a perfect environment model.\\n- Our work trains all components end-to-end instead of treating each part as a distinct subgraph.\\n- We also feel that our work simplifies the architectural complexity for dynamic tree-based planning. We require fewer components, no memory, and allow end-to-end training.\"}",
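For readers following the ablation discussion of the IA's "value of information" target (the product of the OA's Q value, the KL between the OA's hidden state before and after the imagined transition, and the IA's action probability), a minimal sketch is below. Mapping hidden-state vectors to distributions via a softmax is our illustrative assumption; the parameterization of the KL term in the paper may differ.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def ia_utility(h_before, h_after, q_value, ia_action_prob):
    """Sketch of the target: Q * D_KL(before || after) * pi_IA(a)."""
    p = softmax(h_before)   # OA hidden state before the imagined transition
    q = softmax(h_after)    # OA hidden state after the imagined transition
    return q_value * kl_divergence(p, q) * ia_action_prob
```

One consequence of the multiplicative form: an imagined step that leaves the OA's hidden state unchanged yields zero utility regardless of Q, which fits the intuition that a planning step providing no new information should not be rewarded.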
"{\"title\": \"Summary of paper adjustments\", \"comment\": \"We would like to thank each reviewer for taking the time to review our paper, providing insightful comments, and asking thoughtful questions. We feel that our paper has been strengthened as a result and are grateful for this outcome.\", \"for_the_convenience_of_the_reviewers_and_ac_below_we_summarize_the_changes_to_our_paper\": [\"Changed the name from Learning To Plan to Dynamic Planning Networks\", \"Added additional support to the \\\"why\\\" of the IA target (Added to Sec 2.1)\", \"Fixed equation typo (Equation 5)\", \"Fixed equation typo (End of section 2.2)\", \"Explicitly stated that the IA's hidden state is reset between steps (end of section 2.3)\", \"Added the 4 citations suggested by AnonReviewer3\", \"Expanded citation for Pascanu et al. to include more detail and the differences from our technique (suggested by AnonReviewer3)\", \"Updated the caption of Figure 5 (color mismatch)\", \"Added an ablation study of the IA's target (changes to end of section 4 and 5) as suggested by AnonReviewer3 & AnonReviewer1.\", \"Added discussion about future work with dynamic planning lengths at the end of section 5.1 (suggested by AnonReviewer1)\", \"Removed POI comments in section 5.3 (suggested by AnonReviewer3)\", \"Added additional discussion of attempted training improvements to the model-free baselines in Section 5.4 (from AnonReviewer1).\"], \"after_deadline_edit\": [\"We will be adding additional ablation experiments to pull apart the performance gains and motivate the usage of the KL distance (or others), as per AnonReviewer3\\u2019s suggestions.\", \"The experiments on the Push environment will be extended to 4e7 steps over 3 seeds instead of 2e7, as per AnonReviewer2\\u2019s suggestions.\"]}",
"{\"metareview\": \"\", \"pros\": [\"Good quantitative results showing clear improvement over other model-based methods in sample efficiency and computational cost (though see Reviewer 2's concerns about the need for more experiments on computational cost).\", \"Cool qualitative results showing discovery of BFS and DFS\", \"Potentially novel approach (see cons)\"], \"cons\": [\"Lack of clarity, especially concerning equation (1). Both Reviewers 1 and 3 were unsure of the rationale for this equation, which lies at the heart of the method. It looks to me like a combination of surprise and value but the motivation is not clear. There are a number of other such places pointed out by the reviewers where model choices were made that seem ad hoc or not well motivated.\", \"In general it's hard to understand which factors are important in driving the results you report. As Reviewer 3 points out, more ablation studies and analysis would help here. Providing more motivation, explanation and analysis would help the reader understand better the reasons for the performance of the model.\", \"The results are nice and the method is intriguing. I think this is potentially a very nice paper if you can address the above concerns, but it isn't quite up to the acceptance bar for ICLR this year.\"], \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Potentially very nice paper with clarity issues\"}",
"{\"title\": \"Nice results, but weakened by a mysterious inner objective and lack of novelty\", \"review\": \"This paper proposes a new architecture for model-based deep RL, in which an \\u201cinner agent\\u201d (IA) takes several planning steps to inform an \\u201couter agent\\u201d (OA) which actually acts in the world. The main contributions are to propose a new objective for the IA, and to allow the IA to \\u201cundo\\u201d its imagined actions. Overall I think this could be a great paper, but it needs further justification of some of the architectural choices and more rigorous analysis/experiments before it will be ready for acceptance.\", \"pros\": [\"Nice demonstration of improved data efficiency over existing model-based methods.\", \"Substantial improvement over other model-based methods in terms of computational cost.\", \"Interesting qualitative analysis showing discovery of DFS and BFS-like search procedures.\"], \"cons\": \"- Limited novelty over existing methods.\\n- It is unclear what in the model contributes to improved performance.\\n\\nQuality\\n---------\\n\\nThe results in the paper seem impressive in terms of sample complexity, but I think there needs to be further exploration of the source of the results. I strongly suggest including in a revision a number of ablation experiments to tease these details apart---for example, what do the results look like if the IA uses the same objective as the OA? Does the agent achieve worse performance if it has to restart its imaginations from the root of the tree each time, as is more analogous to MCTS and other previous model-based approaches?\\n\\nAdditionally, there are a few places in the paper where unjustified statements are made. For example, in Section 5.3, the paper states that \\u201cwe hypothesize that focusing, by repeatedly visiting the same state, the IA ensures that the POI is remembered in its hidden state such that the OA can act accordingly, given this information\\u201d. 
This seems very speculative. It would certainly be very interesting if true, but there needs to be something more than just intuition to back up this hypothesis. I recommend including some probe experiments (e.g., force the IA to take a sequence of such actions, or not, and see what the result on the behavior of the OA is) or removing speculations such as this (or moving it to the appendix).\\n\\nThe literature review is missing some related work, particularly from the realm of model-based continuous control. [1-3] are a few references to start with; these papers take a different approach in that they don\\u2019t use tree search but they are still worthwhile discussing. I think a reference to [4] is also missing, which takes a related approach to learning the decisions needed to perform MCTS.\\n\\nClarity\\n--------\\n\\nOverall, the paper is well-written and I understand what was done and how the architecture works. However, I had a hard time understanding the choice of the inner objective (Equation 1). The paper states that this equation defines the \\u201cvalue of information\\u201d and defines it as the product of the KL from the OA\\u2019s hidden state prior to the transition to after the transition, multiplied by the Q value estimated by the OA and the action probability of the IA. This is very mysterious to me. Why is this a good objective? Why does the KL term of the hidden state of the OA have anything to do with the value of information? Given that the difference in objective of the IA is one of the main contributions of the paper, this choice needs to be justified, explained, and examined. As mentioned above, it would be best if a revision could include some ablation experiments where the choice of this objective is more closely examined.\\n\\nMore broadly, as mentioned above, it is unclear to me what part of the framework results in improved performance. 
Is it the ability to \\u201cundo\\u201d actions (rather than starting over from the root or exhaustively performing BFS), or is it the KL-based reward given to the IA? The paper does not provide any insight into this question, making it unclear what are the key points I should take away.\", \"minor\": \"- The colors in the caption of Figure 5 do not match the colors in the figure.\\n- The colors in Figure 7 are very dark and it is hard to make out what is actually happening in the figure.\\n\\nOriginality\\n-------------\\n\\nThe objective of the inner agent (Equation 1) appears novel, though as discussed above it is unclear to me what exactly it means and what its implications are. The idea of constructing an imagination tree state-by-state is not particularly novel, and was previously explored by Pascanu et al. (2017). I think this paper deserves more discussion and comparison than it is given in the present work (in particular, compare Figure 3 of the present paper and Figure 2 of Pascanu et al.). In general, the main idea in both papers is the same: have an agent learn to take internal planning steps and construct a planning tree that then informs the final action in the world. The biggest differences from Pascanu et al. are that the present work uses a separate objective for the inner agent, and allows taking a step backwards and returning to the previous state (whereas Pascanu et al. only allowed imagining from the current imagined state or from the root). So, the overall the paper has some new ideas, but is not highly novel compared to previous work. 
I see the two biggest original contributions as being: (1) the separate objective in the inner agent and (2) the ability for the agent to restart its imagination from the previous imagined state.\\n\\nSignificance\\n----------------\\n\\nThe results reported by the paper are significant in that they do show dramatic improvement in sample complexity over existing model-free methods, as well as improvement in computational cost over existing model-based methods. However, as discussed above, it is hard for me to know what conclusions I should draw from the paper in terms of what aspects of the approach drive this performance. Thus, I think the lack of clarity in this respect limits the significance of the paper.\\n\\n[1] Finn, C., & Levine, S. (2017). Deep visual foresight for planning robot motion. In Proceedings of the International Conference on Robotics and Automation (ICRA 2017).\\n[2] Srinivas, A., Jabri, A., Abbeel, P., Levine, S., & Finn, C. (2018). Universal planning networks. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018).\\n[3] Henaff, M., Whitney, W., & LeCun, Y. (2018). Model-based planning with discrete and continuous actions. arXiv preprint arXiv:1705.07177\\n[4] Guez, A., Weber, T., Antonoglou, I., Simonyan, K., Vinyals, O., Wierstra, D., \\u2026 Silver, D. (2018). Learning to search with MCTSnets. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An interesting proposal of flexible model-based planning\", \"review\": \"I think the ideas proposed in this paper are interesting. The paper is quite clearly written and the authors have provided a thorough review of related works and stated how the current work is different. I think this work has some significance for model-based reinforcement learning, as it provides us with a new adaptive way to roll out the simulation. I see the work as a nice extension/improvement of the I2A (Weber et al. 2017) and the ATreeC/TreeQN work (Farquhar et al. 2017). As the authors pointed out, the L2P agent can adaptively roll out different trajectories by choosing to move back to the root (start state) or move one step backward in the tree (regret last planned action). This is different from ATreeC/TreeQN where the whole tree is expanded in a BFS way, and from I2A where rollouts are linear for each possible action at the current state.\\nI have a bit of doubt about the experimental results though. The levels used to evaluate seem quite simple, and I wonder whether the baseline model-free agents are not properly tuned or are not trained long enough to be fair. I have a list of questions detailed below:\\n\\n1. The IA is trained with a utility that is a measure of \\\"value of information\\\" provided to the OA. I think this is a cool idea. Though I think the readers could understand better the intuition better if the authors can expand the explanation further. any reference on the idea? Why does it have the form Q^ * D_KL; for example, why not Q^ + D_KL? Has the authors try to only set the utility as Q^ or as D_KL only as controls?\\n\\n2. One key part of this model is that during the IA's unroll, the agent will choose z* from {z^p, z^c, z^r} (previous, current, root states), and then choose an action to unroll from z*. I wonder if this can be even further extended. For example, one possibility is that the agent can have z* set to any z in the tree that has already been expanded. 
Or, another possibility is that the agent can have z* set to any z along the path from the current node to the root (i.e., regret k steps). Also, would it be possible to have dynamic planning steps? These suggestions may be practically hard to make work properly, but may be worth discussing.\\n\\n3. \\\"Push is similar to Sokoban used by Weber et. al. (2017) with comparably difficulty\\\". I cannot quite be convinced by this statement. Any quantification or evidence to support this sentence? To me, Sokoban seems to be much harder, as the agent needs to solve the whole level to get a score and can get stuck if making a single bad decision, while Push seems much more forgiving (a lot of boxes, the obstacles are softly defined). So stating that L2P learn Push in an order of magnitude less steps in Push compared to I2A learn Sokoban seems a chicken to egg comparison to me.\\n\\n4. Is it possible to run I2A as a baseline in the two environment you tried?\\n\\n5. I don't quite understand why DQN-{Deep, Wide} perform badly in the Gridworld environment. Checking the learning curves, one can see they actually converged to a lower score than when the models started (from close to -1 down to -1.3). Can the authors comment more on why this is the case? The authors mentioned 'the agents learn only to navigate around the map for 25-50 steps before an episode ends'. I could not digest this sentence and would hope to understand it better. To me, this gridworld level is quite trivial: the agent decides which goal is closest, moves to that one, and then on to the next goal sequentially. I would like to understand better why this is a good level to test model-based RL and why model-free RL should have a hard time.\\n\\n6. 
a few possible typos:\\n(1) formula 5, 3rd equation, should it be:\\n z*_{tau+1} = z_tau + z'' (double prime instead of single prime)?\\n\\n(2) The last sentence of the paragraph after equation (5)\\n z_{tau+1}^r = z_{tau=0} (tau+1 instead of tau) \\n\\n(3) the color indication in Figure 5 caption is wrong. (while the description is fine in the main text)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
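The z* selection the reviewer asks about in point 2 (choosing among the previous, current, and root imagined states before each unroll step) can be sketched as below. The function names and the environment-model interface are assumptions for illustration only, not the paper's actual implementation.

```python
# Illustrative sketch of an imagination unroll in which the agent may
# backtrack: before each step it picks a base state z* from
# {previous, current, root}, then imagines one transition from it.

def imagine(root, choose_base, choose_action, model, n_steps):
    """Unroll n_steps imagined transitions, allowing backtracking
    to the previous state or a restart from the root."""
    current, previous = root, root
    trajectory = [root]
    for _ in range(n_steps):
        base = choose_base(previous=previous, current=current, root=root)
        action = choose_action(base)
        nxt = model(base, action)   # one imagined transition
        previous, current = base, nxt
        trajectory.append(nxt)
    return trajectory
```

With a base selector that always returns `current`, this degenerates to a linear rollout (as in I2A); always returning `root` gives breadth-first-style probing from the current real state, which is roughly the distinction the reviewer draws between the compared architectures.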
"{\"title\": \"Interesting paper\", \"review\": \"Quality: This paper proposes a learning-to-plan approach that can learn to search with an inner agent; conditioned on the output of the inner agent (IA), the outer agent learns a reactive policy in the environment. The inner agent, different from other search agents, learns to decide which search pattern to choose. The presented method shows better computation efficiency than competitive baselines. The main applications are a \\\"box pushing\\\" game and grid-world navigation.\", \"clarity\": \"The paper is well-written.\", \"originality\": \"The paper is original.\", \"significance\": \"This paper shows a promising method to combine traditional search methods with machine learning techniques and thereby boost the sample efficiency of RL methods.\", \"cons\": \"1. The dynamics model used to plan is given and fully observable. That means a pure Monte-Carlo tree search can achieve very high accuracy. In Figure 6, ATreeC also achieves good performance after 4e7 steps, even better than the proposed method. I am wondering what would happen if 4e7 steps were applied to the proposed method.\\n2. One argument from the paper is that their method is computationally efficient. However, this should be presented in a more realistic test environment. In the push and gridworld environment, 84 steps of planning wouldn't be too bad. So a demonstration of the effectiveness of the proposed method on a visually complex game would be great.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Paper title update\", \"comment\": \"The paper title has been changed to \\u201cDynamic Planning Networks\\u201d as the original name was overly general.\"}"
]
} |
|
HkfwpiA9KX | Automata Guided Skill Composition | [
"Xiao Li",
"Yao Ma",
"Calin Belta"
] | Skills learned through (deep) reinforcement learning often generalizes poorly
across tasks and re-training is necessary when presented with a new task. We
present a framework that combines techniques in formal methods with reinforcement
learning (RL) that allows for the convenient specification of complex temporal
dependent tasks with logical expressions and construction of new skills from existing
ones with no additional exploration. We provide theoretical results for our
composition technique and evaluate on a simple grid world simulation as well as
a robotic manipulation task. | [
"Skill composition",
"temporal logic",
"finite state automata"
] | https://openreview.net/pdf?id=HkfwpiA9KX | https://openreview.net/forum?id=HkfwpiA9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJgNUg4wl4",
"BkeSGjpelE",
"BJx8nZ85JN",
"Sylz7XQ5CQ",
"H1gEQFfc0m",
"BJxpVMfqCQ",
"SylO40-9RQ",
"BJxCCT-90X",
"BylKx8NIpX",
"SygGIu4n2X",
"ryeWe3Dc3Q"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545187403603,
1544768269328,
1544343982026,
1543283482270,
1543280924477,
1543279157169,
1543278127880,
1543278038232,
1541977584644,
1541322826464,
1541204968931
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper812/Authors"
],
[
"ICLR.cc/2019/Conference/Paper812/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper812/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper812/Authors"
],
[
"ICLR.cc/2019/Conference/Paper812/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper812/Authors"
],
[
"ICLR.cc/2019/Conference/Paper812/Authors"
],
[
"ICLR.cc/2019/Conference/Paper812/Authors"
],
[
"ICLR.cc/2019/Conference/Paper812/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper812/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper812/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thanks for the followup\", \"comment\": \"Thank you for the additional comments and adjustment to the score. We acknowledge that comparison with a good number of state-of-the-art methods would better situate our work in the field. Our work presented here is a combination of both reward engineering (using TL) and skill composition, along with the hierarchical policy structure that arises naturally with the framework. It is difficult to find other work with a similar combination. Therefore, an elaborate and fair comparison with other methods would be a contribution in itself, which we will consider in the future. As for the high variance on the right side of Figure 5, it mostly depends on how the task is initialized at each episode (initialization is random). Some initializations make it easier for the task to be accomplished than others. As long as the episode length is consistently below the max value, the agent is always able to complete the task. As the reviewer mentioned, a more thorough analysis will be helpful, which we will try to incorporate in future work.\"}",
"{\"metareview\": \"The authors present an interesting approach for combining finite state automata to compose new policies using temporal logic. The reviewers found this contribution interesting but had several questions that suggests that the current paper presentation could be significantly clarified and situated with respect to other literature. Given the strong pool of papers, this paper was borderline and the authors are encouraged to revise their paper to address the reviewers\\u2019 feedback.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interesting combination of temporal logic for constructing new RL policies, presentation should be clearer\"}",
"{\"title\": \"Appreciate Improvements\", \"comment\": \"I'd like to say that I appreciate the improvements to the paper and have updated my previous rating accordingly. I'm still not totally convinced that the other methods I mentioned aren't relevant, and I also now have some mild concerns about the high variance in the std reported, making it difficult to assess whether the reported performance gains are real or not (right side of Figure 5). I would also have liked to see the additional analysis mentioned in this post.\"}",
"{\"title\": \"Response for reviewer 2\", \"comment\": \"Thank you for your comments. The dimensional explosion of automaton states when composing many policies and its effect on composition is an interesting and practical problem worth looking into. Thank you also for catching the typos, we have incorporated the modifications in the updated paper.\"}",
"{\"title\": \"Nice paper that combines RL and constraints expressed by logical formulas\", \"review\": \"The contribution of the paper is to set up an automaton from scTLTL formulas; then the corresponding MDP that satisfies the formulas is obtained by augmenting the state space with the automaton state and zeroing out transitions that do not satisfy the formula. This approach seems really useful for establishing safety properties or ensuring that constraints are satisfied, and it is a really nice algorithmic framework. The RL algorithm for solving the problem is entropy-regularized MDPs. The approach \\u201cstitches\\u201d policies using AND and OR operators, obtaining the overall optimal policy over the aggregate. Proofs just follow definitions, so they are straightforward, but I think this is a quality. The approach is quite appealing because it provides composition automatically. The paper is very well written. The main problem I see with the work is that composition can explode the number of states in the new automaton and hence the new MDP. It would be interesting in future work to do \\u201csoft\\u201d ruling out of transitions rather than the \\\"hard\\\" approach used in the paper. The manipulation task provided is quite appealing, as the robot arm is of high dimensionality but the FSAs obtained are discrete. Overall, the paper provides a very good contribution.\", \"small_comments\": \"Equation equation in Def 3 also proof of Theorem 2\\nIn section, -> In this section\\nare it has -> and it has\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
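The construction the review describes (augmenting the MDP state with the automaton state and ruling out transitions that violate the formula) can be sketched roughly as below. The interfaces for the environment step, automaton transition function, and labeling function are illustrative assumptions, not the paper's actual code.

```python
# Rough sketch of one transition in an FSA-augmented MDP: states are
# pairs (s, q) of environment state and automaton state, and transitions
# the automaton rejects are ruled out (returned as None here).

def product_step(env_step, fsa_delta, label, state, action):
    """One transition of the product MDP; None means the FSA rejects."""
    s, q = state
    s_next = env_step(s, action)
    q_next = fsa_delta(q, label(s_next))  # advance the automaton
    if q_next is None:                    # transition violates the formula
        return None
    return (s_next, q_next)
```

The state-explosion concern raised in the review shows up here directly: composing formulas with AND/OR means taking a product of automata, so the set of possible q components (and hence the augmented state space) can grow multiplicatively with each composed skill.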
"{\"title\": \"response for reviewer 4\", \"comment\": \"Thank you for your comments and providing additional references. We try to address your questions as follows:\\n\\n1. \\u201csituating this work more clearly against existing similar works which use logic in this way, ..., and comparing/contrasting against (or at least discussing differences with) methods with similar motivations (e.g., HRL multi-task learning, meta-learning) to emphasize the need/importance of this work\\u201d\\n\\nTo the best of our knowledge, the presented work is the first to use techniques in formal methods to simultaneously address optimal -AND- and -OR- task compositions and demonstrate the process in tasks with continuous state and action spaces. We make the distinction between skill composition and multi-task learning/meta-learning (such as MAML) where the latter often requires a predefined set of tasks/task distributions to learn and generalize from, whereas the focus of the former is to construct new policies from a library of already learned policies that achieve new tasks (often some combination of the constituent tasks) with little to no additional constraints on task distribution at learning time. Our focus here is on task composition and therefore did not compare with multi-task / meta learning methods. HRL is also not the focus here, it so happens that incorporating FSA into the MDP gives the resulting policy a hierarchical representation. Therefore, we chose mainly to contrast against other skill composition methods in Section 2. We have made this more clear in the updated paper.\\n\\t\\n 2. \\\"Citations that should likely be made ...\\\"\\n\\nThe set of provided references aim to solve the non-Markovian reward decision process (NMRD) using temporal logic and automaton. 
The idea is similar to that of the FSA augmented MDP that we adopted, with some differences (such as the requirement to manually define a set of rewards in addition to the logic specification, separation of state features and temporal goals, etc.). However, the comparison is mainly between the above references and the FSA augmented MDP (Li et al., 2018), which is not the contribution of our work. \\n\\n3. \\u201cDoes Figure 5 show the averaged return over 5 runs, sum of discounted rewards averaged over 5 episodes per update step ...\\u201d\\n\\nThe original Figure 5 shows the undiscounted episodic return (sum of undiscounted rewards over one episode) averaged over 5 evaluation episodes (without updating the policy in between). We have updated this result to be discounted return with standard deviations.\\n\\n4. \\u201cWhat were the standard deviations for this across experiments? Even with averaging it seems that these runs are very high variance, would be good to understand what variance bounds to expect if using this method.\\u201d\\n\\nWe have included the standard deviation in the learning curve. To our current understanding, the variance comes from two sources. The first is randomization of the environment configurations - some configurations make the task considerably easier to accomplish than others. The second is randomization over the automaton states at initialization. Some q states are easier to learn than others (for example $q_2$ compared to $q_1$ in Figure 4b). At each initialization, if a difficult q state is on the agent\\u2019s path of reaching $q_f$, the agent may get stuck in that state, receiving a low episodic return, whereas in other episodes the agent may not have to deal with this state at all. \\n\\n5. 
\\u201cWhy were average discounted returns reported in Figure 5 and not in Table 1?\\u201d\\n\\nOriginally, Table 1 aims to report the performance of the learned policies in terms of task success rate whereas Figure 5 reports learning progress in terms of returns. We have updated Table 1 to include the average discounted returns.\\n\\n6. \\u201cWhat were the standard deviations on success rate and training time? Also what about sample complexity?\\u201d\\n\\nWe have added the standard deviations to Table 1. We don't currently have a quantitative analysis on sample complexity other than the learning curves. Hopefully we will perform such analysis in the future.\\n\\n7. \\u201cIt is also unclear whether the presented results in Table 1 and Figure 5 are on the real robot or in simulation. The main text says, \\u201cAll of our training is performed in simulation and the policy is able to transfer to the real robot without further fine-tuning.\\u201d So does this mean that Figure 5 is simulated results and Table 1 is on the real robot?\\u201d\\n\\nThis is correct, training is in simulation and evaluation is on the real robot. We have modified the text to make this clear in the paper.\\n\\nThank you also for catching the typos and suggesting grammar edits, those have been incorporated in the updated paper. We have also updated the experiment and results section.\"}",
"{\"title\": \"response for reviewer 3\", \"comment\": \"Thank you for your comments. The following are our attempts to address your concerns:\\n\\n1. \\u201cWill this method work on composing scTLTL formula with temporal operators other than disjunction and conjunction?\\u201d\\n\\nNot directly. However, if we have learned a policy for \\u201ceventually A\\u201d and a policy for \\u201ceventually B\\u201d where \\u201cA\\u201d and \\u201cB\\u201d are predicates, then it is possible to compose policies that satisfy any given scTLTL formula consisting of and only of \\u201cA\\u201d, \\u201cB\\u201d, \\u201cnot A\\u201d and \\u201cnot B\\u201d. This is an extension that we are working on.\\n\\n\\n2. \\u201cCan this approach deal with continuous state space and actions? This paper describes a discretization way, which, however, can introduce inaccuracies.\\u201d \\n\\nOur method is able to learn with continuous state and action spaces as is shown in the robotic experiment. The only discrete state here is the automaton state, which corresponds to decompositions of the high level task.\\n\\n3. \\u201cThe design of the skills is by hand, which restricts badly its usability.\\u201d\\n\\nThe only hand-designed component is the scTLTL formula that specifies the task. This corresponds to the reward function that needs to be provided for most reinforcement learning algorithms. \\n\\n4. \\u201cThe experiments results show that the composition method does better than soft Q-learning on composing learned policies, but how it performed compared to earlier hierarchical reinforcement learning algorithms? \\u201c\\n\\nThe FSA augmented MDP provides a natural hierarchy regardless of the RL algorithm used. Even using plain SQL results in a hierarchical policy. The reason we did not compare our method with other RL algorithms on a regular MDP is that it is difficult to specify a complex task using a non-temporal logic reward function. 
In our experience, if enough effort is put into reward design, we will end up with something very similar to the robustness of the original scTLTL formula; anything less will result in a faulty reward that makes the comparison less meaningful. Again, the focus of this work is more on the effective composition of learned skills and less on actually learning a skill.\\n\\nWe have incorporated a summary of our algorithm in Section 5 and also updated the experiment and results section with more information.\"}",
"{\"title\": \"response for reviewer 1\", \"comment\": \"Thank you for your comments, We try to address your questions as follows:\\n\\n1. \\u201cThe experiments demonstrate that this method can outperform SQL at skill composition. However, it is unclear how much prior knowledge is used to define the automaton. If prior knowledge is used to construct the FSA, then a missing comparison would be to first find the optimal path through the FSA and then optimize a controller to accomplish it. As the paper is not very clear, that might be the method in the paper. \\u201d\\n\\t\\nIn this work, all prior knowledge is encoded in the scTLTL formula which effectively acts\\nas a \\u201creward function\\u201d. As mentioned in the end of Section 3.2, the automata is automatically generated from the scTLTL formula without taking additional information. Implicitly, learning with the FSA augmented MDP simultaneously finds a path in the FSA and the corresponding controller that leads the system towards the satisfying q-state. SQL and skill composition have access to the same amount of prior information.\\n\\n2. \\u201cHow do you obtain the number of automaton states? \\u201d\\n\\nThe automaton states are also automatically generated with off-the-shelf libraries. The translation from temporal logic formula to automaton is a topic in its own.\\n\\n3. \\u201cIn Figure 1, are the state transitions learned or handcoded? Are they part of the policy's action space?\\u201d\\n\\nState transitions are automatically generated with the FSA, they are not part of the action space. The states of the FSA (q states) are part of the state space and the transitions are augmented with the MDP\\u2019s transitions (definition 3).\\n\\n4. \\u201cIn section 3.2, you state s_{t:t+k} |= f(s)<c \\u21d4 f(s_t)<c What does s without a timestep subscript refer to? Why does this statement hold?\\u201d\\n\\nThis statement is a definition. 
It says that trajectory s_{t:t+k} satisfies predicate f(s) < c if and only if the first state of the trajectory (s_t) satisfies the predicate. For example, if the predicate is f(s) = 2*s+1, c = 5, then a trajectory {s_0=0, s_1=7, s_2=8} satisfies the predicate f(s) < c because f(s_0) < c, while trajectory {s_0=7, s_1=0, s_2=-1} does not. \\n\\n5. \\u201cCan you specify more clearly what you assume known in the experiments? What is learned in the automata? In Figure 5, does SQL have access to the same information as Automata Guided Composition?\\u201d \\n\\t\\nLearning follows the same procedure as regular reinforcement learning. We design the scTLTL formula as task specification and we know the state and action spaces. The automaton is embedded into the MDP using Definition 3. In Figure 5, SQL is used to learn an FSA augmented MDP and therefore has access to the same information.\\n\\nWe have added a summary of our algorithm in Section 5 and updated the experiment and results section with more information and clarity.\"}",
"{\"title\": \"Review\", \"review\": [\"This work proposed using temporal logic formulas to augment RL learning via the composition of previously learned skills. This work was very difficult to follow, so it is somewhat unclear what were the main contributions (since much of this seems to be covered by other works as referenced within the paper and as related to similar unreferenced works below). Moreover, regarding the experiments, many things were unclear (some of the issues are outlined below). While the overall idea of using logic in this way to help with skill composition is interesting and exciting, I believe several things must be addressed with this work. This includes: situating this work more clearly against existing similar works which use logic in this way, clearly defining the novel contributions of this work as compared to those and others, overall making the methodology more clear and specific (including experimental methodology), and comparing/contrasting against (or at least discussing differences with) methods with similar motivations (e.g., HRL multi-task learning, meta-learning) to emphasize the need/importance of this work \\u2014 I am aware that at least 1 HRL work is mentioned, but this work is not really contrasted against it to help situate it.\", \"Questions/Concerns about Experiments:\", \"Does Figure 5 show the averaged return over 5 runs, sum of discounted rewards averaged over 5 episodes per update step, or 5 episodes, each from a separate run averaged together? It is a bit unclear especially because the main text and the figure caption slightly differ. Also, average discounted return is somewhat different than average return, suggest updating the label to be clear also with the discount factor used.\", \"What were the standard deviations for this across experiments? 
Even with averaging it seems that these runs are very high variance, would be good to understand what variance bounds to expect if using this method.\", \"Why were average discounted returns reported in Figure 5 and not in Table 1?\", \"What were the standard deviations on success rate and training time? Also what about sample complexity?\", \"To my understanding the benefit here is reusability of learned skills via the automata methods described here. It would have made sense to compare against other HRL or multi-task learning methods in addition to just SQL or learning from scratch. For example how would MAML compare to this?\", \"It is also unclear whether the presented results in Table 1 and Figure 5 are on the real robot or in simulation. The main text says, \\u201cAll of our training is performed in simulation and the policy is able to transfer to the real robot without further fine-tuning.\\u201d So does this mean that Figure 5 is simulated results and Table 1 is on the real robot?\"], \"citations_that_should_likely_be_made\": \"+ Giuseppe, Luca Iocchi, Marco Favorito, and Fabio Patrizi. \\\"Reinforcement Learning for LTLf/LDLf Goals.\\\"\\u00a0arXiv preprint arXiv:1807.06333\\u00a0(2018).\\u00a0\\n+ Camacho, Alberto, Oscar Chen, Scott Sanner, and Sheila A. McIlraith. \\\"Decision-making with non-markovian rewards: From LTL to automata-based reward shaping.\\\"\\u00a0 In\\u00a0Proceedings of the Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM), pp. 279-283. 2017. \\n+ Camacho, Alberto, Oscar Chen, Scott Sanner, and Sheila A. McIlraith. \\\"Non-Markovian Rewards Expressed in LTL: Guiding Search Via Reward Shaping.\\\" In Proceedings of the Tenth International Symposium on Combinatorial Search (SoCS), pp. 159-160. 
2017.\\u00a0\\n\\n\\nTypos/Suggested grammar edits:\\n\\n\\u201cSkills learned through (deep) reinforcement learning often generalizes poorly across tasks and re-training is necessary when presented with a new task.\\u201d \\u2014> Often generalize poorly\\n\\n\\u201cWe present a framework that combines techniques in formal methods with reinforcement learning (RL) that allows for convenient specification of complex temporal dependent tasks with logical expressions and construction of new skills from existing ones with no additional exploration.\\u201d \\u2014> Sentence kind of difficult to parse and is a run-on\\n\\n\\u201cPolicies learned using reinforcement learning aim to maximize the given reward function and is often difficult to transfer to other problem domains.\\u201d \\u2014> ..and are often..\\n\\n\\u201cby authors of (Todorov, 2009) and (Da Silva et al., 2009)\\u201d \\u2014> by Todorov (2009) and Da Silva et al. (2009) Also several other places where you can use \\\\citet instead of \\\\cite\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting topic but little technique contribution\", \"review\": \"This paper mainly focuses on combining RL tasks with linear temporal logic formulas and proposed a method that helps to construct policy from learned subtasks. This method provides a structured solution for reusing learned skills (with scTLTL formulas), and can also help when new skills need to be involved in original tasks. The topic of the composition of skills is interesting. However, the joining of LTL and RL has been developed previously. The main contribution of this work is limited to the application of the previous techniques.\\n\\nThe proposed approach also has some limitations. \\nWill this method work on composing scTLTL formula with temporal operators other than disjunction and conjunction?\\nCan this approach deal with continuous state space and actions? This paper describes a discretization way, which, however, can introduce inaccuracies. \\nThe design of the skills is by hand, which restricts badly its usability.\\nThe experiments results show that the composition method does better than soft Q-learning on composing learned policies, but how it performed compared to earlier hierarchical reinforcement learning algorithms?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"More explanations are needed\", \"review\": \"This paper presents a way use using FSA-augmented MDPs to perform AND and OR of learned policies. This idea is motivated by the desirability of compositional policies. I find the idea compelling, but I am not sure the proposed method is a useful solution. Overall, the description of the method is difficult to follow. With more explanations (perhaps an algorithm box?), I would consider increasing my score.\\n\\nThe experiments demonstrate that this method can outperform SQL at skill composition. However, it is unclear how much prior knowledge is used to define the automaton. If prior knowledge is used to construct the FSA, then a missing comparison would be to first find the optimal path through the FSA and then optimize a controller to accomplish it. As the paper is not very clear, that might be the method in the paper.\", \"questions\": [\"How do you obtain the number of automaton states?\", \"In Figure 1, are the state transitions learned or handcoded? Are they part of the policy's action space?\", \"In section 3.2, you state s_{t:t+k} |= f(s)<c \\u21d4 f(s_t)<c What does s without a timestep subscript refer to? Why does this statement hold?\", \"Can you specify more clearly what you assume known in the experiments? What is learned in the automata? In Figure 5, does SQL have access to the same information as Automata Guided Composition?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
SkMwpiR9Y7 | Measuring and regularizing networks in function space | [
"Ari Benjamin",
"David Rolnick",
"Konrad Kording"
] | To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs. Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested. Here, we show that it is simple and computationally feasible to calculate distances between functions in a $L^2$ Hilbert space. We examine how typical networks behave in this space, and compare how parameter $\ell^2$ distances compare to function $L^2$ distances between various points of an optimization trajectory. We find that the two distances are nontrivially related. In particular, the $L^2/\ell^2$ ratio decreases throughout optimization, reaching a steady value around when test error plateaus. We then investigate how the $L^2$ distance could be applied directly to optimization. We first propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks. Secondly, we propose a new learning rule that constrains the distance a network can travel through $L^2$-space in any one update. This allows new examples to be learned in a way that minimally interferes with what has previously been learned. These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature. | [
"function space",
"Hilbert space",
"empirical characterization",
"multitask learning",
"catastrophic forgetting",
"optimization",
"natural gradient"
] | https://openreview.net/pdf?id=SkMwpiR9Y7 | https://openreview.net/forum?id=SkMwpiR9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"gyz-Kp6wv",
"Skgd-0eIlN",
"rkewpFkoyE",
"HyeDAbOQCX",
"ByeOl39lAQ",
"HkeQ3Ocl0Q",
"S1eC1wclCm",
"Sylzu7J6nX",
"HJlmxzl52X",
"SJxn7ibY3X"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1579019014323,
1545108992119,
1544382910889,
1542844879476,
1542659055843,
1542658219327,
1542657766032,
1541366634151,
1541173738855,
1541114660379
],
"note_signatures": [
[
"~Frederik_Benzing1"
],
[
"ICLR.cc/2019/Conference/Paper811/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper811/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper811/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper811/Authors"
],
[
"ICLR.cc/2019/Conference/Paper811/Authors"
],
[
"ICLR.cc/2019/Conference/Paper811/Authors"
],
[
"ICLR.cc/2019/Conference/Paper811/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper811/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper811/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for providing this very interesting perspective on regularisation in both the context of continual learning and optimisation, I very much enjoyed the read and hope to come back to it in the future.\\n\\nYou point out that you are not aware of direct precedence in the [continual learning] literature for your algorithm - but I think there might be some. \\nIf I understand your approach correctly (the approach given the last displayed equation on p.5), it results in a continual learning algorithm which is very similar to `Learning without Forgetting' (https://arxiv.org/abs/1606.09282, 2016). Again, if I don't misunderstand something, the only difference between the approaches seems to be which data is used to regularise the net (data from new task for your approach and data from old task for LwF).\\nLwF in turn could be interpreted as a form of knowledge distillation (https://arxiv.org/abs/1503.02531). \\n\\n\\n\\nOn a slightly different note, your baseline experiment 'Adam+retrain' seems to be very relevant from a continual learning viewpoint. In [https://arxiv.org/pdf/1902.10486.pdf] it is suggested that this baseline can be very strong if set up carefully. Their experiments indicate that, when storing 15 examples per class (which roughly corresponds to the size of your cache), this approach outperforms both SI and EWC by a large margin. While your setup is slightly different from theirs, it might be interesting to see how this version of Adam+retrain performs.\", \"title\": \"Related Continual Learning Literature\"}",
"{\"metareview\": \"This paper proposes to regularize neural networks in function space rather than in parameter space, a proposal which makes sense and is also different from the natural gradient approach.\\n\\nAfter discussion and considering the rebuttal, all reviewers argue for acceptance. The AC does agree that this direction of research is an important one for deep learning, and while the paper could benefit from revision and tightening the story (and stronger experiments), these do not preclude publishing in its current state.\", \"side_comment\": \"the visualization of neural networks in function space was done profusely when the effect of unsupervised pre-training on neural networks was investigated (among others). See e.g. Figure 7 in Erhan et al. AISTATS 2009 \\\"The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training\\\". This literature should be cited (and it seems that tSNE might be a more appropriate visualization technique for non-linear functions than MDS).\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Borderline accept\"}",
"{\"title\": \"Answer to your rebuttal\", \"comment\": \"Thank you for your comments and the plots you added.\", \"re_re\": \"\\u201cBetter way of penalizing movement in the function space already exists (at least for probability distributions: Natural Gradient)\\\":\\n\\nI agree with your first point that you can measure distance between any two functions and that you don\\u2019t suffer from the local validity of Natural Gradient. I do think however it is only valid in the case of catastrophic forgetting, not in the case of neural network training, since we need to make relatively small steps anyways, and thus we don\\u2019t break the locality assumption.\", \"about_the_kl_divergence_and_natural_gradient\": \"I agree Natural Gradient might not be ideal in all cases (and doesn\\u2019t work if the network does not output distributions). But one could also, for instance, replace the Fisher by the Gauss-Newton approximation of the Hessian to approximate the distance measure between the two functions.\\n\\n\\nWith that said, I am willing to raise my evaluation to a 6. Like R3, I find the idea novel and really interesting, but the paper should be refocused a bit. I understand the goal is to show that this method could be applied to a wide range of setups, but then it fails to be very convincing on those setups.\"}",
"{\"title\": \"Thanks, some more comments\", \"comment\": \"1) Ok, I withdraw the synthetic experiments as a negative point against the paper.\\n\\n2) I'm not so sure about this one. While I accept the synthetic catastrophic forgetting experiment as an established (although perhaps quickly expiring) setup, I'm not sure about the RNN setting. To make a broad claim (\\\"Better performance than ADAM at training recurrent neural networks. This is a new result.\\\"), one needs to establish it on a broad set of setups (NLP tasks, time series, etc.). Otherwise, I don't view this as a significant result, but simply a curious result that might be significant.\\n\\n3) Yes, I do agree with this point. And upon reflection, this result is more significant than I previously realized.\\n\\n\\\"In the title, however, we played around with several options and thought that among them this was the best balance of precision and clarity.\\\"\\nI actually think the current title overly maximizes recall at the expense of precision, and thus is not clear. The title implies a broad claim on function regularization which is not reflected in the paper. \\n\\nConventional gradient descent is actually a special case of Mirror Descent with a Euclidean proximity term. See:\\n-- http://www.princeton.edu/~yc5/ele538_optimization/lectures/mirror_descent.pdf\\n-- https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37013.pdf\\nWhat this submission is essentially proposing is a proximity term in function space. The authors decide to invoke the term Hilbert space, and in doing so I was expecting some deeper treatment of the topic.\\n\\nRegarding \\\\mu: One can remove the first two equations in Section 2 and not lose anything in the exposition. Just start with the third equation. The paper doesn't use the measure-theoretic notation \\\\mu in any interesting way, it just adds more notation. 
\\\\mu appears nowhere else in the paper.\\n\\n\\nIn summary, I am willing to raise my rating to 6 (weak accept).\"}",
"{\"title\": \"We have made some edits to address these concerns\", \"comment\": \"We are grateful for the close reading and helpful review.\\n \\nIt is true that we pursued many directions in this work, and that these results somewhat compete for space. Our high-level goal was to establish (and disseminate) that 1) yes, there are other measures of function space besides the Fisher metric, 2) the L2 distance is actually reasonable to estimate, and 3) this could have many direct applications. We could have more aggressively documented each application, but we thought this would distract from the overall message.\\n \\nWith that said, we present three specific empirical results that we believe are quite significant.\\n \\n1) Near state-of-the-art results on catastrophic forgetting at considerably less computational cost.\\n\\n\\nTo this point, there is a significant advantage of our method that we did not previously note. Unlike the benchmark methods of SI and EWC, our method does not require knowledge of task boundaries. SI and EWC both require this input, but in many real-world applications this is not available, or tasks shift continuously. We have now noted this in the text.\\n\\nThe reviewer noted that our task is somewhat synthetic. While this is true, it was the same task as was presented in the papers of both of our benchmark methods.\\n\\n \\n2) Better performance than ADAM at training recurrent neural networks. This is a new result.\\n\\nPreviously, the HCGD algorithm adjusted the updates proposed by SGD, and we showed that on the sequential MNIST task this adjustment outperformed SGD. We have introduced a new variant of the HCGD algorithm that adjusts the updates proposed instead by ADAM. We document that this scheme improves upon ADAM on the sequential MNIST task.\\n\\n3) The empirical study relating L2 distances to l2 distances. It is surprising to us that this has not been done before, given how often methods are designed to operate on parameters. 
These figures communicate a compelling finding that could be quickly digested but that could nevertheless change how researchers would design a new learning rule, or trust theoretical results relying on Lipschitz bounds.\\n \\nWe believe that these findings are significant, and that they belong together.\", \"responses_to_individual_comments\": \"\\u2018I think the major issue with clarity is the title. The authors use the term \\\"regularizing\\\" in a fairly narrow sense, in particular regularizing the training trajectory to be stable in function space. However, the more dominant usage for regularizing is to regularize the final learned function to some prior, which is not studied or even really discussed in the paper.\\u201d\\n\\nIt is true that our main algorithm is more correctly a form of trajectory constraint than an imposed prior. For this reason our algorithm is called \\u201cHilbert-constrained gradient descent\\u201d rather than, say, \\u201cHilbert-regularized\\u201d. Still, there were a few points where we used \\u201cregularization\\u201d in this broad sense, and we have gone through and edited these for language. In the title, however, we played around with several options and thought that among them this was the best balance of precision and clarity.\\n\\n \\n\\u201cThe use of \\\\mu is a bit disconnected from the rest of the notation.\\u201d\\nWe don\\u2019t observe any typos in this section, and this is standard notation to denote a probability measure. We are not sure we understand this comment, though, and welcome any clarification.\\n\\n \\u201cComputing the empirical L2 distance accurately can also be NP hard. There's no stated guarantee of how large N needs to be to have a good empirical estimate. 
Figure 3 is nice, but I think a more thorough discussion on this point could be useful.\\u201d\\n\\nThis is true; we realize now that the text implied that the convergence scales less than exponentially with N when this might not be the case. Unfortunately, precise guarantees of convergence will depend both on the data distribution and on the network. This is why we took an empirical approach. We have removed the implied claim that arbitrarily precise estimates are not NP-hard. \\n\\n\\u201cL2-Space was never formally defined.\\u201d\\nAt the moment we define the space at the very start of Section 2 by writing the norm that defines the space. It is true that Hilbert spaces are defined by the inner product, but since this is a standard function space we thought the inner product would be apparent from the norm. We have updated the manuscript such that we mention the inner product, as well.\\n\\n\\u201cSection 2.1 isn't explained clearly. For instance, in the last paragraph, the first sentence states \\\"the networks are initialized at very different point\\\", and halfway into the paragraph a sentence states \\\"all three initializations begin at approximately the same point in function space.\\\". The upshot is that Figure 1 doesn't crisply capture the intuition the authors aim to convey\\u201d\\n\\nWe meant different points in parameter space, which correspond to a similar point in function space. We have edited this paragraph to be more clear about its overall relevance.\"}",
"{\"title\": \"We have added new analyses on the L2/l2 ratio and tested whether the L2 distance is indeed decreased\", \"comment\": \"We are grateful for the close reading and helpful review. We like your Pros and Cons list, and would like to respond to your two Cons.\", \"re\": \"\\u201cBetter way of penalizing movement in the function space already exists (at least for probability distributions: Natural Gradient)\\\":\\n \\nThe natural gradient has its strengths, but we disagree that it is universally better. Our method has a few strengths over the natural gradient. First, it can be generalized to regularize the change in function space between any two arbitrary functions, while the natural gradient is set to regularize with respect to only local changes between updates. This is because the 2nd order Taylor expansion of the KL divergence is only valid locally. We exploited this advantage in our catastrophic forgetting section, and used the distance to regularize the functional change between tasks.\\n \\nThe KL divergence also has different properties than the L2 norm, and is not the better choice in all circumstances. If the distributions of two networks are nonoverlapping, the KL divergence is infinite. Imagine, for example, that each output distribution is zero everywhere but a single line, and that the lines of the two distributions are parallel but separated. In this case the L2 norm is well-defined and gets smaller if you pull the lines closer to one another. The KL divergence is simply infinite until the lines overlap, at which point it becomes 0. This behavior is not likely to emerge in the natural gradient setting when the two networks have necessarily very close distributions, but in other settings (like the forgetting task, or otherwise when comparing far distributions like between that of a GAN\\u2019s output and real images) the L2 norm will be better.\", \"detailed_comments\": \"1. 
We have run new experiments to examine how BN and WD affect the L2/l2 ratio. These now appear in Appendix A.3-5. As predicted, BN and WD both have strong effects on how the L2/l2 ratio changes throughout learning. However, their omission seems to actually exacerbate the problem, and the L2/l2 ratio still changes considerably. We discuss the changes in figure captions, and point out in the main text that we have run these controls.\\n\\n2.\", \"regarding_the_approximation_quality\": \"a. The quality of the distance measure approximation depends on the number of validation examples of its empirical estimator, rather than the number of gradient steps. Even as formulated it can be arbitrarily accurate if one uses many examples.\\nb. The new Appendix figure measures this to make sure that the distance is indeed decreased.\", \"regarding_the_comparison_between_hcgd_and_the_natural_gradient\": \"a. We have updated the sequential-MNIST task with a version of HCGD that bootstraps ADAM, rather than SGD. (Rather than taking an L2-regularizing step after an SGD step, we now take it after an ADAM step). Just as the first version outperformed SGD, this new version outperforms ADAM.\\nb. It often takes several refinements on a method before records are set. The natural gradient has been known for decades, and it is only recently with the additional modification of Kronecker factorization that it could be applied to large networks. Since ours is a fundamentally different approach to thinking about function space than the natural gradient, we actually consider this first attempt quite promising. We feel that it is important to get this work out so that a broader community can help think of potential improvements and modifications. Thus, we ask that this work be considered more as the initial introduction of a different approach, rather than a paper fine-tuning an established optimizer.\", \"minor_comments\": \"Thank you for pointing out these errors. 
We have addressed them in the draft.\"}",
"{\"title\": \"New experimental results and general responses\", \"comment\": \"We are grateful for the close reading and helpful review. We have made several changes to the paper in response.\\n \\nFirst, we have introduced a variant of the proposed algorithm that uses the Adam optimizer to take a proposed step, rather than an SGD step. We found that this outperforms standard Adam in training recurrent networks. In the sequential MNIST task, we had previously augmented only SGD with the L2 regularization, and saw that it boosted performance.\", \"regarding_section_3\": \"The method is indeed simple, but has no exact precedent in the literature that we are aware of, either. We have updated the manuscript to underline that this is a novel approach. It performs well, too; our results are very near the state-of-the-art method of Synaptic Intelligence, despite being significantly cheaper and less memory-intensive. (To even test SI on our 64GB box, in fact, we had to decrease the size of the network). There is also one significant advantage of our method that we did not at all emphasize in the paper: it does not require knowing \\u201ctask boundaries\\u201d, the moment when one task ends and another begins. Such knowledge is unavailable in many continual learning applications, including when tasks smoothly deform into one another instead of having sharp breaks. SI and EWC, the benchmark methods we compare to, both require this knowledge. We now emphasize this additional advantage in the paper.\\n \\nAs for section 2, we want to first emphasize that we designed this section to be of wider interest than just to motivate our later algorithm. We received feedback that this work would be relevant to theoreticians whose work depends on the relationship between parameters and output functions. This is a common situation; parameters are easy to analyze and change predictably, while the output function determines performance. 
Section 2 is meant to appeal to the community of neural network researchers interested in empirical characterizations. Aside from our metric, we are not aware of other feasible methods to calculate the distance between two networks\\u2019 functions that work globally. (Only local measures, like the Fisher metric, exist). This is why we were initially more interested in establishing that our measure of distance in function space is actually feasible to calculate, and why we devoted such space to evaluating the convergence properties of its empirical estimator. It is for these researchers that we analyzed the relationship between these two distances, rather than just for exploratory purposes in setting up Section 3.\\n \\nAs you suggested, we investigated whether Figure 2 would change if the network were trained without weight decay. This now appears in Figure A.5. As you predicted, weight decay affects the angle L2/l2 in the third column; the no-WD network traverses larger distances in l2 space, and actually moves less in L2 space than the WD network. Weight decay affects the other columns, as well, and actually removes the negative correlation in the middle column. The L2/l2 ratio still changes considerably throughout training, which is in line with this paper's motivation to consider L2 distances directly. We also followed your other suggestion to present the figure when each epoch is averaged. This now appears as Figure A.3.\\n\\nThank you for the suggestion to discuss the natural gradient first, as motivation. An early draft of this work did indeed frame the work like this. However, we later realized from feedback that the empirical characterizations of section 2 were of wider interest than just their relation to the natural gradient literature. Furthermore, there are many uses of the L2 distance besides a natural-gradient-esque algorithm (such as mitigating catastrophic forgetting). 
Since much of the paper is not directly inspired by or meant to replace the natural gradient, we have decided not to lead with that concept.\\n\\nLastly, thank you for the related reference. This paper penalizes the entropy of a network\\u2019s output distribution to reduce overconfident probabilities. It\\u2019s an interesting idea, and we cite it in the discussion.\"}",
"{\"title\": \"Experimental results are not convincing\", \"review\": \"Although I liked the exploratory part of the paper, I must admit that I found myself confused a few times. The results given in the paper suggest that the proposed HCGD does not demonstrate any advantages on CIFAR-10 and has a limited impact on seq. MNIST. I think that section 3.3 of the paper should be extended and demonstrate some more convincing results.\\nOverall, I am not certain about my assessment. Therefore, I set my confidence level to \\\"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\\\".\", \"update_on_17_nov\": \"Section 2. \\nI am not sure that the results shown in Figure 2 provide more answers than they pose new questions. \\nFor instance, \\\"In particular, the parameter distance between successive epochs is negatively correlated with the L^2 distance for most of optimization (Fig. 2b). The distance from initialization shows a clean and positive relationship, but the relationship changes during optimization\\\" \\nWould it be possible to have a supplementary figure with weight decay switched off? I am not sure why you need it at all since the purpose is not to get state-of-the-art results. Could it also explain the angle for L^2/l^2 shown in the third column since weight decay is something that affects l^2? \\nI am not sure that the discussion of the negative correlation is sufficient. The actual correlation is linked to the stage of convergence, it would be nice to have a figure showing its average value per epoch (you say it is negative for the most part of optimization) and some discussion on its impact for the remaining part of your paper. \\n\\nSection 3.\\nI am not an expert in online learning, this is probably why I don't recognize the novelty of the proposed approach. Is it novel to train networks for new tasks while making the objective function account for the old tasks? 
It sounds like a definition of online learning of multiple tasks. Importantly, here it is done while keeping training data from the old tasks. I understand your arguments about storage, but I find it surprising that your proposed change to the objective function is novel. If it is the case, please emphasize it more and mention that despite its simplicity, this idea is very novel. Otherwise, please cite relevant papers where similar methods were used. \\n\\nI am not sure it is optimal to put Algorithm 1 in experimental results and applications. I don't see it as an application of your observations. I can imagine that the algorithm was inspired by your observations but it is your primary contribution and if possible should be discussed in a separate section. Here, you present it and then discuss how it is related to the natural gradient. \\nPlease consider an alternative presentation where you first discuss the natural gradient and its various related works and algorithms, then present your algorithm and then demonstrate your empirical observations. This presentation might contradict the timeline of the development of your approach but it might help to better connect your work to other works on the same topic. Also, it might help to better show novelties of your approach/observations. \\n\\nPlease comment if you find some interesting connection with [1].\\n\\n[1] \\\"Regularizing neural networks by penalizing confident output distributions\\\" https://arxiv.org/pdf/1701.06548.pdf\", \"update_on_nov_30\": \"I updated my score to 6 and my confidence level to 3.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice empirical motivations but weak proposed solution\", \"review\": \"Summary:\\nThis paper proposes first to measure distances, in an L2 space, between functions computed by neural networks. It then compares those distances with the parameter l2 distances of those networks, and empirically shows that the l2 parameter distance is a poor proxy for distances in the function space. Following those observations, the authors propose to use such a constraint to combat catastrophic forgetting, and show some results on the permuted MNIST task. Finally, they propose the Hilbert-constrained gradient descent (HCGD), a gradient descent algorithm that constrains movement in the function space, and evaluate it on a CNN (CIFAR10) and an LSTM (permuted MNIST).\", \"clarity\": \"The paper is well motivated, clearly written and easy to follow.\", \"novelty\": \"The idea of trying to move in the function space rather than in the parameter space is definitely not new (see the whole literature about Natural Gradient for instance). However, the proposed HCGD seems quite new, but unfortunately it doesn\\u2019t seem to perform well.\", \"pros_and_cons\": [\"The paper is well motivated, not only through the text but also with empirical evidence (section 2).\", \"The paper focuses on an important research direction in deep learning.\", \"This paper proposes a novel algorithm that penalizes movement in the function space.\", \"However, it is not clear if the proposed algorithm actually penalizes the distance in function space, since it is performing a crude approximation of the distance measure (using one step of gradient).\", \"Better way of penalizing movement in the function space already exists (at least for probability distributions: Natural Gradient)\"], \"detailed_comments\": \"1. Batch Normalization and Weight Decay:\\nI have mixed feelings about your experiments in section 2. 
Both Batch Normalization (BN) and Weight Decay (WD) have a regularization effect on the weights. I am wondering if the change in ratio L2/l2 during the course of training is simply caused by the regularization terms getting stronger and stronger (compared to the cross-entropy loss). Also, BN makes the function computed by the network independent of the scale (of each row) of the weight matrices. I do think that running those experiments again without BN and WD would make the argument that \\u201cthe parameter space is a proxy for function space\\u201d more robust. \\n2. About HCGD:\\nThe origins of the HCGD algorithm are extremely similar to the origins of Natural Gradient (NG) (just switch the L2 norm with the KL). The main difference resides in how the proximal formulation (equation 2) is approximated. For NG, one approximates the KL using a 2nd order Taylor expansion and then the proximal formulation is explicitly solved for Delta theta, whereas HCGD takes only a simple gradient step. It is thus not clear how well this step is indeed a good approximation of the distance in function space. For CNNs and LSTMs, K-FAC [1-2], which is a Natural Gradient approximation, has been shown to outperform ADAM, so the proposed approximation might not be good enough, as HCGD doesn't beat ADAM in the experimental setup. One experiment that would be nice to have is to do one update of the parameters in a neural network (using HCGD) and then measure how much you actually moved in the function space. 
\\n[1] Roger Grosse, James Martens, A Kronecker-factored Approximate Fisher Matrix for Convolution Layers, ICML 2016\\n[2] James Martens, Jimmy Ba, Matt Johnson,Kronecker-factored Curvature Approximations for Recurrent Neural Networks, ICLR 2018\", \"minor_comments\": \"Section 2.3: \\u201cone would require require\\u201d -> \\u201cone would require\\u201d\", \"figure_3\": \"\\u201cthat a set batch size\\u201d -> \\u201cthat a fixed batch size\\u201d\\nSection 3.1.1: \\u201cpermuted different on\\u201d -> \\u201cpermuted differently on\\u201d\\nSection 3.2.1: \\u201cthat minimizes equation 6\\u201d -> \\u201cthat minimizes equation 5\\u201d\", \"conclusion\": \"The paper proposes nice empirical evidence than parameter distance is not a good proxy for function distance. However, it is not clear if the proposed algorithm actually fixes this problem.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Core idea is interesting, but the follow-through is kind of scattered with weak results in too many directions.\", \"review\": \"This paper proposes a method for functional regularization for training neural nets, such that the sequence of neural nets during training is stable in function space. Specifically, the authors define an L2 norm (i.e., a Hilbert norm), which can be used to measure distances in this space between two functions. The authors argue that this can aid in preventing catastrophic forgetting, which is demonstrated in a synthetic multi-task variant of MNIST. The authors also show how to regularize the gradient updates to be conservative in function space in standard stochastic gradient style learning, but with rather inconclusive empirical results. The authors also draw upon a connection to the natural gradient.\\n\\n\\n***Clarity***\\n\\nThe paper is reasonably well written. I think the logical flow could be improved at places. I think the major issue with clarity is the title. The authors use the term \\\"regularizing\\\" in a fairly narrow sense, in particular regularizing the training trajectory to be stable in function space. However, the more dominant usage for regularizing is to regularize the final learned function to some prior, which is not studied or even really discussed in the paper.\", \"detailed_comments\": \"-- The notation in Section 2 could be cleaned up. The use of \\\\mu is a bit disconnected from the rest of the notation. \\n\\n-- Computing the empirical L2 distance accurately can also be NP hard. There's no stated guarantee of how large N needs to be to have a good empirical estimate. Figure 3 is nice, but I think a more thorough discussion on this point could be useful.\\n\\n-- L2-Space was never formally defined. \\n\\n-- Section 2.1 isn't explained clearly. 
For instance, in the last paragraph, the first sentence states \\\"the networks are initialized at very different point\\\", and halfway into the paragraph a sentence states \\\"all three initializations begin at approximately the same point in function space.\\\". The upshot is that Figure 1 doesn't crisply capture the intuition the authors aim to convey.\\n\\n\\n***Originality***\\n\\nStrictly speaking, the proposed formulation is novel as far as I am aware. However, the basic idea has been in the air for a while. For instance, there is some related work in RL/IL on functional regularization:\\n-- https://arxiv.org/abs/1606.00968\\n\\nThe proposed formulation is, in some sense, the obvious thing to try (which is a good thing). The detailed connection to the natural gradient is nice. I do wish that the authors made stronger use of properties of a Hilbert space, as the usage of Hilbert spaces is fairly superficial. For instance, one can apply operators in a Hilbert space, or utilize an inner product. It just feels like there was a lost opportunity to really explore the implications.\\n\\n\\n***Significance***\\n\\nThis is the place where the contributions of this paper are most questionable. While the multi-task MNIST experiments are nice in demonstrating resilience against catastrophic forgetting, the experiments are pretty synthetic. What about a more \\\"real\\\" multi-task learning problem?\\n\\nMore broadly, it feels like this paper is suffering from a bit of an identity crisis. It uses regularizing in a narrow sense to generate conservative updates. It argues that this can help in catastrophic forgetting. It also shows how to employ this to construct the standard bounded-update gradient descent rules, although without much rigorous discussion for the implications. There are some nice empirical results on a synthetic multi-task learning task, and inconclusive results otherwise. There's a nice little discussion on the connection to the natural gradient. 
It argues that this form of regularization lives in a Hilbert space, but the usage of a Hilbert space is fairly superficial. All in all, there are some nice pieces of work here and there, but it's altogether neither here nor there in terms of an overall contribution. \\n\\n\\n***Overall Quality***\\n\\nI think if the authors really pushed one of the angles to a more meaningful contribution, this paper would've been much stronger. As it stands, the paper just feels too scattered in its focus, without a truly compelling result, either theoretically or empirically.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BkgPajAcY7 | No Training Required: Exploring Random Encoders for Sentence Classification | [
"John Wieting",
"Douwe Kiela"
] | We explore various methods for computing sentence representations from pre-trained word embeddings without any training, i.e., using nothing but random parameterizations. Our aim is to put sentence embeddings on more solid footing by 1) looking at how much modern sentence embeddings gain over random methods---as it turns out, surprisingly little; and by 2) providing the field with more appropriate baselines going forward---which are, as it turns out, quite strong. We also make important observations about proper experimental protocol for sentence classification evaluation, together with recommendations for future research. | [
"sentence embeddings"
] | https://openreview.net/pdf?id=BkgPajAcY7 | https://openreview.net/forum?id=BkgPajAcY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkll3LQJgE",
"rygDbIX90m",
"rJxtPTCtCX",
"r1eGVpCKRm",
"H1e7fT0K0X",
"HyxhXnAY0X",
"Syx4KoAYCQ",
"HygtcSqw6m",
"SyeP-oxUa7",
"r1glPhYWTX",
"SyxAjuu9h7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544660648043,
1543284222521,
1543265633207,
1543265577563,
1543265546989,
1543265316456,
1543265148068,
1542067600945,
1541962494743,
1541672024113,
1541208230094
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper810/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper810/Authors"
],
[
"ICLR.cc/2019/Conference/Paper810/Authors"
],
[
"ICLR.cc/2019/Conference/Paper810/Authors"
],
[
"ICLR.cc/2019/Conference/Paper810/Authors"
],
[
"ICLR.cc/2019/Conference/Paper810/Authors"
],
[
"ICLR.cc/2019/Conference/Paper810/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper810/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper810/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper810/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper provides a new family of untrained/randomly initialized sentence encoder baselines for a standard suite of NLP evaluation tasks, and shows that it does surprisingly well\\u2014very close to widely-used methods for some of the tasks. All three reviewers acknowledge that this is a substantial contribution, and none see any major errors or fatal flaws.\\n\\nOne reviewer had initially argued the experiments and discussion are not as thorough as would be typical for a strong paper. In particular, the results are focused on a single set of word embeddings and a narrow class of architectures. I'm sympathetic to this concern, but since there don't seem to be any outstanding concerns about the correctness of the paper, and since the other reviewers see the contribution as quite important, I recommend acceptance. [Update: This reviewer has since revised their review to make it more positive.]\\n\\n(As a nit, I'd ask the authors to ensure that the final version of the paper fits within the margins.)\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Limited but worthwhile contribution\"}",
"{\"title\": \"There is some relation.\", \"comment\": \"There is some relation. LSH is a collection of methods for dimension reduction and is often used for clustering. In contrast, we are increasing the dimension of embeddings in order to provide more features for downstream tasks.\"}",
"{\"title\": \"Thank you for your feedback and review!\", \"comment\": \"Thank you for your feedback and review! We have added standard deviations for the experiments.\\n\\nTrying high-dimensional word embeddings is a very interesting idea. We did not have the time to implement this before the rebuttal deadline (we would like them to be trained on nearly the same amount of data as the released GloVe embeddings are), but do plan to try this out as soon as we can. Thank you for this idea! In Appendix E, we did experiment (and compare with BOREP and RandLSTM) with pooling 4096-dimensional random word embeddings. They seem to outperform the other models for the same dimensionality, which provides some evidence that large, trained embeddings could achieve strong performance. It would be interesting to see if BOREP, RandLSTM, and ESN improve as well when using large embeddings.\"}",
"{\"title\": \"Thank you for your review and comments! (2/2)\", \"comment\": \"Regarding initializing the biases, we don't see the weakness in initializing our biases as we do - we use the standard initialization procedure. If we initialize the forget gate bias as you suggest (from Jozefowicz et al. 2015), we might even get better results. However, the reason for initializing the biases in this way is because of gradient flow during optimization. Since we're not doing any training, it's not that relevant here.\\n\\nAs per your suggestion, we did update the analysis in the new version, thank you. All of these tasks have the same amount of training/validation/testing data and are balanced so the amount of training data has no effect on performance. It seems that random models do best for tasks requiring picking up on certain words: we can see which tasks these are by looking at how well BOREP does compared to the recurrent models (so WC, Tense, SubjNum, ObjNum are good candidates for this type of task). In these tasks, random models are all very competitive to the trained encoders. If one looks at the tasks where there is the largest difference between ESN and max(IS/ST-LN), which are SOMO, CoordInv, BShift, TopConst, it seems that these all have in common that they do require some sequential knowledge. We say this because the BOREP baseline lags behind the recurrent models significantly for many of these (especially when considering where the majority-vote baseline is) and it also makes sense that this is the case when one looks at the definitions of these tasks. This also makes intuitive sense, as this type of knowledge is much harder to learn and is not provided by just pure word embeddings, and so we'd expect the trained models to have an edge here, which seems to bear out in these experiments. We also added further analysis of various other questions in the appendix. 
We hope you find the updated and more detailed analysis more to your liking.\\n\\nWe have added confidence intervals in our latest version, and we changed W^r to W^h. Thanks for pointing these out.\"}",
"{\"title\": \"Thank you for your review and comments! (1/2)\", \"comment\": \"Thank you for your review and comments! We have incorporated your feedback into our latest draft.\\n\\nWe focused on recurrent architectures in this paper because that is the type of network used by the top performing models within this evaluation framework. Therefore, by using a recurrent model, we capture the prior of these state-of-the-art models and this gives us a better understanding of how much these published approaches benefit from learning. Models like InferSent (Conneau et al. 2017), GenSen (Subramanian et al. 2018), SkipThought (Kiros et al. 2015), Dissent (Nie et al. 2017), Byte mLSTM (Radford et al. 2017), all use recurrent models. While there are some architectures in the literature that use CNNs (like Gan et al. (2016)) they are not among the current state-of-the-art. The point of this work is to provide baselines - which means CNNs and Transformers can be compared to our numbers, and should hopefully be able to beat them. \\n\\nAnother attractive reason for using recurrent networks is that they have very few hyperparameters to tune, in fact the only hyperparameter we varied was the hidden size in our experiments (and we detailed what size this was in our results). Architectures like CNNs or transformers require more design decisions and do not have a \\\"default architecture\\\" which leads to a lot more experimentation and tuning.\\n\\nRegarding using paragraph vector, it actually is a trained model and the results on these downstream tasks are not very competitive (see Hill et al. 2016 for the numbers). We'd be happy to include it in our results, but we don't think it would add to the message of the paper.\\n\\nWe do agree that sentence embeddings are more general than \\\"learned non-linear recurrent combinations\\\" and have changed this in the current iteration of the paper. 
Thanks for pointing this out!\\n\\nWe also agree that the comparison of ST-LN isn't quite as even as we would like, which is why we did make that note in our original submission that ST-LN could potentially be higher if they used GloVe embeddings. The problem with making this comparison is simply that reproducing ST-LN takes about a month of computation time. However, others have experimented with ST-LN with Glove embeddings. Results for this model are in https://arxiv.org/pdf/1707.06320.pdf for example, with an older evaluation setup more comparable to numbers in http://aclweb.org/anthology/Q16-1002. GenSen also experiments with a SkipThought model, and while not initialized with GloVe, they project from GloVe into their embedding space. They actually found this to work better than just using GloVe in their experiments (confirmed through correspondence with the authors). We do compare to the full GenSen model in the appendix, and their version that has just ST is included in their paper. It was one of their baselines which was handily beaten by their full model. So while a direct comparison is tricky, we can safely say that adding GloVe to ST-LN would not elevate the model to a level that would change the message of this paper.\"}",
"{\"title\": \"Thank you for all the feedback!\", \"comment\": \"Thank you for all the feedback!\\n\\nWe have softened the claims about the usefulness of some of the SentEval tasks for evaluation in our paper. We do think these tasks could be useful as evaluations in some situations, and strong performance should definitely be possible for very discriminative sentence embeddings. Our motivation for that take-away was also based on other observations of these tasks (like that they are too sentiment-focused or that they are nearly solved) that have been brought to light in other works.\\n\\nWe found initialization to matter somewhat in our experiments which is why we were very explicit about how we initialized in our submission, and we have since added some more analysis of this in the paper. In Appendix D, we compare six different initialization schemes (Heuristic (the one used in the paper currently), Uniform, Normal, Orthogonal, He (He et al. 2015), and Xavier (Glorot & Bengio, 2010)). We found that BOREP is more robust to initialization than RandLSTM and prefers Orthogonal initialization. RandLSTM performs poorly with Normal initialization (and also Uniform but to a much lesser degree), and seems to perform best with He initialization.\\n\\nYour idea about using random word embeddings is very interesting! In fact, we added this experiment in the newest version of our paper. We included an analysis of completely random word embeddings along with the random architectures. Like in the initialization experiments, we experimented with six different methods to initialize both the word embeddings and the parameters of the architectures. The experiments are in Appendix E. We also experimented with pooling 4096-dimensional embeddings (randomly sampled), which performs very well compared to BOREP and RandLSTM. Overall, it really depends on the task how much the pre-trained embeddings help. 
However, they do seem to help more for tasks measuring semantic similarity (which makes sense since they can make use of unseen embeddings if both unseen embeddings are in the two sentences being compared). For some tasks like MRPC or SICK-E, the difference between using random word embeddings or pretrained ones is small (0.6 and 0.7 respectively), but for others like SST2 or MR it can be pretty large (10 and 8.8 points respectively). The average gain across tasks is 5.4 points.\\n\\nWe agree that we should have added some more analysis, and we have done so in the latest version of the draft. It's difficult to say what general knowledge IS and ST models have learned and how applicable it is for the downstream tasks. This is the motivation for probing tasks (Adi et al. 2017, Conneau et al. 2018) which help measure this to a degree and show that IS and ST are able to better capture sequential information. Random networks do about as well as these pretrained encoders on tasks that can be solved just based on word content. Therefore, if the downstream tasks rely mostly on word content (or perhaps that and a type of sequential information that is not learned by IS or ST), we would expect the difference between a random encoder and IS/ST to be small.\\n\\nThank you for your other critiques. We have addressed all of these in the newest version of the paper.\"}",
"{\"title\": \"Updated version\", \"comment\": \"To all reviewers:\\n\\nThank you so much for your feedback. We have made both minor (but important) and more substantial improvements to the paper due to the thoughtful feedback you have provided. These larger improvements include experiments with different random initializations for BOREP and RandLSTM; experiments for BOREP, BOE (300 dim), BOE (4096 dim), and RandLSTM with different initializations and random word embeddings; adding standard deviations to the experimental results (and also bolded the top numbers to make interpreting the tables easier); adding more analysis regarding the probing experiments; and we included a detailed analysis of how max pooling over padding affected reported results in various papers. We hope that you find the paper improved and a more interesting read. Thank you again for your comments.\"}",
"{\"comment\": \"Is there any relationship with Locality Sensitive Hashing?\", \"title\": \"Relation with LSH\"}",
"{\"title\": \"interesting investigation with worthwhile contribution; some suggested areas of improvement\", \"review\": [\"This paper is about exploring better baselines for sentence-vector representations through randomly initialized/untrained networks. I applaud the overall message of this paper that we need to evaluate our models more thoroughly and have better baselines. The experimentation is quite thorough and I like that you\", \"1) explored several different architectures\", \"2) varied the dimensionality of representations\", \"3) examine representations with probing tasks in the Analysis section.\", \"Main Critique\", \"In your takeaways you say that, \\u201cFor some of the benchmark datasets, differences between random and trained encoders are so small that it would probably be best not to use those tasks anymore.\\u201d I don\\u2019t think this follows from your results. Just because current trained encoders do not perform better than random encoders on these tasks doesn\\u2019t in itself mean these tasks aren\\u2019t good evaluation tasks. These tasks could be faulty for other reasons, but just because we have no better technique than random encoders currently, doesn\\u2019t make these evaluation tasks not worthwhile. Perhaps you could further examine what features (n-gram, etc.) it takes to do well on these tasks in order to argue that they shouldn\\u2019t be used.\", \"In your related work section you say that \\u201cWe show that a lot of information may be crammed into vectors using randomly parameterized combinations of pre-trained word embeddings: that is, most of the power in modern NLP systems is derived from having high-quality word embeddings, rather than from having better encoders.\\u201d Did you run experiments with randomly initialized embeddings? This paper (https://openreview.net/forum?id=ryeNPi0qKX) finds that representations from LSTMs with randomly initialized embeddings can perform quite well on some transfer tasks. 
I think in order to make such a claim about the power of high-quality word embeddings you should include numbers comparing them to randomly initialized embeddings.\", \"Questions\", \"Did you find that your results were sensitive to the initialization technique used for your random LSTMs / projections?\", \"Do you have a sense of why random non-linear features are able to perform well on these tasks? What kind of features are the skip-thought and InferSent representations learning if they do not perform much better? It\\u2019s interesting that many of the random encoder methods outperform the trained models on word content. I think you could discuss these Analysis section findings more.\", \"Other Critiques\", \"In the introduction, instead of simply describing what is commonly done to obtain and evaluate sentence embeddings, it would be better to include a sentence or two about the motivation for sentence embeddings at all.\", \"The first sentence, \\u201cSentence embeddings are learned non-linear recurrent combinations of pre-trained word embeddings\\u201d, doesn\\u2019t seem to be true as BOE representations are also sentence embeddings and CNNs/transformers could also work. 
\\u201cNon-linear\\u201d and \\u201crecurrent\\u201d are not inherent requirements for sentence embeddings, but just techniques that researchers commonly use.\", \"In the second paragraph of the introduction, instead of saying \\u201cNatural language processing does not yet have a clear grasp on the relationship between word and sentence embeddings\\u2026\\u201d it might be better to say \\u201cNLP researchers\\u201d or the \\u201cNLP community\\u201d instead of \\u201cNLP\\u201d, as a field doesn\\u2019t have a clear grasp.\", \"In the introduction: \\u201cIt is unclear how much sentence-encoding architectures improve over the raw word embeddings, and what aspect of such architectures is responsible for any improvement.\\u201d It would also be good to mention that it\\u2019s unclear how much the training task / procedure also affects improvements.\", \"You could describe more about applications of reservoir computing in your related work section, as it\\u2019s been used in NLP before.\", \"I don\\u2019t think you actually ever describe the type of data that InferSent is trained on, only that it is \\u201cexpensive\\u201d annotated data. It might be useful to add a sentence about natural language inference for clarity.\", \"In the conclusion, change \\u201cperformance improvements are less than 1 and less than 2 points on average over the 10 SentEval tasks, respectively\\u201d to \\u201cperformance improvements are less than 2 percentage points on average over the 10 SentEval tasks, respectively\\u201d\", \"It would be nice if you bolded/underlined the best performing numbers in your results tables.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting results though lacks thorough analysis\", \"review\": \"This paper proposes that randomly encoding a sentence using a set of pretrained word embeddings is almost as good as using a trained encoder with the same embeddings. This is shown through a variety of tasks where certain tasks perform well with a random encoder and certain ones don't.\\n\\nThe paper is well written and easy to understand and the experiments show interesting findings. There is a good analysis of how the size of the random encoder affects performance which is well motivated by Cover's theorem.\\n\\nHowever, the random encoders that are tested in the paper are relatively limited to random projections of the embeddings, a randomly initialized LSTM and an echo state network. Other comparisons would make the results significantly more interesting and would move away from the big assumption stated in the first sentence, i.e. that sentence embeddings are: \\\"learned non-linear recurrent combinations\\\". Some major models that are missed by this include paragraph vectors (which do not require any initial training if initialized with pretrained word embeddings), CNNs and Transformers. Given this, the takeaways from this paper seem quite limited to recurrent representations and it's unclear how they would generalize to other representations.\\n\\nAn additional problem is that the paper states that ST-LN used different and older word embeddings, which may make the comparison flawed when compared with the random encoders. In this case, the only fairly trained sentence encoder that is compared with is InferSent. The RandLSTM also has an issue in that the biases are initialized around zero, whereas it's well known that using an initially higher forget gate bias significantly improves the performance of the LSTM.\\n\\nFinally, the analysis of the results seems weak. 
The tasks are very different from each other and no reason or potential explanation is given why certain tasks are better than others with random encoders, except for SOMO and CoordInv. E.g. Could some tasks be solved by looking at keywords or bigrams? Do some tasks intrinsically require longer term dependencies? Do some tasks have more data?\", \"other_comments\": \"- The results and especially random encoder results should be shown with confidence intervals.\\n- Section 3.1.3 the text refers to W^r but that does not appear in any equations.\\n\\n=== After rebuttal ===\\nThanks for adding the additional experiments (particularly with fully random embeddings) and result analyses to the paper. I feel that this makes the paper stronger and have raised my score accordingly.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Strong, clear paper with worthwhile contribution\", \"review\": \"This paper tests a number of untrained sentence representation models - based on random embedding projections, randomly-initialized LSTMs, and echo state networks - and compares the outputs of these models against influential trained sentence encoders (SkipThought, InferSent) on transfer and probing tasks. The paper finds that using the trained encoders yields only marginal improvement over the fully untrained models.\\n\\nI think this is a strong paper, with a valuable contribution. The paper sheds important light on weaknesses of current methods of sentence encoding, as well as weaknesses of the standard evaluations used for sentence representation models - specifically, on currently-available metrics, most of the performance achievements observed in sentence encoders can apparently be accomplished without any encoder training at all, casting doubt on the capacity of these encoders - or existing downstream tasks - to tap into meaningful information about language. The paper establishes stronger and more appropriate baselines for sentence encoders, which I believe will be valuable for assessment of sentence representation models moving forward. \\n\\nThe paper is clearly written and well-organized, and to my knowledge the contribution is novel. I appreciate the care that has been taken to implement fair and well-controlled comparisons between models. Overall, I am happy with this paper, and I would like to see it accepted.\", \"additional_comments\": \"-A useful addition to the reported results would be confidence intervals of some kind, to get a sense of the extent to which the small improvements for the trained encoders are statistically significant.\\n\\n-I wonder about how the embedding projection method would compare to simply training higher-dimensional word embeddings from the start. 
Do we expect substantial differences between these two options?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HylDpoActX | N-Ary Quantization for CNN Model Compression and Inference Acceleration | [
"Günther Schindler",
"Wolfgang Roth",
"Franz Pernkopf",
"Holger Fröning"
] | The tremendous memory and computational complexity of Convolutional Neural Networks (CNNs) prevents the inference deployment on resource-constrained systems. As a result, recent research focused on CNN optimization techniques, in particular quantization, which allows weights and activations of layers to be represented with just a few bits while achieving impressive prediction performance. However, aggressive quantization techniques still fail to achieve full-precision prediction performance on state-of-the-art CNN architectures on large-scale classification tasks. In this work we propose a method for weight and activation quantization that is scalable in terms of quantization levels (n-ary representations) and easy to compute while maintaining the performance close to full-precision CNNs. Our weight quantization scheme is based on trainable scaling factors and a nested-means clustering strategy which is robust to weight updates and therefore exhibits good convergence properties. The flexibility of nested-means clustering enables exploration of various n-ary weight representations with the potential of high parameter compression. For activations, we propose a linear quantization strategy that takes the statistical properties of batch normalization into account. We demonstrate the effectiveness of our approach using state-of-the-art models on ImageNet. | [
"low-resource deep neural networks",
"quantized weights",
"weight-clustering",
"resource efficient neural networks"
] | https://openreview.net/pdf?id=HylDpoActX | https://openreview.net/forum?id=HylDpoActX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxK23JNgE",
"HJgKRv-PkV",
"HygC7nFt07",
"r1lODsYKAQ",
"rylKOcKtR7",
"B1effYtYAm",
"BJx34liHa7",
"B1xV5ifgT7",
"B1gjSZHh2X",
"rJgkkgx15Q"
],
"note_type": [
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544973489204,
1544128464734,
1543244838424,
1543244640020,
1543244400914,
1543244042301,
1541939251742,
1541577612141,
1541325122950,
1538355159501
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper809/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper809/Authors"
],
[
"ICLR.cc/2019/Conference/Paper809/Authors"
],
[
"ICLR.cc/2019/Conference/Paper809/Authors"
],
[
"ICLR.cc/2019/Conference/Paper809/Authors"
],
[
"ICLR.cc/2019/Conference/Paper809/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper809/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper809/AnonReviewer3"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The submission proposes a hierarchical clustering approach (nested-means clustering) to determine good quantization intervals for non-uniform quantization. An empirical validation shows improvement over a very closely related approach (Zhu et al, 2016).\\n\\nThere was an overall consensus that the literature review was insufficient in its initial form. The authors have proposed to extend it somewhat. Other concerns are related to the novelty of the technique (R4 was particularly concerned about novelty over Zhu et al, 2016).\\n\\nTwo reviewers were against acceptance, and one was positive about the paper. Due to the overall concerns about the novelty of the approach, and that these concerns were confirmed in discussion after the rebuttal, this paper is unlikely to meet the threshold for acceptance to ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Area chair recommendation\"}",
"{\"comment\": \"To add to the list of missing references, this paper also does n-ary quantization but it does not use nested means.\", \"https\": \"//arxiv.org/abs/1811.04985\", \"title\": \"Missing references\"}",
"{\"title\": \"Regarding your concerns\", \"comment\": [\"Many thanks for the valuable feedback, we addressed all concerns in the revised version of the paper. In particular:\", \"Ablation study: we agree, an ablation study is required to show the actual benefits of the nested-means clustering. We added the study in the revised submission. However, we were not able to finish the study on the activation clipping interval but we would include it in the next revision as well.\", \"Weight and activation quantization: both, weight and activation quantization are required for inference acceleration. We added more information on how to efficiently calculate these representations for the inference.\", \"Model size and FLOPs: we include an evaluation of both in Sec. 6+7\", \"Related work: selection was limited due to space constraints, but we now also included the provided references.\"]}",
"{\"title\": \"Regarding your questions\", \"comment\": \"We appreciate your feedback on our initial submission. Regarding your questions:\\n\\n1. Gaussian distribution: We observe that l2-regularized weights are close to a zero-mean Gaussian, but actually we only assume that weights are symmetrically distributed around zero (the assumption is only used in nested-means clustering for the initial split at zero). That is, our clustering is also compatible with non-Gaussian distributions and we rephrased our text to be clearer in this regard. Nevertheless, the empirical observation that weights are close to Gaussian is seconded by other work [1][2].\\n\\n2. The focus of this work is on the concept of this quantization scheme, but we admit that more details on reduction of memory footprint and computational workload would be helpful, which is now included in the revised submission (see Table 6 and 7). A detailed study on inference performance on multiple architectures is beyond the scope of this work, as in our experience such experiments require various code optimizations. Otherwise, there would be little value in reporting such performance numbers.\\n\\n3. This is correct, the activation quantization is a standard way for transforming floating-point values into an integer format. Section 4 is discussing an appropriate clipping interval (including how to select the interval) that can filter out the outliers. Our experimental results indicate that this selection is appropriate.\\n\\n4. Sparsity refers to the percentage of zero-valued elements in the weights. For instance, 60% sparsity means that 60% of the weights are zero.\\n\\n[1] Chaim Baskin et. al. UNIQ: uniform noise injection for the quantization of neural networks \\n[2] Charles Blundell et. al. Weight Uncertainty in Neural Networks\"}",
"{\"title\": \"Novelty, comparison and other comments\", \"comment\": [\"Many thanks for the detailed feedback, which helped us to (hopefully) improve the revised version of the paper\", \"Novelty: the difference you point out to (Zhu et al, 2016) is correct, we adopted the gradient-based scaling factors. We evaluated several ways of obtaining the scaling factors but the approach of (Zhu et al, 2016) is the best performing. Our contributions are as follows: (1) a novel clustering approach that achieves better performance and allows for configurable quantization levels without additional hyperparameters. As a result, we achieve 2.6% better Top1 accuracy (Inception on ImageNet) for the ternary (2-bit encoding) representation. The configurable quantization levels enables the quaternary (2-bit encoding) representation which achieves 3.8% better Top1 accuracy without increasing quantization footprint. (2) Activation quantization by arguing about appropriate clipping intervals. (3) An analysis of the inference workload using reduce-scale architecture that minimizes the number of multiplications and substantially reduces the amount of additions.\", \"Comparison to (Zhu et al, 2016): as part of our result discussion we compare to (Zhu et al, 2016), with the same training parameters as for our quantization. The accuracy we obtain is actually higher than the one reported in (Zhu et al, 2016), most likely because we used adaptive learning rate. We hope that this methodology demonstrates the improvement of this quantization compared to prior work.\", \"Activation quantization: we quantize activations differently to weights by a simple linear transformation, because non-uniform activations are extremely difficult to implement efficiently for inference. 
We addressed this issue in the revised submission.\", \"References: please refer to the general comments, where we discuss our limited selection due to space constraints.\", \"Notation: we changed the notation to a less cluttered notation.\", \"Hyperparameter t: this is correct, t is shared across all layers. However, the number of hyper parameters increases if multiple quantization levels are used.\", \"Batchnormalization: the output of the batchnorm layer is the input for the next convolution layer. Hence, we don\\u2019t have to quantize activations *before* the batchnorm layer in order to accelerate convolutions.\"]}",
"{\"title\": \"General comments to the submission\", \"comment\": \"- We would like to clarify an important difference to previous work that we might not have expressed clearly before. While most recent related work on quantization focuses on binarization and related concepts, which are in particular based on uniform quantization and result in computations based on population count instructions (XNOR/AND and similar work [1][2]), our concept is based on non-uniform quantization (similar to the one proposed in [3][4]) and results in reduce-scale computations [5]. As a result, we can avoid the costly popcount and instead rely on many additions followed by one multiplication per quantisation level and per output feature. As additions are much cheaper than multiplications [6], this concept directly addresses inference acceleration.\\n\\n- Our main results are: (1) we can substantially reduce model footprint and the computational workload (number of operations respectively their precision/type/bitwidth), (2) nested-means seems to be very suitable for neural networks quantization as it partitions in a way that large weights are accurately represented, (3) the resulting performance of such quantized models outperforms prior work, including LQ-Net and TTQ (which we compared by their reported performance and by our own training experiments). We believe these insights to be of value for the research community.\\n\\n-References: There is more work than can be covered given the existing space constraints, so we faithfully selected the most important work (according to our opinion). We believe Table 8 to present a comprehensive overview, but would be happy to extend this as long as readability is maintained. Furthermore, we included the references provided by the reviewers. We believe LQ-Net to be the currently most advanced work, which we actually outperform in accuracy.\\n\\n[1] Matthieu Courbariaux and Yoshua Bengio. 
Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. \\n[2] Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. \\n[3] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. \\n[4] Incremental network quantization: Towards lossless cnns with low-precision weights. \\n[5] G. Schindler, M. Z\\u00f6hrer, F. Pernkopf, and H. Fr\\u00f6ning. Towards efficient forward propagation on resource-constrained systems.\\n[6] Mark Horowitz. 1.1 computing\\u2019s energy problem (and what we can do about it).\"}",
"{\"title\": \"Limited novelty\", \"review\": \"Summary: This paper proposes a technique for quantizing the weights and activations of a CNN. The main contribution is in replacing the heuristic to find good quantization intervals of (Zhu et al, 2016) with a different heuristic based on a hierarchical clustering algorithm, and empirically validating its effectiveness.\", \"strenghts\": [\"The proposed nested-means heuristic is simple and makes sense intuitively.\", \"The experiments on two modern architectures seem solid and demonstrate good empirical performance.\"], \"weaknesses\": [\"The main weakness is the limited novelty of this paper. The proposed setup is almost identical to the one in (Zhu et al, 2016), except for the replacement of the heuristic to find quantization intervals with another one. While the experiments demonstrate the empirical effectiveness of the method as a whole, what is missing is a direct, controlled comparison between the original heuristic and the proposed one. Now it is hard to tell whether the accuracy increases are obtained through the proposed adaptation or because of other factors such as a better implementation or longer training.\", \"In section 4, it is not made clear whether the activations are quantized according to the same scheme as the weights (apart from the issue of selecting a good clipping interval, which is addressed).\", \"The paper is a bit short on references, considering the many recent works on quantized neural networks.\"], \"minor_comments_and_questions\": [\"The wording is sometimes imprecise, making some arguments hard to follow. 
Two examples:\", \"-- \\\"Lowering the learning rate for re-training can diminish heavy changes in the weight distribution, at the cost of longer time to converge and the risk to get stuck at plateau regions, which is especially critical for trainable scaling factors\\\"\", \"-- \\\"This approach is beneficial because it defines cluster thresholds which are influenced by large weights that were shown to play a more important role than smaller weights (Han et al., 2015b)\\\"\", \"The title says \\\"for compression and inference acceleration\\\", so it would be nice if the paper reported some compression and timing metrics in the experiments section.\", \"The notation in section 3.1 is overly complicated and could probably be simplified a bit for readability.\", \"Section 3.3: \\\"However, having an additional hyperparameter t_i for each scaling factor alpha_i renders the mandatory hyperparameter tuning infeasible.\\\" -> From section 4.2 in (Zhu et al, 2016), I believe the constant factor t is shared across all layers, making it only a single hyperparameter.\", \"Last paragraph of section 4: \\\"(Cai et al., 2017) experimentally showed that the pre-activation distribution after batch normalization are all close to a Gaussian with zero mean and unit variance. Therefore, we propose to select a fixed clipping parameter gamma.\\\". -> But what about the activations *before* the batchnorm layer, where the assumption of zero mean and unit variance does not hold?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"in-depth analysis is needed for this paper\", \"review\": \"This paper is about CNN model compression and inference acceleration using quantization. The main idea is to use nested clustering for weight quantization; more specifically, it partitions the weight values by recursively splitting the weights at the arithmetic means of each weight cluster.\", \"i_have_several_questions_for_this_paper\": \"1) the main algorithm is mainly based on the hypothesis that the weights follow a Gaussian distribution. What happens if the weights are not Gaussian, such as a skewed distribution? It seems the outliers will bring lots of issues for this nested clustering for partitioning the weight values.\\n\\n2) Since the paper is on inference acceleration, there is no real inference time result. I think reporting some real inference times on these quantized models and showing their inference time speedup would be interesting.\\n\\n3) Activation quantization in Section 4 is a standard way for quantization, but I am curious how to filter out the outliers, and how to set the clipping interval?\\n\\n4) I am not sure what the 'sparsity' in Table 2 means. Does this quantization scheme introduce many zeros? Or does sparsity correspond to the compression ratio? If that is the case, then many quantization algorithms can actually achieve better compression ratios with 2-bit quantization.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A good work in CNN model compression\", \"review\": \"This paper proposes to use n-ary representations for convolutional neural network model quantization. A novel strategy of nested-means clustering is developed to update weights. Batch normalization is also considered in the activation quantization. Experiments on both weight quantization and activation quantization are conducted and show effectiveness.\", \"strengths\": \"1.\\tThe idea of nested-means clustering is interesting, which somehow shows its effectiveness.\\n2.\\tState-of-the-art experimental results.\\n3.\\tThe representation is excellent, and it is easy to follow.\", \"concerns\": \"1.\\tThough the experiment study seems solid, an ablation study is still missing. For example, how important is the nested-means clustering technique? What is the effect if replacing it with the original one or with other clustering methods? What will happen if expanding the interval in the quantization of activation? All these kinds of questions are hard to answer without an ablation study.\\n2.\\tIt is not clear how the weight and activation quantization are addressed together.\\n3.\\tIf counting the first and last layers, what is the size of the model (the number of parameters)?\\n4.\\tSimilarly, what are the FLOPs in different settings of experiments? This seems missing.\\n5.\\tWhen discussing the related work about model compression, there are important references missing. I just list two references in the latest vision and learning literature:\\n[Ref1] X. Lin et al. Towards accurate binary convolutional neural network. NIPS 2017\\n[Ref2] Z. Liu et al. Bi-Real Net: Enhancing the Performance of 1-bit CNNs with Improved Representational Capability and Advanced Training Algorithm. ECCV 2018.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"comment\": \"Pos:\", \"1\": \"Some important references are missing for activation quantization. These two papers propose to learn activation clip scales and have observed significant performance boost.\\n[1]: PACT: PARAMETERIZED CLIPPING ACTIVATION FOR QUANTIZED NEURAL NETWORKS.\\n[2]: Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)\", \"2\": \"The quantization functions for weights and activations are somehow incremental.\", \"neg\": \"\", \"3\": \"In my experience, adding \\\"0\\\" into representation is extremely important to the final performance. But I did not find the results in Table 4 using \\\"Quaternary\\\" when comparing to other approaches. I think this is a little unfair.\", \"4\": \"Adding \\\"0\\\" into representation is a trade-off between accuracy and inference efficiency. Because you cannot merely employ XNOR operations in bit-wise operations. Specifically, you have to use AND and XNOR operations with judgement, which increases implementation difficulty on hard-ware platforms.\", \"title\": \"Some positive and negative comments.\"}"
]
} |
|
HkgDTiCctQ | Knowledge Distillation from Few Samples | [
"Tianhong Li",
"Jianguo Li",
"Zhuang Liu",
"Changshui Zhang"
] | Current knowledge distillation methods require full training data to distill knowledge from a large "teacher" network to a compact "student" network by matching certain statistics between "teacher" and "student" such as softmax outputs and feature responses. This is not only time-consuming but also inconsistent with human cognition in which children can learn knowledge from adults with few examples. This paper proposes a novel and simple method for knowledge distillation from few samples. Taking the assumption that both "teacher" and "student" have the same feature map sizes at each corresponding block, we add a $1\times 1$ conv-layer at the end of each block in the student-net, and align the block-level outputs between "teacher" and "student" by estimating the parameters of the added layer with limited samples. We prove that the added layer can be absorbed/merged into the previous conv-layer \hl{to formulate a new conv-layer with the same size of parameters and computation cost as previous one. Experiments verifies that the proposed method is very efficient and effective to distill knowledge from teacher-net to student-net constructing in different ways on various datasets. | [
"knowledge distillation",
"few-sample learning",
"network compression"
] | https://openreview.net/pdf?id=HkgDTiCctQ | https://openreview.net/forum?id=HkgDTiCctQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxXmGQlx4",
"rye7rQUc1V",
"r1ldl_Nqy4",
"HJlp2q150Q",
"S1gj3tk50X",
"SJxjSu1cRm",
"HylT0D1qAm",
"rye8Ow1cAQ",
"S1lp283D6Q",
"B1e5MapjhQ",
"HkxK78ZK27",
"BJgfkfRUhQ",
"H1x7gjxLnm",
"BJg7ZjdQh7",
"HJgOAKO7nQ",
"SkgVZoJ7h7",
"SkgvhosGhQ",
"rJlpXbFb27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"comment"
],
"note_created": [
1544725019401,
1544344379029,
1544337391881,
1543269044829,
1543268786569,
1543268419232,
1543268308682,
1543268206355,
1542076085338,
1541295378059,
1541113377035,
1540968921877,
1540913898977,
1540750074649,
1540749776501,
1540713211522,
1540697007281,
1540620580898
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper808/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper808/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper808/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/Authors"
],
[
"ICLR.cc/2019/Conference/Paper808/AnonReviewer3"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper considers the problem of knowledge distillation from a few samples. The proposed solution is to align feature representations of the student network with the teacher by adding 1x1 convolutions to each student block, and learning only the parameters of those layers. As noted by Reviewers 1 and 2, the performance of the proposed method is rather poor in absolute terms, and the use case considered (distillation from a few samples) is not motivated well enough. Reviewers also note the method is quite simplistic and incremental.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"limited novelty, unclear motivation\"}",
"{\"title\": \"further response\", \"comment\": \"Thanks for the valuable comments and suggestions. Below we further respond to your concerns.\\n\\n### Hong et al focuses on convex optimization problem\\nWe agree that the CNN optimization problem is non-convex. However, our problem is not a standard CNN optimization problem. The loss function in eq(2) contains multiple disjoint blocks without a non-linear activation function in each block (only between two blocks), while all the other network parts are fixed.\\nEven considering the non-linear activation function after each block (Q), the loss is piece-wise linear. \\nWith a prox-linear surrogate, global convergence can be obtained by minimizing the prox-linear surrogate, as proposed by [1]. \\n[1] Xu, Yangyang, and Wotao Yin. \\\"A globally convergent algorithm for nonconvex optimization based on block coordinate update.\\\" Journal of Scientific Computing 72, no. 2 (2017): 700-734.\\n\\n### BCD requires few samples, why not setting parameter to 1?\\nSorry for the inaccurate descriptions. In our case, the blocks are multiple disjoint blocks; it is not like coordinate descent, in which variables are correlated with each other. Due to the disjoint property, we can use relatively fewer samples to estimate the parameters in each block. \\n\\n### performance on zero-student-net not comparable to teacher-net. \\nWe should emphasize that there are some new attempts which try to realize knowledge distillation with few samples, though none of them shows good performance, including [2] and [3] on MNIST, [4] on CIFAR-10 and MNIST. \\nWe should emphasize again that we use only quite few samples without data augmentation for this study, which still achieves much better accuracy than SGD and FitNet. With data augmentation, the accuracy can be improved by about 5%. This is acceptable considering the few original samples used. 
\\nFurthermore, we emphasize the great benefits of the proposed framework on student-nets that are extremely decomposed/pruned from the teacher-net. We specifically emphasize the usage of this setting in the second-to-last paragraph. \\n\\n[2] Akisato Kimura, Zoubin Ghahramani, Koh Takeuchi, et al. Few-shot learning of neural networks from scratch by pseudo example optimization. (big gap to MNIST SOTA performance with few samples). \\n[3] Raphael Gontijo Lopes, Stefano Fenu, Thad Starner, et al. Data-free knowledge distillation for deep neural networks. arXiv preprint arXiv:1710.07535, 2017.\\n[4] Dataset distillation, ICLR 2019 submission.\"}",
"{\"title\": \"Rebuttal read. Concerns remain.\", \"comment\": \"- Hong et al [1] focuses on BCD for convex optimization problems, which is very different from the proposed formulation. So I think its theoretical result has nothing to do with your method.\\n\\n- \\u201cThe BCD algorithm considers each block separately, thus there are much fewer parameters in each block, so that we could use much fewer samples for the block-level estimation.\\u201d\\n\\nThis explanation sounds weird to me. According to this logic, we can always set the number of parameters in each block to be 1. Then it will become even more sample efficient.\\n\\n- \\u201c83% on CIFAR-10 and about 47% on CIFAR-100\\u201d\\n\\nThose are really bad performances, given that the teacher network can achieve 93.38% and 72.08% for CIFAR-10 and CIFAR-100. Usually, a distillation network can achieve similar performance to the teacher network (see [1]). So I confirm my conclusion that the proposed technique is far from being usable.\\n\\nTherefore, I will keep my rating.\\n\\n[1] Distilling the Knowledge in a Neural Network. Geoffrey Hinton, Oriol Vinyals, Jeff Dean. NIPS 2015.\"}",
"{\"title\": \"Revision list for the updated version\", \"comment\": \"Thanks to the valuable suggestions and comments from reviewers and anonymous commenter, we carefully revise our paper accordingly. Here we list major revision points below.\\n1) Add reasons why knowledge distillation from few samples is important.\\n2) Revise abstract/conclusion to replace ImageNet description with general descriptions.\\n3) Add comparison to two related papers in related work\\n4) Revise Theorem-1 to include the definition of \\u201cabsorb\\u201d, and text after Corollary-1 why Q should be squared.\\n5) Revise text after Eq-2 to clearly present that our algorithm is based on block coordinate descent (BCD), and the advantages of BCD.\\n6) Fig2 add experimental results comparison between FSKD-BCD and FSKD-SGD.\\n7) Add Fig3b to show the BCD accuracy improvement along with block alignment.\\n8) Table1~4, add columns for parameters number and pruned.\\n9) Zero-net adds more descriptions and analysis.\\n10) Appendix-B: the BCD algorithm and one experiment to show the impact of iteration number.\\n11) Appendix-C: Fig5 and Fig6 for illustrations of how FSKD works on filter pruning and network decoupling.\\n12) Appendix-D: iterative pruning and FSKD to achieve extremely pruned network and one experiment (scheme-C) on VGG-16 on CIFAR-10.\\n13) Appendix-E: verification of the hypothesis pointwise convolution is more critical for performance than depthwise convolution.\\n14) Appendix-F: Filter visualization on zero-student-net before SGD, after SGD, and after SGD+FSKD.\"}",
"{\"title\": \"absorbing is defined in the revision\", \"comment\": \"Thanks for the question. We have added the definition in our revision.\\nPlease see our answer to Reviewer-1 for more explanation. \\nWe also visualize how FSKD works for the filter pruning/network slimming and network decoupling cases. \\nPlease check our Appendix-C for more details.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you for your review and suggestions, we provide our response as follows.\", \"q\": \"Confusing bold-face in the table.\\nThanks for pointing out this problem. The boldface was intended to highlight the best results obtained by FSKD, which may not be the best across all cases. We have removed the bold-face from the table in our revision.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your review and suggestions, we are happy to address your concerns.\", \"q\": \"Why should Q be squared?\\nA squared Q ensures model compression and keeps the block connectable to the next one (the output channel number matches the input channel number of the next block). We have mentioned this in our revision after Theorem-1.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"Thanks for the valuable comments and suggestions. We give detailed responses to each item below.\\n\\n1. Meaning of \\u201cabsorb\\u201d\\nWe add a 1x1 conv-layer Q \\\\in R^{n_o\\u2019 * n_o * 1 * 1} to the student-net after the conv-layer W \\\\in R^{n_o * n_i * k * k} and before the non-linear layer. \\u201cAbsorb\\u201d here means Q can be merged into W to obtain a new conv-layer W\\u2019 \\\\in R^{n_o\\u2019 * n_i * k * k}. If Q is squared (n_o\\u2019 = n_o), then W\\u2019 \\\\in R^{n_o * n_i * k * k} has the same size as W. Previously we put this information in appendix-A, and now we revise the description of Theorem-1 to include it. \\n \\n2&3: why not SGD, why use one-step block coordinate descent?\\nYes, what we use is in fact a one-step block coordinate descent (BCD) algorithm. We add a description of our BCD algorithm in appendix-B, and also include an experimental comparison between FSKD-SGD (total-loss optimization with SGD over all added 1x1 convs\\u2019 parameters together) and FSKD-BCD in the experiments on filter-pruning. The experiments show that FSKD-BCD clearly outperforms FSKD-SGD in all cases. The advantages of the BCD algorithm are also listed in the revision. One major reason is that BCD handles few parameters at a time (one block), which can be solved with limited samples, while SGD always takes all added 1x1 convs\\u2019 parameters into consideration and thus requires more data in the optimization.\\nOur experiments do not show a benefit from more iterations of BCD.\\nThis may be due to the fact that the added 1x1 conv-layer is before the non-linear activation, so that one-step linear estimation is accurate enough to obtain exact minimization. Hong et al [1] show BCD can reach sublinear convergence when each block is exactly minimized, which is consistent with our experiments. We also add Fig3b to illustrate the accuracy improvement along with sequential block alignment.\\n \\n[1] Hong, Mingyi, et al. 
\\\"Iteration complexity analysis of block coordinate descent methods.\\\" Mathematical Programming 163.1-2 (2017): 85-114.\\n\\n4. Confusion about SGD+FSKD and FitNet+FSKD\\nWe denote by SGD the optimization without using teacher-net info. For the zero-net experiment, SGD+FSKD first uses SGD to initialize the student-net on few samples without using teacher-net info, then uses FSKD to further improve the performance with teacher-net info. FitNet+FSKD, in contrast, first uses FitNet to initialize the student-net on few samples with teacher-net guidance, then uses FSKD to further improve the performance with teacher-net info from our FSKD perspective. We clarify this in our revision.\\n\\n5. Why is FSKD sample efficient?\\nAs is known, fewer parameters tend to require fewer samples for estimation. The BCD algorithm considers each block separately, so there are far fewer parameters in each block, and we can use far fewer samples for the block-level estimation. Our experiments also verify this point when comparing FSKD-BCD to FSKD-SGD in Figure-2, especially when samples < 100. \\n\\n6. Not good on redesigned student-net?\\nYes, in Figure-4, the student-net accuracy is about 83% on CIFAR-10 and about 47% on CIFAR-100. We should emphasize that this result is obtained with a very limited number of training samples and without data augmentation. If data augmentation is enabled, about 5% accuracy improvement can be achieved with FSKD. We designed these experiments to demonstrate the effectiveness of FSKD over FitNet and SGD in the few-sample setting, and we do not compare this result with full-data training.\\n\\n7. Only mention results on ImageNet in abstract and conclusion.\\nWe have revised our abstract and conclusion accordingly, even though the results on ImageNet sound more significant to us.\"}",
"{\"comment\": \"What does it mean to absorb a 1x1 conv layer into the previous conv layer?\\nI think 'absorb' is too ambiguous a word.\\nCopy weights to the previous conv layer?\\nStack the trained 1x1 conv on top of the previous layer but below the ReLU?\\n\\nThis needs a better explanation; a figure to visualize it would help.\", \"title\": \"absorb convolution?\"}",
"{\"title\": \"A new formulation of knowledge distillation\", \"review\": \"This paper proposes a framework for few-sample knowledge distillation of convolutional neural networks. The basic idea is to fit the output of the student network to that of the teacher network layer-wise. Such a regression problem is parameterized by a 1x1 point-wise convolution per layer (i.e. minimizing the fitting objective over the parameters of the 1x1 convolutions). The authors claim such an approach, called FSKD, is much more sample-efficient than previous works on knowledge distillation. The alignment procedure is also fast to finish, as the number of parameters is smaller than in previous works. The sample efficiency is confirmed in experiments on CIFAR-10, CIFAR-100 and ImageNet with various pruning techniques. In particular, FSKD outperforms FitNet and fine-tuning by non-trivial margins if only a small number of samples are provided (e.g. 100).\", \"here_are_some_comments\": \"1. What exactly does \\u201cabsorb\\u201d mean? Is it formally defined in the paper?\\n\\n2. \\u201cwe do not optimize this loss all together using SGD due to that too much hyper-parameters need tuning in SGD\\u201d. I don\\u2019t understand (1) why does SGD require \\u201ctoo much\\u201d hyper-parameter tuning and (2) if not SGD, what algorithm do you use? \\n\\n3. According to the illustration in 3.3, the algorithm looks like coordinate descent, optimizing L over one Q_j at a time with the rest fixed. However, the sentence \\u201cuntil we reach the last block in the student-net\\u201d means the algorithm only runs one iteration, which I suspect might not be sufficient to converge.\\n\\n4. It is also confusing to use the notation SGD+FSKD vs. FitNet+FSKD, as it seems SGD and FitNet are referring to the same type of terminology. However, SGD is an algorithm, while FitNet is an approach for neural network distillation. \\n\\n5. 
While I understand the training of student network with FSKD should be faster because the 1x1 convolution has fewer parameters to optimize, why is it also sample-efficient? \\n\\n6. I assume the Top-1 accuracies of teacher networks in Figure 4 are the same as table 2 and 3, i.e. 93.38% and 72.08% for CIFAR-10 and CIFAR-100 respectively. Then the student networks have much worse performance (~85% for CIFAR-10 and ~48% for CIFAR-100) than the teachers. So does it mean FSKD is not good for redesigned student networks?\\n\\n7. While most of the experiments are on CIFAR10 and CIFAR100, the abstract and conclusion only mention the results of ImageNet. Why?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Surprisingly good model distillation given few samples and non-iterative solution, but practical implications are unclear\", \"review\": \"Model distillation can be tricky and in my own experience can take a lot of samples (albeit unlabeled, so cheaper and more readily available), as well as time to train. This simple trick seems to be doing quite well at training students quickly with few samples. However, it departs from most student-teacher training that find its primary purpose by actually outperforming students trained from scratch (on the full dataset without time constraints). This trick does not outperform this baseline, so its emphasis is entirely on quick and cheap. However, it's unclear to me how often that is actually necessary and I don't think the paper makes a compelling case in this regard. I am borderline on this work and could probably be swayed either way.\", \"strengths\": [\"It's a very simple and fast technique. As I will cover in a later bullet point (under weaknesses), the paper does not make it clear why this type of model distillation is that useful (since it doesn't improve the student model over full fine-tuning, unlike most student-teacher work). However, the reason why I do see some potential for this paper is because there might be a use case in quickly being able to adapt a pretrained network. It is very common to start from a pretrained model and then attach a new loss and fine-tune. Under this paradigm, it is harder to make architectural adjustments, since you are starting from a finite set of pretrained models made available by other folks (or accept the cost of re-training one yourself). However, it is unclear how careful one needs to treat the pretrained model if more fine-tuning is going to occur. 
If for instance you could just remove layers, drop some channels, glue it all together, and then that model would still be reasonable as a pretrained model since the fine-tuning stage could tidy everything up, then this method would not be useful in this situation.\", \"The fact that least squares solvers can be used at each stage, without the need for a final end-to-end fine-tune is interesting.\", \"It is good that the paper demonstrates improvements coupled with three separate compression techniques (Li et al., Liu et al., Guo et al.).\", \"The paper is technically thorough.\", \"It's good that the method is evaluated on different styles of networks (VGG, ResNet, DenseNet).\"], \"weaknesses\": [\"Limited application because it only makes the distillation faster and cheaper. The primary goal of student-teacher training in literature is to outperform a student trained from scratch by the wisdom of the teacher. It ties into this notion that networks are grossly over-parameterized, but perhaps that is where the training magic comes from. Student-teacher training acknowledges this and tries to find a way to benefit from the over-parameterized training and still end up with a small model. I think the same motivation is used for work in low-rank decomposition and many other network compression methods. However, in Table 1 the \\\"full fine-tune\\\" model is actually the clear winner and presented almost as an upper bound here, so the only benefit this paper presents is quick and cheap model distillation, not better models. Because of this, I think this paper needs to spend more time making a case for why this is so important.\", \"Since this technique doesn't outperform full fine-tuning, the goal of this work is much more focused on pure model compression. This could put emphasis on reducing model size, RAM usage reduction, or FLOPS reduction. 
The paper focuses on the last one, which is an important one as it correlates fairly well with power (the biggest constraint in most on-device scenarios). However, it would be great if the paper gave a broader comparison with compression techniques that may have a slightly different focus, such as low-rank decomposition. Size and memory usage could be included as columns in tables like 1, along with a few of these methods.\", \"Does it work for aggressive compression? The paper presents mostly modest reductions (30-50%). I think even if accuracy takes a hit, it could still work to various degrees. From what I can see, the biggest reduction is in Table 4, but FSKD is used throughout this table, so there is no comparison for aggressive compression with other techniques.\", \"The method requires appropriate blocks to line up. If you completely re-design a network, it is not as straightforward as regular student-teacher training. Even the zero-student method requires the same number of channels at certain block ends and it is unclear from the experiments how robust this is. Actually, a bit more analysis into the zero student would be great. For instance, it's very interesting how you randomly initialize (let's say 3x3) kernels, and then the final kernels are actually just linear combinations of these - so, will they look random or will they look fairly good? What if this was done at the initial layer where we can visualize the filters, will they look smooth or not?\"], \"other_comments\": [\"A comparison with \\\"Deep Mutual Learning\\\" might be relevant (Zhang et al.). I think there are also some papers on gradually adjusting neural network architectures (by adding/dropping layers/channels) that are not addressed but seem relevant. I didn't read this recently, but perhaps \\\"Gradual DropIn of Layers to Train Very Deep Neural Networks\\\" could be relevant. 
There is at least one more like this that I've seen that I can't seem to find now.\", \"It could be more clear in the tables exactly what cited method is. For instance, in Table 1, does \\\"Fine-tuning\\\" (without FitNet/FSKD) correspond to the work of Li et al. (2016)? I think this should be made more clear, for instance by including a citation in the table for the correct row. Right now, at a glance, it would seem that these results are only comparing against prior work when it compares to FitNet, but as I read further, I understood that's not the case.\", \"The paper could use a visual aid for explaining pruning/slimming/decoupling.\"], \"minor_comments\": [\"page 4, \\\"due to that too much hyper-parameters\\\"\", \"page 4, \\\"each of the M term\\\" -> \\\"terms\\\"\", \"page 6, methods like FitNet provides\\\" -> \\\"provide\\\"\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Three detailed cases of how Q is defined\", \"comment\": \"Thanks for your comments!\\n\\nFirst we clarify the initialization problem. In section 3.3, we conduct two sets of experiments. The first set obtains the student-net by compressing the teacher-net, including filter pruning, network slimming, and network decoupling. The second set fully redesigns the student-net with a different structure from the teacher-net and randomly initializes the parameters (i.e., the zero net in the paper). \\nFor the first set of experiments, the student-net already has an initialization from the original teacher-net's weights. \\nFor the second set of experiments, we start the student-net with random weights, then use SGD to initialize the student-net using few samples before adopting our FSKD. \\n\\nSecond, Q is required to be squared in condition-4 (c4) for two reasons. \\n(1) Q must be squared to make the current layer and the next layer connectable. \\n(2) If Q is not squared, it will decrease the compression effect after absorbing Q into the previous layer. \\nLet's give an example to explain that. \\nSuppose the current conv-layer is 64*64*k^2 (64 channels in and 64 out, k^2 is the spatial kernel size), and the next layer is 64*128*k^2 (64 channels in and 128 out). \\nIf we set Q to be 64*128*1^2, after absorbing, the current layer will be 64*128*k^2, which can't connect to the next layer (with size 64*128*k^2).\\nBesides, it also increases the parameter number and computing cost quite a lot for the current layer. \\n\\nThird, we list the 3 cases of how Q is defined. \\n(1) For the fully redesigned student network (zero net), we ensure that the corresponding block-level output channels are matched between teacher and student. If the block output channel number is n, then Q is a matrix of size n*n. \\n(2) For the network decoupling case, the regular convolution block and the depthwise separable block have the same number of output channels, so it is also straightforward to define the size of Q. 
\\n(3) For the pruning and slimming cases, there are two different sub-cases. \\n3a) When there are multiple layers within an alignment block, the student-net may either have fewer layers or smaller intermediate channels in the block compared to the teacher-net, while both teacher and student still have the same number of output channels for the corresponding block. For instance, if the teacher-net has a block with 2 conv-layers (64*64*k^2 followed by 64*128*k^2), the student-net may just have fewer conv-layers (1 here) in the block, as (64*128*k^2). Or the student-net may have a smaller number of internal channels in the block, as (64*32*k^2, 32*128*k^2), where 32 is the number of output channels for the first layer and the number of input channels for the second layer. For both examples, Q should be 128*128, so that it can be absorbed into the previous conv-layer. \\n\\n3b) When we do per-layer alignment, suppose the layer of the teacher-net is 64*128*k^2. After pruning, we have the corresponding layer in the student-net as 64*64*k^2. The #channels between teacher and student are different here. We split the 128 channels of the teacher-net into two parts: 64 pruned channels and 64 unpruned channels. We make a linear regression between the student-net and the unpruned 64 channels of the teacher-net, so we define the matrix Q with size 64*64. For the first layer, the alignment matrix Q is an identity matrix since the unpruned part of the teacher-net is copied to the student-net. However, when moving to the next layer, for the teacher-net, the input information comes from all 128 channels of the previous conv-layer. But for the student-net, the input information comes from only the 64 channels of the previous conv-layer. There is obvious information loss, so we estimate Q to alleviate this loss. \\n \\nWe will clarify this in our revision with text explanations and figure illustrations.\"}",
"{\"title\": \"Clarification on non-square Q\", \"comment\": \"I'm a bit confused by the fact that in sec 3.3, Q is said to be square (satisfy c4). Why is this always satisfied here, because I thought the whole point was that the student might have fewer channels than the teacher. I guess there are a couple of different situations to consider (such as reducing channels, or reducing layers). In the former, the way I understood the training was as follows: We start after the first layer (let's say the teacher output has 128 channels and the student 64). We construct a Q with shape (64, 128, 1, 1) and assume the first layer in the student is the same as the teacher's. We solve it and absorb Q into this weight for the student. Note, at this point if the student also had 128 channels output, there would be no need to solve for a Q. Next, we look at the activations of the teacher after the second layer (let's say it's still 128). This is where we run into the first issue I'm not sure how to address, since the weights of the teacher layer are no longer compatible with the student, so we cannot use them anymore. We would have to first absorb the inverse of the previous Q into this weight, to get us back to 128 channels going into that layer. I guess that's when you don't copy weights from the teacher and instead initialize randomly (zero student).\\n\\nAnyway, a bit more clarity on when you can re-use original teacher weights and when you have to randomly initialize - as well as when Q is square and when Q is non-square. Thanks!\"}",
"{\"title\": \"Clarification on Q\", \"comment\": \"Thanks for your comment!\\n\\nWe use Q to denote the parameters of the added 1x1 conv-layer, the size is nxmx1x1, where n is the input channel number, m is the output channel number, 1x1 is the kernel size. The 4D tensor is then degraded to a matrix with size nxm. Q acts as a linear combination of the input and output channels. For more information about the 1x1 convolution, please refer to [1]. \\nTo be more specific, suppose Q_{ij} is the element of the matrix Q at i-th row and j-th column, it reflects the combination coefficient between input channel i and output channel j. This is how we represent Q as a matrix.\\n\\n[1] Min Lin, Qiang Chen, and Shuicheng Yan. \\\"Network in network.\\\" arXiv preprint arXiv:1312.4400 (2013).\"}",
"{\"title\": \"Clarification on the optimization process\", \"comment\": \"Thanks for your comment!\\n\\nYes, our problem can be optimized using SGD with the loss function defined in Eq(2). However, we did not use it to report results in the paper. Instead, we estimate the 1x1 conv-layer parameter Q by solving the least squares problem layer by layer sequentially. \\n\\nTo be more specific, given randomly selected few samples, we forward the data to the alignment point of the first block in both the student network and the teacher network, and obtain the feature map responses at this point. Suppose the teacher network response is X^t, and the student-net response is X^s; we obtain Q from X^s and X^t with Eq(1). Then based on our Theorem-1, we absorb the 1x1 conv defined by Q into the previous conv-layer. After that, we move to the alignment point of the next block, and repeat this procedure until we reach the final alignment point. This simple solution works well since the alignment point is before the non-linear activation function. Figure-3 shows the block-level correlation before and after alignment between teacher and student, which also demonstrates the effectiveness of this linear approximation. \\n\\nWe use this procedure instead of SGD-based optimization for the following reasons. \\n\\n(1) We in fact implemented the SGD-based solution in the filter-pruning and slimming experiments, but we did not find a noticeable difference in results between the two solutions. \\n\\n(2) SGD requires tuning several hyper-parameters, while our simple solution is hyper-parameter free. We find it relatively difficult to tune the SGD-based solution in the network decoupling case due to the multi-branch network structure. 
There is also no advantage in time budget over the proposed simple solution.\\n\\n(3) Our experiments demonstrate that the proposed simple solution works well for these cases, and we also have figures which illustrate the steady accuracy improvement during block-by-block alignment. We will include these in the revision. \\n\\nWe will also make our source code available in the near future.\"}",
"{\"title\": \"A practical method\", \"review\": \"In this paper, an efficient re-training algorithm for neural networks is proposed. The essence is like Hinton's distillation, but in addition to using the output of the last layer, the outputs of intermediate layers are also used. The core idea is to add 1x1 convolutions to the end of each layer and train them while fixing the other parameters. Since the number of parameters to train is small, it performs well with a small number of samples, such as 500.\\n\\nThe proposed method, named FSKD, is simple yet achieves good performance. Also, it performs well with a few samples, which is desirable in terms of time complexity. \\n\\nThe downside of this paper is that there is no clear explanation of why the FSKD method works well. For me, adding a 1x1 convolution after the original convolution and fitting the kernel of the 1x1 conv instead of the original kernel looks like a kind of reparametrization trick. Of course, learning a 1x1 conv is easier than learning the original conv because of its few parameters. However, it also restricts the representation power, so we cannot say which one is always better. Do you have any hypothesis for why the 1x1 conv works so well?\", \"minor\": \"The operator * in (1) is undefined.\\n\\nWhat does the boldface in the tables of the experiments mean? I was confused because, in Table 1, the accuracy achieved by FSKD is in bold but is not the highest one.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"I don't understand \\\"Q is degraded to the matrix form\\\". Could you tell us what the specific operation is here? Are there any references?\\n\\nThanks!\", \"title\": \"About \\\"Q is degraded to the matrix form\\\"\"}",
"{\"comment\": \"I'm trying to reproduce your results in Sec. 4, but I have a question about the optimization of the loss in Sec. 3.3, Algorithm 1:\\n1. You do not use SGD to optimize the loss, but Algorithm 1 instead. How does this work well? I can't understand this part. Could you describe it in more detail?\\n\\n\\n\\nThanks!\", \"title\": \"Question about the optimization of the loss\"}",
]
} |
|
SyVU6s05K7 | Deep Frank-Wolfe For Neural Network Optimization | [
"Leonard Berrada",
"Andrew Zisserman",
"M. Pawan Kumar"
] | Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms. The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm or its adaptive variants. However, SGD requires a hand-designed schedule for the learning rate. In addition, its adaptive variants tend to produce solutions that generalize less well on unseen data than SGD with a hand-designed schedule. We present an optimization method that offers empirically the best of both worlds: our algorithm yields good generalization performance while requiring only one hyper-parameter. Our approach is based on a composite proximal framework, which exploits the compositional nature of deep neural networks and can leverage powerful convex optimization algorithms by design. Specifically, we employ the Frank-Wolfe (FW) algorithm for SVM, which computes an optimal step-size in closed-form at each time-step. We further show that the descent direction is given by a simple backward pass in the network, yielding the same computational cost per iteration as SGD. We present experiments on the CIFAR and SNLI data sets, where we demonstrate the significant superiority of our method over Adam, Adagrad, as well as the recently proposed BPGrad and AMSGrad. Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while often converging faster. The code is publicly available at https://github.com/oval-group/dfw. | [
"optimization",
"conditional gradient",
"Frank-Wolfe",
"SVM"
] | https://openreview.net/pdf?id=SyVU6s05K7 | https://openreview.net/forum?id=SyVU6s05K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eK3RXJxE",
"Bkg1zXr01N",
"S1ekiBNT0m",
"Hkg1XJI3AX",
"SJlpdCRsCX",
"SkeVgPnFRX",
"SJebCbtF67",
"SygOKWFYTm",
"Syei-gKKTm",
"r1g2QHW0h7",
"HklQyb9637",
"BJlCnxODhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544662704691,
1544602375474,
1543484823202,
1543425814523,
1543396981313,
1543255787700,
1542193609392,
1542193535742,
1542193154569,
1541440804013,
1541411034665,
1541009589656
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper807/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper807/Authors"
],
[
"ICLR.cc/2019/Conference/Paper807/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper807/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper807/Authors"
],
[
"ICLR.cc/2019/Conference/Paper807/Authors"
],
[
"ICLR.cc/2019/Conference/Paper807/Authors"
],
[
"ICLR.cc/2019/Conference/Paper807/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper807/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper807/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper was judged by the reviewers as providing interesting ideas, well-written and potentially having impact on future research on NN optimization. The authors are asked to make sure they addressed reviewers comments clearly in the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Metareview\"}",
"{\"title\": \"Results with data augmentation\", \"comment\": \"We have performed additional experiments on the CIFAR data sets using data augmentation. We summarise here our findings, and we will provide more details in future versions of the paper.\\n\\nIn order to account for the additional variance introduced by the data augmentation, we allow the batch size to be chosen as 1x, 2x or 4x, where x is the original value of batch-size. Because of the heavy computational cost of the cross-validation (we tune the batch-size, regularization and initial learning rate), we provide results for SGD, DFW and the best performing adaptive gradient method, which is AMSGrad. For SGD the hyper-parameters are kept the same as in (Zagoruyko and Komodakis, 2016) and (Huang et al, 2017), and in particular SGD benefits from hand-designed learning rate schedules. We refer to the Wide Residual Network architecture as WRN, and the DenseNet architecture as DN (details are available in the paper).\", \"we_obtain_the_following_results\": [\"WRN CIFAR-10 : AMSGrad 90.06, DFW 94.52, SGD 95.40\", \"DN CIFAR-10 : AMSGrad 91.78, DFW 94.73, SGD 95.26\", \"WRN CIFAR-100: AMSGrad 67.75, DFW 76.12, SGD 77.78\", \"DN CIFAR-100 : AMSGrad 69.58, DFW 73.85, SGD 76.26\", \"Comparing these results to Tables 1 and 2, it can be observed that all methods benefit from data augmentation, though with varying increases of performance. DFW systematically and significantly outperforms AMSGrad. In particular, it does so by more than 8% in the WRN-100 case.\"]}",
"{\"title\": \"Our DenseNet architecture has 40 layers and not 250\", \"comment\": \"The architecture pointed out in the comment above uses 250 layers. In our experiments, and as specified in section 5.1 of our paper, we use a model with 40 layers. This explains the difference in performance.\", \"as_we_have_already_stated\": [\"Since all of the experiments use the same network and same training, the comparison proposed in this work is valid and fair.\", \"We have verified that our implementation can reproduce the results reported in (Zagoruyko and Komodakis, 2016) when data augmentation is used, and will provide results using data augmentation once these are available.\"]}",
"{\"comment\": \"In Table 2 of Huang et al 2017 (https://arxiv.org/abs/1608.06993 ), they reported the accuracies of 94.81 and 80.36 without data augmentation for CIFAR 10 and CIFAR 100, respectively. These are still significantly better than your reported baselines 92.02 and 70.33, especially on CIFAR 100.\\n\\nFurthermore, note that their reported results are only for DenseNet (k=24). For k=40, the results should be even better (very likely to be around 95.XX and 81.XX respectively).\", \"title\": \"Your baseline is still terrible without data augmentation.\"}",
"{\"title\": \"The difference is due to data augmentation\", \"comment\": \"The comment above has pointed out a discrepancy between our results and those from (Zagoruyko and Komodakis, 2016). This is due to the fact that in contrast to (Zagoruyko and Komodakis, 2016), we do not use data augmentation in our CIFAR experiments. Since none of the baselines nor DFW makes use of data augmentation in our experiments, the comparison proposed in this work is valid and fair.\\n\\nIn its current version, the description of our experiments on the CIFAR datasets mistakenly indicates that we use data augmentation, which is not the case. We will correct this in future versions.\\n\\nAs a sanity check, we have verified that our implementation can reproduce the results reported in (Zagoruyko and Komodakis, 2016) when training the model with SGD and with data augmentation.\\n\\nWe will provide results using data augmentation once these are available.\"}",
"{\"comment\": \"You applied to the architecture WRN-40-4 to CIFAR10 and CIFAR 100.\\n\\nAs can be seen in Tables 1 and 2, SGD only achieves 90.08 and 66.78 on CIFAR 10 and 100, respectively.\\n\\nIn the original WRN paper (Zagoruyko and Komodakis, 2016 https://arxiv.org/abs/1605.07146 ), the reported results are 95.03 and 77.21 in Table 4. These results are reproducible:\", \"https\": \"//github.com/szagoruyko/wide-residual-networks\\n\\nSimilar things also happened to DenseNet. Huang et al 2017 (https://arxiv.org/abs/1608.06993 ) reported 96.54 and 82.82 in Table 2, but yours are 92.02 and 70.33.\\n\\nCompared with the results in Zagoruyko and Komodakis, 2016, the proposed deep FW algorithm is significantly worse. This is a huge difference!\\n\\nWRN and DenseNets are two of the most popular architectures, and their good baseline performance is well-known!\", \"title\": \"Why are your baselines so terrible?\"}",
"{\"title\": \"Thanks for the comments.\", \"comment\": \"We thank the reviewer for their comments and suggestions. We answer below:\\n\\n1. As the reviewer accurately points out, we choose to always employ the hinge loss for DFW in this paper because it gives an optimal step-size. In the new version of the paper, we have included additional baselines on the SNLI data set. This provides more empirical comparisons between the performance of CE and SVM for different optimizers.\\n\\n2. In appendix B.2 of the paper, we have added the convergence plot for all methods on the CIFAR data sets. \\n\\nIn some cases the training performance can show some oscillations. We emphasize that this is the result of cross-validating the initial learning rate based on the validation set performance: sometimes a better behaved convergence would be obtained on the training set with a lower learning rate. However this lower learning rate is not selected because it does not provide the best validation performance (this is consistent with our discussion on the step size in section 6).\"}",
"{\"title\": \"Thanks for the comments. Clarification.\", \"comment\": \"We thank the reviewer for their comments. We provide answers below:\\n\\n* \\u201cThe DFW linearizes the loss function into a smooth one, and also adopts Nesterov momentum to accelerate the training.\\u201d\", \"we_would_like_to_clarify_this_statement\": [\"one of the key ideas of the DFW algorithm is not to linearize the loss function $\\\\mathcal{L}$, but only the model $f$.\", \"\\u201cBoth techniques have been widely used in the literature for similar settings\\u201d.\", \"We wish to clarify the main technical contributions of this paper, since the SVM smoothing and the application of Nesterov acceleration are not the main novelty of this work. We discuss the summary of contributions (available at the end of section 1 of the paper) in the context of technical novelty.\", \"Employing a composite framework allows us to use an efficient primal-dual algorithm. As stated by Reviewer 1, this is novel in the context of deep neural networks: \\u201cTo my knowledge, the submission is the first sound attempt to adapt this type of Dual-based algorithm for optimization of Deep Neural Network [..]\\u201d.\", \"Crucially, our approach yields an update at the same computational cost per iteration as SGD and with the same level of parallelization. In contrast, in the closest approach to ours, the algorithm of Singh & Shawe-Taylor (2018) can only process a single sample at a time. This results in an approach whose runtime is virtually multiplied by the batch-size (it would be slower by two orders of magnitude in typical classification settings, including for the experiments of this paper).\", \"We do not mean to claim that the application of Nesterov acceleration is a technical novelty in itself. 
However, its use is subtle in our case (see appendix A.7) and it is empirically crucial for good performance, hence its mention in the paper.\", \"To the best of our knowledge, the hyper-parameter free smoothing approach that we propose in this work is novel (but is not the main contribution).\", \"We have adapted the abstract and summary of contributions to focus on the main novelty, which is an optimization algorithm for deep neural networks with an optimal step-size at the same computational cost per iteration as SGD.\", \"If the reviewer remains concerned by a lack of novelty, we would be grateful if he/she could provide specific references so that we can compare them in detail with the DFW algorithm.\"]}",
"{\"title\": \"Thanks for the detailed review and the suggestions.\", \"comment\": \"We thank the reviewer for their detailed review and for their suggestions. We answer point by point:\\n\\n*FW vs BCFW*\\nThe (primal) proximal problem is created for a mini-batch of samples, and not for the entire data set (details in section 3.2). In other words, the primal problem consists of the proximal term which encourages proximity to the current iterate, the linearized regularization, and the average over the mini-batch of the losses applied to the linearized model. As a result, we can compute the Frank-Wolfe update for all dual coordinates simultaneously, and we do not need to operate in a block-coordinate fashion. We have included this clarification in the new version of the paper.\\n\\n*Batch-Size*\\nWe thank the reviewer for this suggestion. We have adapted the description of Algorithm 1 accordingly.\\n\\n*Convex-Conjugate Loss*\\nIn order to compare the DFW algorithm to the strongest possible baselines, we choose the baselines to use the CE loss in the CIFAR experiments. Indeed we have generally found CE to help the baselines in this setting. In addition, the hand-designed learning rate schedule of SGD and the l2 regularization were originally tuned for CE. \\nIn the case of the SNLI data set, we allow the baseline to use either CE or SVM because using the hinge loss can increase their performance.\\nFinally, we choose to always employ the multi-class hinge loss for DFW because it gives an optimal step-size in closed form for the dual, which is a key strength of the formulation.\\n\\n*BCFW vs BCD*\\nWe thank the reviewer for this recommendation. It would be interesting indeed to explore how to exploit such updates in the context of the composite minimization framework for deep neural networks. 
In our case, we emphasize that for speed reasons, it is crucial to process the samples within a mini-batch in parallel, and this does not look straightforward with the algorithm in [3, E.3]. Therefore we believe that for this setting the FW algorithm permits faster updates thanks to an easy parallelization over the mini-batch on GPU.\\n\\n\\n*Hyper-parameter*\\nCounting a single hyper-parameter for SGD implicitly assumes that SGD can employ a constant step-size. Using such a constant step-size for SGD would incur a significant loss of performance (e.g. at least a few percents on the CIFAR data set). Therefore in order to obtain good performance, SGD requires a manual schedule of the learning rate, which involves many hyper-parameters to tune in practice.\"}",
"{\"title\": \"Interesting approach with room for improvement\", \"review\": \"Dual Block-Coordinate Frank-Wolfe (Dual-BCFW) has been widely used in the literature of non-smooth and strongly-convex stochastic optimization problems, such as (structural) Support Vector Machine. To my knowledge, the submission is the first sound attempt to adapt this type of Dual-based algorithm for optimization of Deep Neural Network, which employs a proximal-point method that linearizes not the whole loss function but only the DNN (up to the logits) to form a convex subproblem and then deal with the loss part in the dual.\\n\\nThe attempt is not perfect (actually with a couple of issues detailed below), but the proposed approach is inspiring and I personally would love it published to encourage more development along this thread. The following points out a couple of items that could probably help further improve the paper.\\n\\n*FW vs BCFW*\\n\\nThe algorithm employed in the paper is actually not Frank-Wolfe (FW) but Block-Coordinate Frank-Wolfe (BCFW), as it minimizes w.r.t. a block of dual variables belonging to the mini-batch of samples.\\n\\n*Batch Size*\\n\\nThough the algorithm can be easily extended to the mini-batch case, the author should discuss more how the batch size is interpreted in this case (i.e. minimizing w.r.t. a larger block of dual variables belonging to the batch of samples) and the algorithmic block (Algorithm 1) should be presented in a way reflecting the batch size since this is the way people use an algorithm in practice (to improve the utilization rate of a GPU).\\n\\n*Convex-Conjugate Loss*\\n\\nThe Dual FW algorithm does not need to be used along with the hinge loss (SVM loss). All convex loss functions can derive a dual formulation based on their convex-conjugate. See [1,2] for examples. 
It would be more insightful to compare SGD vs dual-BCFW when both of them are optimizing the same loss functions (either hinge loss or cross-entropy loss) in the experimental comparison.\\n\\n[1] Shalev-Shwartz, Shai, and Tong Zhang. \\\"Stochastic dual coordinate ascent methods for regularized loss minimization.\\\" JMLR (2013)\\n[2] Tomioka, Ryota, Taiji Suzuki, and Masashi Sugiyama. \\\"Super-linear convergence of dual augmented Lagrangian algorithm for sparsity regularized estimation.\\\" JMLR (2011).\\n\\n*BCFW vs BCD*\\n\\nActually, (Lacoste-Julien, S. et al., 2013) proposes Dual-BCFW to optimize structural SVM because the problem contains exponentially many number of dual variables. For typical multiclass hinge loss problem the Dual Block-Coordinate Descent that minimizes w.r.t. all dual variables of a sample in a closed-form update converges faster without extra computational cost. See the details in, for example, [3, appendix for the multiclass hinge loss case].\\n\\n[3] Fan, Rong-En, et al. \\\"LIBLINEAR: A library for large linear classification.\\\" JMLR (2008).\\n\\n*Hyper-Parameter*\\n\\nThe proposed dual-BCFW still contains a hyperparameter (eta) due to the need to introduce a convex subproblem, which makes its number of hyperparameters still the same to SGD.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The proposed DFW lacks of sufficient novelty and the presented performance improvement needs more theoretical justification.\", \"review\": \"This paper proposes a Frank-Wolfe based method, called DFW, for training Deep Network. The DFW method linearizes the loss function into a smooth one, and also adopts Nesterov Momentum to accelerate the training. Both techniques have been widely used in the literature for similar settings. This paper mainly focuses on the algorithm part, but only empirically demonstrate the convergence results.\\n\\nAfter reading the authors\\u2019 feedback and the paper again, I think overall this is a good paper and should be of broader interest to the broader audience in machine learning community. \\n\\nIn Section 6.1, the authors mention the good generalization is due to large number of steps at a high learning rate. Can we possibly get any theoretical justification on this? \\n\\nThis paper uses multi class hinge loss as an example for illustration. Can this approach be applied for structure prediction, for example, various ranking loss?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good Paper\", \"review\": \"This paper introduced a proximal approach to optimize neural networks by linearizing the network output instead of the loss function. They demonstrate their algorithm on multi-class hinge loss, where they can show that the optimal step size can be computed in closed form without significant additional cost. Their experimental results showed competitive performance to SGD/Adam on the same network architectures.\\n\\n1. Figure 1 is crucial to the algorithm design as it aims to prove that Loss-Preserving Linearization (LPL) preserves information on the loss function. While the authors provided numerical plots to compare it with the SGD linearization, I personally prefer to see some analytical comparison between SGD linearization and LPL even on the simplest case. An appendix with more numerical comparisons on other loss functions might also be insightful. \\n2. It seems LPL is mainly compared to SGD for convergence (e.g. Fig 2). In Table 2 I saw some optimizers end up with much lower test accuracy. Can the authors show the convergence plots of these methods (similar to Figure 2)?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJG8asRqKX | A Deep Learning Approach for Dynamic Survival Analysis with Competing Risks | [
"Changhee Lee",
"Mihaela van der Schaar"
] | Currently available survival analysis methods are limited in their ability to deal with complex, heterogeneous, and longitudinal data such as that available in primary care records, or in their ability to deal with multiple competing risks. This paper develops a novel deep learning architecture that flexibly incorporates the available longitudinal data comprising various repeated measurements (rather than only the last available measurements) in order to issue dynamically updated survival predictions for one or multiple competing risk(s). Unlike existing works in the survival analysis on the basis of longitudinal data, the proposed method learns the time-to-event distributions without specifying underlying stochastic assumptions of the longitudinal or the time-to-event processes. Thus, our method is able to learn associations between the longitudinal data and the various associated risks in a fully data-driven fashion. We demonstrate the power of our method by applying it to real-world longitudinal datasets and show a drastic improvement over state-of-the-art methods in discriminative performance. Furthermore, our analysis of the variable importance and dynamic survival predictions will yield a better understanding of the predicted risks which will result in more effective health care. | [
"dynamic survival analysis",
"survival analysis",
"longitudinal measurements",
"competing risks"
] | https://openreview.net/pdf?id=rJG8asRqKX | https://openreview.net/forum?id=rJG8asRqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HyerHQoEx4",
"BklnXbdiAQ",
"H1xTERssTm",
"Byl2p6ispm",
"HkeZh6jj6Q",
"rylMoI9i67",
"SkgmC8sqhQ",
"HylrU8I52m",
"S1xPXXp_hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545020221050,
1543368995570,
1542336052590,
1542335940459,
1542335912800,
1542330010438,
1541220042968,
1541199436532,
1541096222539
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper806/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper806/Authors"
],
[
"ICLR.cc/2019/Conference/Paper806/Authors"
],
[
"ICLR.cc/2019/Conference/Paper806/Authors"
],
[
"ICLR.cc/2019/Conference/Paper806/Authors"
],
[
"ICLR.cc/2019/Conference/Paper806/Authors"
],
[
"ICLR.cc/2019/Conference/Paper806/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper806/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper806/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"While there was disagreement on this paper, reviewers remained unconvinced about the scalability and novelty of the presented work. While it was universally agreed that many positive points exist in this paper, it is not yet ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview for dynamic survival analysis paper\"}",
"{\"title\": \"Re: Answers to the additional reviewer's response.\", \"comment\": \"We thank the reviewer for the feedback.\\n\\n2. We do acknowledge that the point process-based approaches can utilize the covariate information (i.e., history of measurements) for prediction. \\nWe also agree that the point process-based approaches can be applied to the \\\"first-hitting time\\\" analysis by limiting the maximum event count to one. However, since point processes predict the next event time given the history of previous events, they are a more natural choice for studying the occurrence of recurrent events rather than first-hitting time events, where neither other events nor the recurrent event can be observed once an event occurs.\"}",
"{\"title\": \"Re: Seems like a solid survival analysis work, but not a good fit for ICLR\", \"comment\": \"We thank the reviewer for the valuable comments. Please find the answers below:\\n\\n1.\\t\\nTo highlight the scalability of Dynamic-DeepHit to multiple events, we provided a new set of experiments in the revised paper that show how well our network handles competing events by illustrating the performance improvement over the benchmarks. We reported the results in Table 7 in Appendix F of the revised paper; they include the following:\\n\\ni) We further categorized the death causes of the CF dataset into three: 1) respiratory failure, 2) complications due to lung transplant, and 3) other causes. (Note that the first submission included only two causes, which were respiratory failure and other causes.) Then, we provided the discriminative performance for these three causes.\\n\\nii) We added comparisons with the cause-specific version of Dynamic-DeepHit, where the proposed network is trained in a cause-specific manner (in the same way as described for cs-Cox in Section 4.2) \\u2013 the network learns the distribution of the first hitting time for each cause by treating the other causes as a form of right-censoring.\\n\\niii) We compared a parametric version of Dynamic-DeepHit by replacing the current output layer to model the underlying survival process with the Exponential distribution. (This parametric version is motivated by Question 1 of Reviewer 3.) Owing to model specification, this parametric version greatly reduces the number of output nodes (to the number of parameters for defining the Exponential distribution for each cause) while having potential limitations due to model mis-specification. Please refer to Appendix B.3 for the detailed description.\\n\\nAs seen in the table, our network outperforms the benchmarks even when we further categorized the competing events of the CF dataset into three causes. 
When compared to the cause-specific versions and the parametric version, Dynamic-DeepHit achieved performance gain for most of the tested prediction and evaluation times. This implies that our network benefits from jointly learning the distribution of first hitting times of competing events (without any assumption about the underlying survival process) and scales well to multiple causes. Please refer to Appendix F in the revised paper for details.\\n\\n2.\\t\\nThe \\u201ccause-specific time-dependent concordance index (C-index)\\u201d in (6) is based on the idea of concordance [1] \\u2013 a patient who experiences an event earlier should have a higher risk than a patient who survived longer. In the longitudinal setting, we need to account for the time at which the risk prediction is issued (to capture which longitudinal measurements are used as inputs) and the time at which we evaluate the discriminative performance (to capture possible changes in the estimated CIFs over time). Thus, the C-index defined in (6) provides the discriminative performance that reflects these time dependencies.\\n\\n3.\\t\\nIn PBC dataset, we have two event labels: i) death from liver failure and ii) liver transplant. We considered receiving a liver transplant as a competing event of death from liver failure since it hinders the liver failure from being observed during the study. (This is common in clinical studies; a similar example can be found in [2].) In Table 2, we only provided the performance for the death from liver failure since our interest is to assess the risk of having liver failure, not to assess the probability of receiving a liver transplant.\", \"references\": \"[1] F. E. Harrell et al., \\u201cEvaluating the Yield of Medical Tests,\\u201d Journal of the American Medical Association, 1982.\\n[2] M. Noordzij et al., \\u201cWhen Do We Need Competing Risks Methods for Survival Analysis in Nephrology?,\\u201d Nephrol Dial Transplant, 2013.\"}",
"{\"title\": \"Re: Good Empirical Performance, Questionable Scalability (2)\", \"comment\": \"We thank the reviewer for the valuable comments. Please find the answers below:\\n\\n2.\\t\\nThe cause-specific cumulative incidence function in (1) implies the probability that a particular event k^{*} occurs on or before time \\\\tau^{*} given the history of longitudinal measurements \\\\mathcal{X}^{*}. Thus, from (1), we can assess the cause-specific risk of a patient as a function of time given the longitudinal measurements of this patient. Similarly, (2) implies the survival probability of a patient with longitudinal measurements \\\\mathcal{X}^{*} until \\\\tau^{*} (i.e., no event occurs on or before time \\\\tau^{*}). We clarified the explanation of (1) and (2) in the revised paper, accordingly.\\n\\n3.\\t\\n\\\\mathbf{y}_{j} is the output of the shared subnetwork at time stamp j which indicates the step-ahead estimate of time-varying covariates, i.e., \\\\mathbf{x}_{j+1}. These step-ahead predictions are used in the prediction loss (i.e. \\\\mathcal{L}_{3}) to regularize the shared subnetwork. We clarified the explanation on page 6.\\n\\n4.\\t\\nIn the paper, we used the term \\u201cdynamic\\u201d to differentiate the survival analysis on the basis of the longitudinal data from the static survival analysis which makes survival predictions based only on the current covariates. As shown in Figure 4, the CIF in (1) can be updated when new measurements are collected while incorporating the previous longitudinal measurements.\\n\\n5.\\t\\nWe thank the reviewer for suggesting the comparison with the related work in [2] (hereafter, RMTPP). Although both methods utilize an RNN structure to model the nonlinear dependency over the history of information, they address very different problems. 
\\nRMTPP is built upon the marked temporal counting process whose goal is to predict the time to the next event and the indicator (marker) of that event given the history of previous events and markers. More specifically, utilizing the RNN structure, RMTPP issues these predictions only at time stamps at which the event and marker information is available. Thus, this method is suitable for modeling recurrent events, not the first hitting event that needs to be conditioned on the covariates, not on the previous events. \\nOn the other hand, Dynamic-DeepHit is based on the first hitting time analysis (a.k.a. the time-to-event analysis) given the longitudinal measurements (not the previous event indicators). To account for the longitudinal measurements, our network utilizes an RNN structure as an encoder such that the cause-specific subnetworks and the output layer make outcomes based on the last hidden state of the RNN structure (overall, a sequence to time-to-event architecture). To the best of our knowledge, this paper is the first to investigate a deep learning approach for longitudinal time-to-event data on the basis of competing risks.\", \"reference\": \"[2] N. Du et al., \\u201cRecurrent marked temporal point processes: embedding event history to vector,\\u201d KDD, 2016.\"}",
"{\"title\": \"Re: Good Empirical Performance, Questionable Scalability (1)\", \"comment\": \"We thank the reviewer for the valuable comments. Please find the answers below:\\n\\n1.\\n We believe that the scalability issue is less likely to happen in the cause-specific subnetworks since their number of layers and nodes are chosen from hyper-parameter optimization -- the scalability issue due to the increased number of competing events should have been mitigated by selecting a smaller number of layers and nodes to avoid overfitting.\\nWe thank the reviewer for the suggestion of utilizing a different output layer to address the reviewer\\u2019s concern. However, we devised the current architecture for accomplishing the following two objectives. First, in the survival analysis under competing risks, our interest is to estimate the cause-specific CIF in (1) which shows the cumulative failure rates over time due to a particular cause [1]. This differs from the conventional classification or regression problem since the estimated CIF must satisfy two unique characteristics: i) the CIF is a function of both covariates and time-to-event that outputs a probability value in [0,1] and ii) it is a non-decreasing function of time-to-event. Second, we avoid using (semi-)parametric model that might suffer from mis-specification issues and fully exploit the representational capacity of neural networks by directly estimating the joint distribution of the first hitting time and the competing events to estimate the CIF as defined in (3). \\nTo highlight the aforementioned points and how well our model can scale to multiple causes, we provided three additional results in the revised paper: \\ni) results on the CF dataset by further categorizing the competing risks into three causes,\\nii) comparison with cause-specific versions of Dynamic-DeepHit, and \\niii) a parametric version of Dynamic-DeepHit by replacing the current output layer to model the underlying survival process. 
For detailed descriptions of the first two results, please refer to Answer 1 to Reviewer 2. \\nFor the parametric version of Dynamic-DeepHit, which was motivated by the reviewer\\u2019s comment, we modified the current output layer to model the underlying survival process with the Exponential distribution. Owing to model specification, this parametric version greatly reduces the number of output nodes (to the number of parameters for defining the Exponential distribution for each cause) while having potential limitations due to model mis-specification. Please refer to Appendix B.3 for the detailed description.\\nAs seen in Table 7 in Appendix F of the revised paper, our network outperforms the benchmarks even when we categorized the competing events of the CF dataset into three causes. When we further compared Dynamic-DeepHit with its cause-specific versions and its parametric version, our proposed network achieved performance gain for most of the tested prediction and evaluation times. This implies that our network benefits from jointly learning the distribution of first hitting times of competing events (without any assumption about the underlying survival process) and scales well to multiple causes. Please refer to Appendix F in the revised paper for details.\", \"reference\": \"[1] J. P. Fine and R. J. Gray, \\u201cA proportional hazards model for the subdistribution of a competing risk,\\u201d Journal of the American Statistical Association, 1999.\"}",
"{\"title\": \"Re: Novel modelling framework for well-motivated research problem\", \"comment\": \"We thank the reviewer for suggesting further investigation regarding missing data. Indeed, we can easily extend Dynamic-DeepHit to flexibly handle the missing measurements by modifying two parts of the proposed network. First, we utilize mask vectors, m_{j} for j = 1, \\\\cdots, M, that indicate which covariates are missing, as an auxiliary input of the network along with corresponding covariate vectors, x_{j} for j = 1, \\\\cdots, M. (This approach has been used to handle missing measurements in time-series data [1,2] although they focused on in-hospital setting where measurements are frequently observed.) Then, we backpropagate the prediction loss (i.e. the \\\\matchal{L}_{3}) that only corresponds to the step-ahead predictions, y_{j}, for the covariates, x_{j+1}, for j = 1, \\\\cdots, M-1, that are not missing. In Appendix H of the revised paper, we provided descriptions of how our network can be extended to flexibly handle the missing data and the performance comparison with and without these missing indicators.\\n\\nReferences\\n[1] J. Yoon et al., \\u201cDeep Sensing: Active Sensing using Multidirectional Recurrent Neural Networks,\\u201d ICLR 2018.\\n[2] Z. Che et al., \\u201cRecurrent Neural Networks for Multivariate Time Series with Missing Values,\\u201d arXiv preprint arXiv:1606.01865, 2016.\"}",
"{\"title\": \"Seems like a solid survival analysis work, but not a good fit for ICLR\", \"review\": \"Summary:\\nThe authors propose Dynamic-DeepHit, a survival analysis framework for modeling longitudinal data with multiple competing risks. As opposed to previous works, Dynamic-DeepHit can model survival events (e.g. death, cancer relapse) which can be driven by multiple, potentially competing, underlying risks. The proposed model uses an RNN shared across multiple risks for processing past-to-recent measurements, and multiple feedforward nets that accept the most recent measurements and the hidden layer of shared RNN. Joint predictions (across time and competing risks) are made using a softmax layer. The model is tested on two datasets where Dynamic-DeepHit outperforms other baselines.\", \"pros\": [\"Detailed explanation of survival analysis formulation.\", \"Experiments across multiple aspects: prediction performance, explaining the variable importance, visualizing the RNN hidden states\"], \"issues\": [\"As the selling point of this model is its ability to capture competing risks, it is not very convincing that the experiments were conducted with only two competing risks. Can Dynamic-DeepHit truly capture multiple competing risks?\", \"The prediction performance was measured by \\\"cause-specific time-dependent concordance index\\\", which is described by Eq.5. But Eq.5 alone does not intuitively explain what it is trying to measure.\", \"Mayo Clinic data also has two competing risks, but Table 2 only shows the prediction performance for one risk, with the justification \\\"liver transplant prediction is not in our interest\\\". For the thoroughness of the experiments, why not put the complete result?\", \"All other issues aside: I can see that the authors put considerable effort into this work. But the effort is mainly focused on survival analysis, rather than learning representations. 
The novelty of this work regarding learning representations seems limited to me, as opposed to the contribution on improving survival analysis & medical prediction. This work would be much better received if submitted to a more relevant venue.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novel modelling framework for well-motivated research problem\", \"review\": \"The authors present a novel deep learning representation for jointly modelling longitudinal measurements and dynamic time-to-event analysis where there are competing risks for a given event. The authors incorporate patient-level historical data using an RNN which allows updating of individual-level (i.e. personalized) risk predictions as additional data points are collected. This method (Dynamic-DeepHit) makes no assumptions about the underlying stochastic processes. The authors further evaluate the clinical utility of these methods in terms of interpretability of variable importance and dynamic risk predictions.\\nThe work is clearly structured and clearly articulates a well-motivated research problem. It is also extremely well-placed within the historical context of previous work done in survival modelling. The authors have carried out an extensive review of the literature showing the evolution as well as the strengths and weaknesses of these methods.\\nMy main concern with this manuscript is the handling of missing data. In the context of this study, the evaluation of missing data was inadequately investigated. This is an important problem within the context of what the authors are trying to achieve. Although it may be outside the scope of the current manuscript, different assumptions regarding missing data should be investigated. For example, if missing data were correlated with a particular outcome or a particular covariate, then replacing missing values with interpolation or with the mean and mode would lead to biased estimates.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good Empirical Performance, Questionable Scalability\", \"review\": \"The paper proposes a deep architecture that conducts survival analysis from longitudinal data where multiple competing risks are present. Experimental results demonstrate the effectiveness of the proposed method. Specific comments follow:\\n\\n1. A primary concern in the reviewer's opinion is the scalability of the architecture. While the reviewer appreciates the discussion of the scalability issue in terms of the output layer in the paper, the architecture might also not be scalable if the number of competing risks is large, because of the increase in the number of cause-specific subnetworks in the architecture. Overall, the reviewer finds the paper lacking a principled approach to dealing with competing risks and long time horizons. Since dealing with competing risks in survival analysis is the goal of the paper, the reviewer finds the method presented insufficient for acceptance. As a remedy, for example, for the output layer, can the authors consider the use of a neural net to model o_k at a particular time using time and f_{c_k}() as input?\", \"other_issues\": \"2. it will be nice to explain (1) and (2) a little after presenting the formula.\\n3. page 5, $\\\\mathbf{y}_j$ should also be explained because the next place where $\\\\mathbf{y}_j$ is present is (4), which is one page later.\\n4. the term \\\"dynamic survival analysis\\\" is also obscure. What exactly does \\\"dynamic\\\" mean? To the reviewer's understanding, compared to standard survival analysis, the architecture models directly from raw longitudinal data of repeated measurements, and hence is called \\\"dynamic\\\".\\n5. 
even the \\\"dynamic\\\" part of the dynamic survival analysis is not very novel; see, for example, \\\"Recurrent Marked Temporal Point Processes: Embedding Event History to Vector\\\"\\nand the follow-up works in the use of deep learning for point process modeling.\\n\\n===============After Reading Authors' Response ================\\nThe reviewer would like to thank the authors for their detailed response and careful revision of the paper to address the reviewer's concern. However, the reviewer is not persuaded by the authors' response. Specifically,\\n\\n1. the reviewer is not satisfied with the explanation and modification to address the scalability issue stemming from both the cause-specific subnetworks and the output layer. Simplifying the structure and parameterization of cause-specific subnetworks when many are present seems like a compromise rather than a principled approach to address the issue. The same is true for the exponentially distributed parameterization of the output layer.\\n\\n2. It is the reviewer's impression that for point process neural networks, it is possible to use the covariate information for prediction, as opposed to the claim given by the authors.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
]
} |
|
S1fUpoR5FQ | Quasi-hyperbolic momentum and Adam for deep learning | [
"Jerry Ma",
"Denis Yarats"
] | Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely simple alteration of momentum SGD, averaging a plain SGD step with a momentum step. We describe numerous connections to and identities with other algorithms, and we characterize the set of two-state optimization algorithms that QHM can recover. Finally, we propose a QH variant of Adam called QHAdam, and we empirically demonstrate that our algorithms lead to significantly improved training in a variety of settings, including a new state-of-the-art result on WMT16 EN-DE. We hope that these empirical results, combined with the conceptual and practical simplicity of QHM and QHAdam, will spur interest from both practitioners and researchers. Code is immediately available. | [
"sgd",
"momentum",
"nesterov",
"adam",
"qhm",
"qhadam",
"optimization"
] | https://openreview.net/pdf?id=S1fUpoR5FQ | https://openreview.net/forum?id=S1fUpoR5FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1eHkNjYSE",
"HJxMhhTL1V",
"BygWhwctA7",
"ryxZv8qKAm",
"ByeKaCR-07",
"rygjk-aiaX",
"H1lwIdcuTQ",
"BylCh-9OTm",
"Bket2ivu6Q",
"Hyxyo6TDT7",
"B1x95wpPaQ",
"ryg5hkUDTX",
"rJe2cWND67",
"rkxTldzvT7",
"HkebiJLLT7",
"SyeWFkLL67",
"rJlL-18UTX",
"H1eadSvGpX",
"SklIQuS-Tm",
"SkezCLH-6m",
"HklAFD5lpX",
"Bklzj4_9n7",
"BJxvbVG9h7"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review"
],
"note_created": [
1550590940577,
1544113321844,
1543247784659,
1543247448754,
1542741697390,
1542340835051,
1542133839436,
1542132150468,
1542122417083,
1542081943318,
1542080401801,
1542049714445,
1542042003852,
1542035445276,
1541984152603,
1541984121359,
1541983998257,
1541727605036,
1541654558105,
1541654217775,
1541609350094,
1541207193832,
1541182463307
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper805/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"ICLR.cc/2019/Conference/Paper805/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper805/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper805/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for your interest in our work!\", \"comment\": \"Thank you for your interest in our work!\\n\\nAs discussed here and elsewhere, these sorts of substitutions (although tempting) are incorrect; I encourage you to perform a closer reading of Appendix A and specifically to do the sum decomposition for your proposed substitution.\\n\\nHowever, at the end of the day seeing is believing. I also encourage you to empirically compare QHM and momentum with your substitution -- for example, you might try to recover QHM (nu=0.7, beta=0.999) with momentum using your proposed substitution. You will undoubtedly see that the optimizers behave differently even on a toy problem.\"}",
"{\"metareview\": \"This paper presents quasi-hyperbolic momentum, a generalization of Nesterov Accelerated Gradient. The method can be seen as adding an additional hyperparameter to NAG corresponding to the weighting of the direct gradient term in the update. The contribution is pretty simple, but the paper has good discussion of the relationships with other momentum methods, careful theoretical analysis, and fairly strong experimental results. All the reviewers believe this is a strong paper and should be accepted, and I concur.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"simple but useful extension of NAG, with good discussion of related work\"}",
"{\"title\": \"Thanks for the followup!\", \"comment\": \"Thanks for the follow-up, and we are glad that the reviewer enjoyed the paper!\"}",
"{\"title\": \"Thanks for revisiting the assessment! Response to remaining concerns.\", \"comment\": [\"We thank the reviewer for their generous revisiting of their assessment! Our latest update to the manuscript addresses the reviewer's remaining concerns as follows:\", \"We have explicitly stated in Section 5 of the main text that the stability properties of QHAdam discussed come from the tighter step size bound.\", \"We have briefly elaborated on the need for large beta_2 in Appendix F.\", \"We have moved the proof of Fact F.1 inline.\"]}",
"{\"title\": \"Thank you for your reply\", \"comment\": \"Thanks for your clarifications. I am retaining my rating; I maintain that this is a good paper and endorse it for publication.\"}",
"{\"title\": \"Thank you for your response. Review score updated.\", \"comment\": \"#1 This looks good!\\n\\n#2 I think that the new additions to the paper do a great job of distinguishing QHM and AggMo while exposing their similarities. I am not sure that I agree with the two works being entirely orthogonal, but I think that the revision is more than fair in its comparison of the two.\\n\\n#3 I understand what you are saying. While you should weigh your presentation against the opinions of the many, as a reviewer it is my job to give feedback from my position. I still believe that the main paper struggles with some of the issues I presented in my initial review. However, the appendix does seem easier to read and the additions to the AccSGD section are good. Though we are left in disagreement, I think overall that my issue is a minor point which has been addressed to some extent.\\n\\nI don't have any specific recommendations beyond what I said in my initial review. However, I respect that my bias may be in conflict with other feedback you have received.\\n\\n#4 & 5 In my initial review I did not have time to explore Appendix F. I must confess that I still have not been able to cover all of the details. However, I am still not completely convinced by some aspects of QHAdam -- in particular, some of the theoretical arguments in the appendix. Disproving the step size bound seems interesting, though I do not entirely understand the significance. It seems the key theoretical argument for QHAdam over Adam is the ability to recover a tighter step size bound. Perhaps this should be made clear in the main text (expanding on \\\"it is possible that setting v_2 < 1 can improve stability\\\"). Moreover, why is this method of reducing the step size more effective than simply reducing beta_2 in Adam? You claim that small beta_2 values can lead to slow convergence; how does reducing v_2 instead correct this?\\n\\nThank you for clarifying the empirical results. 
After taking a more careful look, I agree that QHAdam seems worthwhile to include. I am not familiar with NMT optimization, is the idea of a spiky gradient distribution well established? While I acknowledge QHAdam gives a significant win on this task, I am not yet convinced by the proposed explanation. However, I do not see this as a critical component of the paper.\\n\\nTo summarize, with your explanation here I am more convinced by the empirical results than on my first reading.\\n\\n#6 It is impossible to get a second first-impression, but I feel that in general the clarity has been improved. \\n\\n- Minor point: Why relegate Fact F1 proof to appendix G?\\n\\nThank you for addressing the points I raised. After reading your response, I am more convinced that the paper should be accepted and have thus increased my original score from 6 to 7.\"}",
"{\"title\": \"adaptive and AM methods\", \"comment\": \"We are aware of the observed poor generalization ability of Adam, and we note that quite a few manuscripts submitted to this conference seek to address this issue. This issue is out of scope for our manuscript, but we note that our results extend beyond the training dataset, as depicted by the figures.\\n\\nWe note that Figs 5abc and 5def use identical settings, as do 5ghi and 5jkl. For our rationale for not showing them side-by-side, please refer to our response to AnonReviewer2.\\n\\nWe (the authors) are not qualified to intelligently comment on or compare to AM methods, as we are only familiar in passing with the relevant modern literature. We suspect that you are in a much better position to speak to your question :)\"}",
"{\"comment\": \"Dear Authors:\\n Thank you for the illustration. I need to read Appendix F in detail before making comments. As for Figure 5, I just wonder why Adam disappears in some of the sub-figures, like Figure 5(a), (b) and (c). Did Adam outperform QHAdam in these figures, or were there some other reasons? I am just curious about it. One more recommendation is that the authors could also show the performance of QHAdam on the test data. The reason is that Adam works well on training data but may generalize poorly on the test set. See Figure 2 in [1]. \\n Let us explore a bit further. As you know, SGD is a dominant method for deep learning. However, recently, alternating minimization (AM) is also attracting researchers' interest because AM can avoid gradient explosion and provide convergence guarantees [2][3]. It is easy to implement AM in parallel, and it allows for non-differentiable activation functions like ReLU. AM includes the Alternating Direction Method of Multipliers (ADMM) and Block Coordinate Descent (BCD). What is your opinion on the comparison between SGD and AM? \\n Finally, thank you again for the patient explanation, and I hope this paper will be accepted at the ICLR conference.\\n Sincerely yours\\n[1] Zhang, Guoqiang, and W. Bastiaan Kleijn. \\\"Training Deep Neural Networks via Optimization Over Graphs.\\\" 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. https://arxiv.org/pdf/1702.03380.pdf\\n[2] Taylor, Gavin, et al. \\\"Training neural networks without gradients: A scalable admm approach.\\\" International Conference on Machine Learning. 2016.\\n[3] Global Convergence in Deep Learning with Variable Splitting via the Kurdyka-\\u0141ojasiewicz Property. https://arxiv.org/abs/1803.00225\", \"title\": \"About SGD and Alternating Minimization\"}",
"{\"title\": \"QHAdam\", \"comment\": \"Firstly, QHAdam is in Figure 5 -- specifically, (d), (e), (f), (j), (k), (l).\\n\\nThere is an intuitive advantage and a theoretically grounded advantage.\\n\\nThe intuitive advantage is that whatever benefits interpolation provides for non-adaptive methods (i.e. all the theoretical results for QHM) translate to adaptive methods. This is strictly intuitive for now -- we do not provide any theoretical demonstrations of accelerated QHAdam convergence, only empirical results.\\n\\nThe theoretically grounded advantage is stability. Adam's updates to the parameters can be much larger than can be dealt with during training. In fact, they can be much larger than previously believed -- in our manuscript, we disprove the step size bound claimed in [5] (the original Adam paper), which had been taken as fact in subsequent literature. QHAdam offers a way to mitigate this without simply cutting the learning rate and thus making training slower; this is discussed in much theoretical depth in Appendix F, and empirically validated primarily by the NMT case study.\\n\\n[5] Kingma & Ba, https://arxiv.org/abs/1412.6980\"}",
"{\"comment\": \"Dear Authors:\\n Thank you very much for providing useful learning materials. I really appreciate it. One question is about the comparison between QHAdam and Adam. You have conducted various experiments to illustrate the effectiveness of QHAdam. Some figures (e.g. Figure 1) show that QHAdam outperformed Adam, but Adam did not appear in other figures (e.g. Figure 5). Could you please explain the advantages of QHAdam over Adam? Thank you very much.\", \"title\": \"Thank you for materials, About the comparison between QHAdam and Adam.\"}",
"{\"title\": \"Critical point convergence for GD methods\", \"comment\": \"Demonstrations of critical point convergence for GD methods (in the general smooth+non-convex setting) are most likely absent from recent literature. We recommend various online course materials, such as [1] and [2].\\n\\nOf course, there are various restrictions one can impose for non-convex settings that will yield more interesting results (e.g. convergence to a local optimum with known rate) -- this is the focus of much recent literature! As a sampler, you might check out [3] and [4].\\n\\n[1] D. Papailiopoulos, http://papail.io/teaching/901/scribe_09.pdf\\n[2] C. Sa, http://www.cs.cornell.edu/courses/cs6787/2017fa/Lecture7.pdf\\n[3] Ge et al., https://arxiv.org/abs/1503.02101\\n[4] Lee et al., https://arxiv.org/abs/1602.04915\"}",
"{\"comment\": \"Thank you for the feedback. I appreciate it. I am interested in the global convergence of critical points. Could you then recommend some literature of critical point convergence of SGD in the nonconvex setting? I have gone through most SGD papers but did not find any literature related to this field. Thank you.\", \"title\": \"About Critical Point Convergence\"}",
"{\"title\": \"Thanks for the interest in our paper!\", \"comment\": \"Thanks for the interest in our paper!\\n\\nWe are not aware of any compelling convergence results for gradient descent and momentum (and other common algorithms) in a general non-convex setting \\u2014 the best one can do is critical point convergence.\\n\\nAs QHM is a simple interpolation between the two, QHM similarly does not have any compelling convergence results in a general non-convex setting.\"}",
"{\"comment\": \"Dear Authors:\\n Thank you for presenting interesting work on the optimization of deep learning problems. Could you please provide a convergence analysis of your proposed QHM in the nonconvex deep learning setting? This is because, for SGD-related methods such as Adam, convergence seems to be proved only in the convex case. Thank you very much.\\n Sincerely yours\", \"title\": \"About Convergence Analysis of QHM for Deep Learning Problems\"}",
"{\"title\": \"Review response -- thanks for the feedback! [Part 2 of 2]\", \"comment\": \"# 4 & 5\\n\\nWe acknowledge that formal convergence analysis is not provided for QHAdam. Nevertheless, we believe that the contradiction of the widely-accepted Adam step size bound from Kingma & Ba (2015) and QHAdam's theoretically grounded ability to tighten this bound is of substantial interest. We believe that we have indeed demonstrated the empirical usefulness of this with the NMT case study. Increasing from 60% to 100% robustness is a large improvement, and an increase of 0.3 BLEU from an optimizer change alone is viewed as fairly significant in the NMT community.\\n\\nWith regards to the EMNIST classification parameter sweeps, we seek to compare our algorithms with their own vanilla counterparts (i.e. QHAdam > Adam), without meticulously tuning the QHAdam and QHM curves to look comparable with one another. We note that there is a certain non-standard LR schedule for (QH)Adam which surpasses the results shown for QHM. However, for the purposes of this study, we believe it best to stick to the standard Adam LR. More generally, we lament the trend of comparing adaptive and non-adaptive methods side-by-side when the terms of comparison are questionable at best. Fair comparison of adaptive and non-adaptive methods is likely a suitable subject for an entirely new paper.\\n\\nFinally, we wish to make a broader point regarding the \\u201ccase study\\u201d experiments. Our primary goal in performing these case studies is to demonstrate practically realistic scenarios. Thus, we did not perform systematic sweeps to squeeze all possible performance out of the algorithms. Rather, we approached the case studies as we felt a practitioner would, relying on intuition to translate the vanilla optimizer to the QH optimizer. 
In that light, we believe that the case study results as a whole are compelling:\\n- We observe *much* faster convergence in image recognition and marginal/neutral results in final validation accuracy. In general, one should not expect significant differences in final validation accuracy on the standard ResNet+ImageNet combo, assuming that the optimizer has trained the model to convergence.\\n- We observe respectably lower perplexity in language modeling. Note that though the SD bars overlap here, the results are still statistically significant (at the 0.1% confidence level) since using 10 seeds results in a reduced standard error.\\n- We observe neutral results in reinforcement learning.\\n- We observe notable robustness and performance improvements in NMT, as discussed above. The graph is primarily for illustrative purposes, since the metric of interest is BLEU (which is only highlighted in the table).\\n\\n# 6/Overall\\n\\nWe hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the incredibly thorough commentary of our manuscript.\"}",
"{\"title\": \"Review response -- thanks for the feedback! [Part 1 of 2]\", \"comment\": \"We thank the reviewer for their encouraging and constructive feedback.\\n\\nThe reviewer has offered a large number of insightful comments, which is particularly appreciated given the exigence of the review request. For convenience, we address them by number:\\n\\n# 1\\n\\nWe concur with the reviewer's suggestion and have updated Section 3 of the manuscript to provide this brief summary.\\n\\n# 2\\n\\nWe appreciate the pointer to the AggMo algorithm (Lucas et al., 2018), which proposes the additive use of many momentum buffers with different values of beta (the momentum constant). We had tried this in independent preliminary experimentation (toward analyzing many-state optimization), and we found that using multiple momentum buffers yields negligible value over using a single slow-decaying momentum buffer and setting an appropriate immediate discount (i.e. QHM with high beta and appropriate nu). Given the added costs and complexity of using multiple momentum buffers, we decided against discussing many-state optimization.\\n\\nWe believe that the two papers are largely orthogonal, as one paper focuses in depth on two-state optimization, while the other more broadly explores many-state optimization. However, in light of AggMo's existence, we believe it is valuable to comment on the relationship between QHM and AggMo. Specifically, we have updated the manuscript as follows:\\n- In section 4.5, we briefly connect QHM to AggMo.\\n- In Appendix H, we provide a supplemental discussion and comparison with AggMo. Specifically, we perform the autoencoder study from Appendix D.1 of Lucas et al. (2018) with both algorithms, using the EMNIST dataset. In short, we believe that the results of this comparison support the above notion from our preliminary experimentation.\\n\\n# 3\\n\\nWe appreciate the feedback on the presentation of Section 4. 
We have attempted to cater to a diverse audience across the practitioner-theorist spectrum, and the strongest feedback we received pre-submission is that many readers on both ends of the spectrum appreciate having in the main text only:\\n- The analytical form (i.e. update rule) of the discussed algorithm, and a brief efficiency discussion\\n- The succinct \\u201cupshot\\u201d as it relates to QHM (i.e. narrative summary of the recovery result)\\n\\nand for the mathematical derivations and specific recovery parameterizations to be relegated to the appendix. In particular, we have received feedback that the matrix machinery required for most of the recoveries detracts from the main text, and any detailed derivations depend on this machinery.\\n\\nIn recognition of the reviewer's concerns, we have updated Appendix C of the manuscript to be more structured and self-contained (essentially, a more detailed version of Sections 4.2 through 4.4), so that the more theory-minded audience might have an easier time reading without having to switch back-and-forth between Appendix C and the main text.\\n\\nWe would very much welcome suggestions on what specific facts merit inclusion in the main paper, besides the analytical forms of the update rules and narrative relation to QHM.\"}",
"{\"title\": \"Review response -- thanks for the feedback!\", \"comment\": \"We thank the reviewer for their encouraging and constructive feedback. We are heartened that the reviewer has found the algorithms useful for their own applications!\\n\\n# Using multiple momentum buffers\\n\\nWe appreciate the pointer to the AggMo algorithm (Lucas et al., 2018), which proposes the additive use of many momentum buffers with different values of beta (the momentum constant). We had tried this in independent preliminary experimentation (toward analyzing many-state optimization), and we found that using multiple momentum buffers yields negligible value over using a single slow-decaying momentum buffer and setting an appropriate immediate discount (i.e. QHM with high beta and appropriate nu). Given the added costs and complexity of using multiple momentum buffers, we decided against discussing many-state optimization.\\n\\nWe believe that the two papers are largely orthogonal, as one paper focuses in depth on two-state optimization, while the other more broadly explores many-state optimization. However, in light of AggMo's existence, we believe it is valuable to comment on the relationship between QHM and AggMo. Specifically, we have updated the manuscript as follows:\\n- In section 4.5, we briefly connect QHM to AggMo.\\n- In Appendix H, we provide a supplemental discussion and comparison with AggMo. Specifically, we perform the autoencoder study from Appendix D.1 of Lucas et al. (2018) with both algorithms, using the EMNIST dataset. In short, we believe that the results of this comparison support the above notion from our preliminary experimentation.\"}",
"{\"title\": \"The paper presents some interesting results but I found some of the content hard to follow\", \"review\": \"Edit: Following response, I have updated my score from 6 to 7.\\n\\nI completed this review as an emergency reviewer - meaning that I had little time to complete the review. I did not have time to cover all of the material in the lengthy appendix but hope that I explored the parts most relevant to my comments below.\", \"paper_summary\": \"The paper introduces QHM, a simple variant of classical momentum which takes a weighted average of the momentum and gradient update. The authors comprehensively analyze the relationships between QHM and other momentum based optimization schemes. The authors present an empirical evaluation of QHM and QHAdam showing comparable performance with existing approaches.\", \"detailed_comments\": \"I'll use CM to denote classical momentum, referred to as \\\"momentum\\\" in the paper.\\n\\n\\n1) In the introduction, you reference gradient variance reduction as a motivation for QHM. But in Section 3 you defer readers to the appendix for the motivation of QHM. I think that the main paper should include a brief explanation of this motivation.\\n\\n2) The proposed QHM looks quite similar to a special case of Aggregated Momentum [1]. It seems that the key difference is with the use of damping but I suspect that this can be largely eliminated by using different learning rates for each velocity (as in Section 4 of [1]) and/or adopting damping in AggMo. In fact, Section 4.1 in your paper recovers Nesterov momentum in a very similar way. More generally, could one think of AggMo as a generalization of QHM? It averages plain SGD and several momentum steps on different time scales.\\n\\n3) I thought that some of the surprising relations to other momentum based optimizers was the most interesting part of the paper. However, I found the presentation a little difficult. 
There are many algorithms presented but none are explored fully in the main paper. I had to flick between the main paper and appendix to uncover the information I wanted most from the paper.\\n\\nMoreover, I found some of the arguments in the appendix a little tough to follow. For example, with AccSGD you should specify that epsilon is a constant typically chosen to be 0.7. When the correspondence to QHM is presented, it is not obvious that QHM -> AccSGD but not the other way around. I would suggest that you present a few algorithms in greater detail, and list the other algorithms you explore at the end of Section 4 with pointers to the appendix.\\n\\n4) I am not sure that the QHAdam algorithm adds much to the paper. It is not explored theoretically and I found the empirical analysis fairly limited.\\n\\n5) In general, the empirical results support QHM as an improvement on SGD/NAG. But I have some (fairly minor) concerns.\\n\\n a) For Figure 1, it looks like QHM beats QHAdam on MLP-EMNIST. Why not show these on the same plot? This goes back to my point 4 - it does not look like QHAdam improves on QHM and so I am not sure why it is included. The idea of averaging gradients and momentum is general - why explore QHAdam in particular?\\n\\n b) For Figure 2, while I certainly appreciate the inclusion of error bars, they suggest that the performance of all methods is very similar. In Table 3, QH and the baselines are often not just within a standard deviation of each other but also have very close means (relatively).\\n\\n6) I feel that some of the claims made in the paper are a little strong. E.g. \\\"our algorithms lead to significantly improved training in a variety of settings\\\". I felt that the evidence for this was lacking.\\n\\n\\nOverall, I felt that the paper offered many interesting results but clarity could be improved. I have some questions about the empirical results but felt that the overall story was strong. 
I hope that the issues I presented above can be easily addressed by the authors.\", \"minor_comments\": [\"I thought the use of bold text in the introduction was unnecessary\", \"Some summary of the less common tasks in Table 2 should be given in the main paper\"], \"clarity\": \"I found the paper quite difficult to follow in places and found myself bouncing around the appendix frequently. While the writing is good I think that some light restructuring would improve the flow.\", \"significance\": \"The paper presents a simple tweak to classical momentum but takes care to identify its relation to existing algorithms. The empirical results are not overwhelming but at least show QHM as competitive with CM on tasks and architecture where SGD is typically dominant.\", \"originality\": \"To my knowledge, the paper presents original findings and places itself well amongst existing work.\", \"references\": \"[1] Lucas et al. \\\"Aggregated Momentum: Stability Through Passive Damping\\\" https://arxiv.org/pdf/1804.00325.pdf\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thanks for the interest in our paper!\", \"comment\": \"Thanks for the interest in our paper!\\n\\nIn short, momentum cannot recover QHM via this rewriting. Please refer to the discussion thread under AnonReviewer3 for further details.\"}",
"{\"title\": \"Review response -- thanks for the feedback!\", \"comment\": \"We thank the reviewer for their encouraging and constructive feedback.\\n\\n# QHM vs. momentum\\n\\nWe appreciate the reviewer raising this potential point of confusion, and we would like to emphasize that replacing beta with (nu * beta) in momentum *does not* recover QHM.\\n\\nAnalytically, we note that replacing beta with (nu * beta) in Equation 2 propagates nu into the momentum buffer (g_t) via Equation 1, ultimately changing the decay rate of the momentum buffer from beta to (nu * beta). Intuitively, we note that QHM constitutes the *complete* decoupling of the momentum buffer's decay rate (beta) from the current gradient's contribution to the update rule (1 - nu * beta). In contrast, momentum tightly couples the decay rate (beta) and the current gradient's contribution (1 - beta).\\n\\nIt is crucial to understand this difference as it reveals QHM's added expressivity over momentum, and we concur that more explicit discussion of this difference would be beneficial. We have updated the manuscript as follows:\\n- Appendix A.8 analytically demonstrates the difference between the two, in terms of the weight on each past gradient.\\n- Section 3 of the main text briefly and intuitively describes the added expressive power of QHM over momentum, in line with the above explanation.\\n\\n# Incrementality\\n\\nWe appreciate the reviewer's honest assessment of the incrementality of the approach, but respectfully disagree. In the interest of accessibility, we have intentionally presented the simplest possible exposition of the algorithm, rather than the various more complex formulations possible with our original motivation. On first principles, we believe that this simplicity is a benefit rather than a disadvantage. Yet this simplicity belies both theoretical and practical power. 
Theoretically, we have demonstrated that many powerful but opaque optimization algorithms (essentially, all two-state linear first-order optimizers) boil down to decoupling the momentum buffer's decay rate from the current gradient's weight, and we have presented the most direct and efficient method to do so. And practically, we have demonstrated improvements that are at least as significant as the improvement between plain SGD and momentum/NAG.\\n\\nAlthough we wish to err toward understating rather than overstating our contributions, we would be deeply appreciative of any suggestions the reviewer could offer to improve the articulation of these points in the manuscript.\"}",
"{\"comment\": \"Hi,\\n\\nI'm confused by the update rule of QHM. What's the difference between QHM and plain momentum method? From my perspective, we can rewrite eqn (3) and (4) with eqn (1) and (2) but change *beta* to *v beta*. If so, what's the advantage of QHM as we can always tune *beta*.\", \"title\": \"Question about eqn (3) and (4)?\"}",
"{\"title\": \"Simple idea. Impressive results. Some discussion needed to be more convincing.\", \"review\": \"Update after the author response: I am changing my rating from 6 to 7. The authors did a good job at clarifying where the gain might be coming from, and even though I maintain that decoupling the two variables is a simple modification, it leads to some valuable insights and good results which would be of interest to the larger research community.\\n\\n-------\\nIn this paper the authors propose simple modifications to SGD and Adam, called QH-variants, that can not only recover the \\u201cparent\\u201d method but a host of other optimization tricks that are widely used in the applied deep learning community. Furthermore, the resulting method achieves better performance on a suite of different tasks, making it an appealing choice over the competing methods. \\n\\nTraining a DNN can be tricky, and substantial efforts have been made to improve on the popular SGD baseline with the goal of making training faster or reaching a better minimum of the loss surface. The paper introduces a very simple modification to existing algorithms with surprisingly promising results. For example, on the face of it, QHM, which is the modification of SGD, is exactly like momentum except we replace \\\\beta in eq. 1 with \\\\nu*\\\\beta. Without any analysis, I am not sure how such a change leads to a dramatic difference in performance like the first subfigure in Fig. 2. The authors say that the performance of SGD was similar to that of momentum, but performance of momentum with \\\\beta = 0.7*0.999 should be the same as that of QHM. So where is the gain coming from? What am I missing here? Outside of that, the results are impressive and the simplicity of the method quite appealing. The authors put in substantial efforts to run a large number of experiments and provide a lot of extra material in the appendix for those looking to dive into all the details, which is appreciated. 
\\n\\n\\nIn summary, there are a few results that I don\\u2019t quite follow, but the rest of the paper is well organized and the method shows promise in practice. My only concern is the incremental nature of the method, which is only partly offset by the good presentation.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The authors introduce a class of quasi-hyperbolic algorithms that mix SGD with SGDM (or similar with Adam) and show improved empirical results. They also prove theoretical convergence of the methods and motivate the design well. The paper is well-written and contains the necessary references, although I did feel that the authors could have better compared their method against the recent AggMo (Aggregated Momentum: Stability Through Passive Damping by Lucas et al.); it seems like there are a few similarities there.\\n\\nI enjoyed reading this paper and endorse it for acceptance. The theoretical results presented are easy to follow and state the assumptions clearly. I appreciated the fact that the authors aimed to keep the paper self-contained in its theory. The numerical experiments are thorough and fair. The authors test the algorithms on an extremely wide set of problems, ranging from image recognition (including CIFAR and ImageNet) and natural language processing (including the state-of-the-art machine translation model) to reinforcement learning (including MuJoCo). I have not seen such a wide comparison in any paper proposing training algorithms before. Further, the numerical experiments are well-designed and also fair. The hyperparameters are chosen carefully, and both training and validation errors are presented. I also appreciate that the authors made the code available during the reviewing phase. Out of curiosity, I ran the code on some of my workflows and found that there was some improvement in performance as well.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BkeUasA5YQ | LIT: Block-wise Intermediate Representation Training for Model Compression | [
"Animesh Koratana*",
"Daniel Kang*",
"Peter Bailis",
"Matei Zaharia"
] | Knowledge distillation (KD) is a popular method for reducing the computational overhead
of deep network inference, in which the output of a teacher model is used to train a
smaller, faster student model. Hint training (i.e., FitNets) extends KD by regressing a
student model’s intermediate representation to a teacher model’s intermediate
representation. In this work, we introduce bLock-wise Intermediate representation
Training (LIT), a novel model compression technique that extends the use of intermediate
representations in deep network compression, outperforming KD and hint training. LIT has
two key ideas: 1) LIT trains a student of the same width (but shallower depth) as the
teacher by directly comparing the intermediate representations, and 2) LIT uses the
intermediate representation from the previous block in the teacher model as an input to
the current student block during training, avoiding unstable intermediate representations
in the student network. We show that LIT provides substantial reductions in network depth
without loss in accuracy — for example, LIT can compress a ResNeXt-110 to a ResNeXt-20
(5.5×) on CIFAR10 and a VDCNN-29 to a VDCNN-9 (3.2×) on Amazon Reviews without loss in
accuracy, outperforming KD and hint training in network size at a given accuracy. We also
show that applying LIT to identical student/teacher architectures increases the accuracy
of the student model above the teacher model, outperforming the recently-proposed Born
Again Networks procedure on ResNet, ResNeXt, and VDCNN. Finally, we show that LIT can
effectively compress GAN generators. | [
"lit",
"teacher model",
"accuracy",
"intermediate representation training",
"student model",
"hint training",
"intermediate representation",
"intermediate",
"loss"
] | https://openreview.net/pdf?id=BkeUasA5YQ | https://openreview.net/forum?id=BkeUasA5YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hke_ah3ZeV",
"S1g1Bku7JN",
"rkgK05M9A7",
"H1ldq-LwRm",
"B1g01-Lw07",
"SJeoktdZp7",
"rylNtOuWTm",
"Hyg9X__b67",
"ryerNNfWam",
"Byxs1hHsnm",
"S1e_k8xc3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544830143872,
1543892791386,
1543281361061,
1543098768507,
1543098598400,
1541667043201,
1541666940473,
1541666850355,
1541641261446,
1541262307392,
1541174751565
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper804/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper804/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper804/Authors"
],
[
"ICLR.cc/2019/Conference/Paper804/Authors"
],
[
"ICLR.cc/2019/Conference/Paper804/Authors"
],
[
"ICLR.cc/2019/Conference/Paper804/Authors"
],
[
"ICLR.cc/2019/Conference/Paper804/Authors"
],
[
"ICLR.cc/2019/Conference/Paper804/Authors"
],
[
"ICLR.cc/2019/Conference/Paper804/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper804/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper804/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors propose a method for distilling a student network from a teacher network and while additionally constraining the intermediate representations from the student to match those of the teacher, where the student has the same width, but less depth than the teacher. The main novelty of the work is to use the intermediate representation from the teacher as an input to the student network, and the experimental comparison of the approach against previous work.\\n\\n The reviewers noted that the method is simple to implement, and the paper is clearly written and easy to follow. The reviewers raised some concerns, most notably that the authors were using validation accuracy to measure performance, and were thus potentially overfitting to the test data, and regarding the novelty of the work. Some of the criticisms were subsequently amended in the revised version where results were reported on a test set (the conclusions are as before). Overall, the scores for this paper were close to the threshold for acceptance, and while it was a tough decision, the AC ultimately felt that the overall novelty of the work was slightly below the acceptance bar.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Modifies knowledge distillation by training student to match teachers intermediate representation at multiple layers.\"}",
"{\"title\": \"score revision\", \"comment\": \"Thanks for the update. The revised paper reads much better than the original submission. I wish there could be more analyses as to why/how LIT works. I have revised my score accordingly.\"}",
"{\"title\": \"CIFAR100 test accuracy results\", \"comment\": \"We have updated our manuscript with test accuracy results for CIFAR100 (Figure 2c, 2d). As shown, the results have not significantly changed, and LIT outperforms all baselines.\\n\\nBest,\\nLIT team\"}",
"{\"title\": \"Statistical significance\", \"comment\": \"We have conducted a statistical test for significance for the differences throughout the paper (except table 3, see below). Our conclusions are now supported with p-values in the updated manuscript.\\n\\nAll the differences that we have measured in table 3 are significant. We are in the process of running multiple trials of KD and will update the manuscript when it has finished.\"}",
"{\"title\": \"Further results with test accuracy\", \"comment\": \"Dear reviewers,\\n\\nWe have updated our manuscript with test accuracy for CIFAR10 (Figure 2a, 2b). We are in the process of running the other experiments for test accuracy and they will be complete by the camera-ready due date. As shown, the results do not significantly differ when we hyperparameter tune on a validation set and test on a separate test set.\\n\\nBest,\\nLIT team\"}",
"{\"title\": \"Thank you for your review; initial response\", \"comment\": \"Thank you for the thoughtful review. We have responded to your comments inline. We have improved the manuscript based on your feedback. Several experiments are in progress and we will update the manuscript upon completion.\\n\\n1. Hyperparameter tuning\\n\\nThe hyperparameters of alpha and tau are directly taken from KD. The only hyperparameter LIT introduces is beta. We are in the process of updating our results when using separate validation set for hyperparameter selection and test set (see the response to reviewer number 2). Our initial results show that LIT outperforms KD and training from scratch by the same margins.\\n\\n\\n2. Further analysis of training errors\\n\\nThank you for the suggestion. We are in the process of conducting this analysis and will respond once we have completed this analysis.\\n\\n\\n3. Differences in table 3 are small\\n\\nWe are in the process of running the training procedure multiple times and will perform a statistical test upon completion. We will update the manuscript once the analysis has completed. However, the trend of LIT outperforming KD is consistent across architectures (ResNet, ResNeXt, VDCNN), datasets (CIFAR10, CIFAR100, Amazon Reviews), and tasks (image classification, sentiment analysis). Additionally, a 0.5% increase in accuracy corresponds to nearly doubling the depth of the network and corresponds to a 7% reduction in error.\\n\\n\\n4. Pruning method for LIT.\\n\\nWe used standard pruning proposed by Han et al. 2015 (https://arxiv.org/abs/1506.02626), in which small weights are iteratively removed and the network is retrained at each step. We have updated the manuscript to reflect this.\\n\\n\\n5. LIT vs pruning\\n\\nThank you for the comment. We have updated the manuscript to avoid overclaiming. Additionally, pruning typically requires new hardware for improved inference throughput, whereas LIT does not.\\n\\n\\n6. 
Statistical significance of different loss functions.\\n\\nWe are in the process of running the training procedure multiple times and will perform a statistical test upon completion. Once we have the results, we will update the manuscript.\"}",
"{\"title\": \"Thank you for the review; initial response\", \"comment\": \"Thank you for the thoughtful review. We have responded to your comments inline. We have improved the manuscript based on your feedback. Our experiments using a test dataset are in progress and we will update the manuscript upon completion.\\n\\n1. Validation accuracy is used as the performance metric, which might be over-tuned. How is the performance on testing datasets?\\n\\nThank you for your thoughtful question. We agree with your point that validation accuracy may be over-tuned. We have started to run experiments with a separate test set, which will take some time due to our limited computational resources. We have initial results for ResNet on CIFAR10, which also show that LIT outperforms training from scratch and KD. The results are essentially the same as the results currently in the manuscript. For ResNet-110 -> ResNet-20 we found that:\\n- LIT achieves 93.19%,\\n- KD achieves 92.68%,\\n- Training from scratch achieves 91.68%\\nOnce we have completed the rest of the results, we will update the manuscript with test accuracy.\\n\\nWe note that the majority of compression papers (including Han et al. 2015, Hubara et al. 2016, Li et al. 2017 mentioned below, Furlanello et al. 2018, etc.) and the original ResNet and ResNeXt papers use validation accuracy as their primary metric. Additionally, Li et al. 2017 and Conneau et al. 2017 (the original VDCNN paper) refer to validation accuracy as \\u201ctest accuracy.\\u201d To ensure LIT can be compared against other methods, we will also report validation accuracy, as using a separate test set requires using a different set of data.\\n\\n\\n2. 
The writing and organization of the paper need some improvement, especially the experiments section.\\n\\nWe have improved the presentation of the experiments section by removing some redundancy, pointing to the appendix for further experimental details, and adding details for which datasets were used. Are there other points we should address?\\n\\n\\n3. The compression ratio (3-5) is not very impressive compared with other compression techniques with pruning and quantization techniques, such as Han et al. 2015, Hubara et al. 2016.\\n\\nBoth Han et al. 2015 and Hubara et al. 2016 test on older networks (e.g., VGG) where the majority of the weights are in the fully connected layers. Compressing the FC layer can achieve up to ~10x compression, while compressing the convolutional layers achieves around ~1.14x compression. As the majority of weights for these older networks are in the FC layer, this achieves high compression rates for these networks.\\n\\nCompressing modern networks is significantly harder. For example, Li et al. 2017 (https://arxiv.org/pdf/1608.08710.pdf) only achieves ~1.6x compression on ResNet (which achieves significantly higher accuracy than VGG). We believe our results should be compared against other methods for compressing _modern_ networks. We have made this point more clear in the paper.\\n\\nAdditionally, in this work, we focus on compression techniques that can improve inference throughput on existing hardware. Pruning and quantization generally require special hardware (e.g., Han et al. 2016\\u2019s EIE https://arxiv.org/abs/1602.01528) for inference improvements.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for the thoughtful review. We have responded to your comments inline. We have improved the manuscript based on your feedback.\\n\\n1. Compare LIT to removing small weight norm parts of networks.\\n\\nLi et al. 2017 (https://arxiv.org/pdf/1608.08710.pdf) removes small norm filters from networks, including ResNets. They achieve ~1.6x compression for the same ResNets we use in this paper, which significantly underperforms LIT. We have added this reference to the paper.\\n\\n\\n2. Clarifying the training procedure.\\n\\nLIT trains with the combined loss for some number of epochs. Then, LIT trains with just the KD loss after that. We have clarified this in the manuscript.\\n\\n\\n3. Typo in Eq. 2.\\n\\nWe have fixed the typo.\\n\\n\\n4. Changing sections to stages. \\n\\nThank you for pointing out the standard terminology. We have updated sections to stages in the manuscript.\\n\\n\\n5. ImageNet models\\n\\nUnfortunately, training ImageNet models and hyperparameter tuning alpha and beta are computationally expensive. We are currently running these experiments, but they may not complete by the revision close period.\\n\\n\\n6. Hint training in Figure 3.\\n\\nWe were unable to complete hint training experiments in time for the submission, but they have completed. We have added hint training to Figure 3. Briefly, hint training outperforms KD, but underperforms LIT.\\n\\n\\n7. Dataset in Figure 4.\\n\\nFor Figure 4, we used CIFAR10. We have updated the caption to reflect this.\\n\\n\\n8. Choice of IR.\\n\\nWe used the IR after the second stage of the ResNet. We have updated the manuscript to reflect this.\\n\\n\\n9. Overstatements in the paper.\\n\\nWe have fixed these statements in the paper.\\n\\nWe realized the issue for GANs in the paper and conducted the L2 experiment (i.e., KD with a different loss). As we show in the updated paper (Table 2), LIT outperforms this procedure.\\n\\n\\n10. 
References and formatting.\\n\\nWe have added references to the Inception and FID scores. We have additionally fixed the formatting on the last page of citations.\"}",
"{\"title\": \"cute idea but need more analysis\", \"review\": \"This paper proposes a new approach to compress neural networks by training the student's intermediate representation to match the teacher's.\\n\\nThe paper is easy to follow. The idea is simple. The motivation and contribution are clear. The experiments are comprehensive.\\n\\nOne advantage of the proposed approach that the authors did not mention is that LIT without KD can be optimized in parallel, though I'm not sure how useful this is.\\n\\nOne major weakness of the paper is how the hyperparameters, such as the number of layers, the alpha, beta, tau, and so on, are tuned. It is not clear from the paper that there is a separate development set for tuning these values. If the hyperparameters are tuned on the test set, then it is not surprising LIT works better.\", \"here_are_some_minor_questions\": \"p.5\\n\\nLIT outperforms KD and hint training on all settings.\\n--> what are the training errors (cross entropy) for LIT, KD and hint training? what about the KD objectives (on the training set) of the model trained with LIT and the one trained with KD? this might tell us why LIT is better than the two.\\n\\nLIT outperforms the recently proposed Born Again procedure ...\\n--> what are the training errors (cross entropy) before and after the born again procedure? this might help us understand why LIT is better.\\n\\nKD degrades the accuracy of student models when the teacher model is the same architecture\\n--> again, the training errors (cross entropy) might be able to help us understand what is going on.\\n\\np.7\\n\\nAs shown in Table 3, none of the three variants are as effective as LIT or KD.\\n--> is this claim statistically significant? some of the differences are very small.\\n\\nWe additionally pruned ResNets trained from scratch.\\n--> what pruning method is being used?\\n\\nAs shown in Figure 6., LIT models are pareto optimal in accuracy vs model size.\\n--> this is a very strong claim. 
it's better to say we fail to prune the network with the approach, but we don't know whether there exists another approach that can reduce the network size while maintaining accuracy.\\n\\nAs shown, L2 and L1 do not significantly differ, but smoothed L1 degrades accuracy.\\n--> is this claim statistically significant?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A novel approach for compressing deep learning models\", \"review\": \"This paper proposes to compress the model by depth. It uses hint training and knowledge distillation techniques to compress a \\\"deep\\\" network block-wisely. It shows a better compression ratio than knowledge distillation or hint training while achieving comparable accuracy performance.\", \"pros\": \"1. This paper considers block-wise compression. For each block, it uses the output of the teacher's last layer as input during training, which improves the learnability of the student models. \\n2. The experiments include a large range of tasks, e.g., image classification, sentiment analysis and GAN.\", \"cons\": \"1. Validation accuracy is used as the performance metric, which might be over-tuned. How is the performance on testing datasets?\\n2. The writing and organization of the paper need some improvement, especially the experiments section.\\n3. The compression ratio (3-5) is not very impressive compared with other compression techniques with pruning and quantization techniques, such as Han et al. 2015, Hubara et al. 2016.\\n\\nIn summary, I think this is an interesting approach to compress deep learning models. But I think the comparisons should be done in terms of testing accuracy. Otherwise, it is hard to judge the performance of this approach. \\n\\n=== after rebuttal ===\\nThanks for the authors' response. Some of my concerns have been clarified. I increased my rating from 5 to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"paper well presented, experimental validation could be further improved\", \"review\": \"This paper introduces LIT, a network compression framework, which uses multiple intermediate representations from a teacher network to guide the training of a student network. Experiments are designed such that student networks are shallower than teacher networks, while maintaining their width. The method is validated on CIFAR-10 and 100 as well as on Amazon Reviews.\\n\\nThe paper is clearly written and easy to follow. The main novelty of the paper is essentially using the teacher intermediate representations as input to the student network to stabilize the training, and applying the strategy to recent networks and tasks.\\n\\nThe authors claim that they are only concerned with knowledge transfer between layers of the same width, that is, teacher and student networks being designed (by model construction) to have the same number of downsampling operations, while maintaining the same number of stages (referred to as sections in the paper). However, resnet-based architectures have been shown to perform iterative refinement of their features between downsampling operations (see e.g. https://arxiv.org/pdf/1612.07771.pdf and https://arxiv.org/pdf/1710.04773.pdf ). Moreover, these models were also shown to be good regularizers, since they can reduce their model capacity as needed (see https://arxiv.org/pdf/1804.11332.pdf). Therefore, having experiments skipping stages would be interesting, and may allow further compression of the networks (by skipping layers or stages which do not incorporate much transformation). 
Following https://arxiv.org/pdf/1804.11332.pdf, for the sake of completeness, it might also be interesting to compare LIT results to the ones obtained by just removing layers in the teacher network which have small weight norms.\\n\\nIn method, the last sentence before \\\"knowledge distillation loss\\\" suggests the training of student networks might not be done end-to-end. Could the authors clarify this?\\nIt seems there might be a typo in the KD loss of \\\"knowledge distillation loss\\\", equation (2). Shouldn't the second term of the equation be a function of p^T and q^T (with temperature)?\\n\\nI would suggest changing \\\"sections\\\" to stages, as previously introduced in https://arxiv.org/pdf/1612.07771.pdf .\\n\\nAs for the experiments, it would be more interesting to see this kind of analysis on ImageNet (pretrained resnet models are readily available).\\nFigure 3, why not add hint training as well?\\nFigure 4, what's the dataset used here?\\n\\nIn Section 4.2, it seems that the choice of the IR layer in the analysis could have a significant impact. How was the layer chosen for the ablation study experiments?\", \"there_are_a_few_overstatements_in_the_paper\": \"- page 5, paragraph 2: FitNets proposes a general framework to transfer knowledge from a teacher network to a student network through intermediate layers. Thus, the framework itself does not require the student networks to be deeper and thinner than the teacher network.\\n- page 6, \\\"LIT can compress GANs\\\": authors claim to overcome limitations of KD when it comes to applying knowledge transfer to pixel-wise architectures that do not output distributions. 
It seems that changing the loss and using a l2 loss instead is a rather minor change, especially since performing knowledge transfer by means of l2 (although at intermediate layers) has already been explored in FitNets.\\n\\nPlease add references for inception and FID scores.\\nPlease fix references format in page 10.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkeL6sCqK7 | REPRESENTATION COMPRESSION AND GENERALIZATION IN DEEP NEURAL NETWORKS | [
"Ravid Shwartz-Ziv",
"Amichai Painsky",
"Naftali Tishby"
] | Understanding the groundbreaking performance of Deep Neural Networks is one
of the greatest challenges to the scientific community today. In this work, we
introduce an information theoretic viewpoint on the behavior of deep networks
optimization processes and their generalization abilities. We study the Information
Plane, the plane of the mutual information between the input variable and
the desired label, for each hidden layer. Specifically, we show that the training of
the network is characterized by a rapid increase in the mutual information (MI)
between the layers and the target label, followed by a longer decrease in the MI
between the layers and the input variable. Further, we explicitly show that these
two fundamental information-theoretic quantities correspond to the generalization
error of the network, as a result of introducing a new generalization bound that is
exponential in the representation compression. The analysis focuses on typical
patterns of large-scale problems. For this purpose, we introduce a novel analytic
bound on the mutual information between consecutive layers in the network.
An important consequence of our analysis is a super-linear boost in training time
with the number of non-degenerate hidden layers, demonstrating the computational
benefit of the hidden layers. | [
"Deep neural network",
"information theory",
"training dynamics"
] | https://openreview.net/pdf?id=SkeL6sCqK7 | https://openreview.net/forum?id=SkeL6sCqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"G9SX2QT_KG6",
"r1g0JdLXx4",
"S1xlhulPCX",
"Skg-RwX2aX",
"rJx3jDQham",
"rJx9I0BgaQ",
"BkxBcybAh7",
"B1gMg00tn7"
],
"note_type": [
"comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1664006957266,
1544935397943,
1543076007954,
1542367176752,
1542367140092,
1541590610266,
1541439373364,
1541168618084
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper803/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper803/Authors"
],
[
"ICLR.cc/2019/Conference/Paper803/Authors"
],
[
"ICLR.cc/2019/Conference/Paper803/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper803/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper803/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"The above comment posted on 25 Nov 2018 is wrong. I will put a new comment here to correct the mistake.\\n\\nI reviewed the paper, Achille https://arxiv.org/abs/1706.01350, recommending for acceptance of the paper at JMLR. So, I feel obligated to avoid the error in the community here.\", \"the_claim_of_achile_https\": \"//arxiv.org/abs/1706.01350 for the following is invalid: information in the weights bounds information in the activations, which is the form of compression discussed in this paper. Technically, Achille only shows that the negative entropy of the normal distribution bounds the information in the activations. Please see the proof of Proposition 4.1 to understand this. The negative entropy of the normal distribution and the information in the weights are very different, unless we make a strong and impractical assumption as is done in Achille. I still recommended for acceptance since this paper provides other contributions than this one.\\n\\nNote that the normal distribution here is not even related to any distribution of the dataset or learning algorithm. Instead, the normal distribution is arbitrary, as it corresponds to the noise that the authors added to the weights arbitrarily to get this result. So, we can change the entropy arbitrarily without changing the dataset distribution and learning algorithm. So, this cannot have any meaningful relation with information of weights and activations. 
\\n\\nBut, if we say that \\\"Achille https://arxiv.org/abs/1706.01350 proves that flatness bounds information in the weights, and information in the weights bounds information in the activations, which is the form of compression discussed in this paper\\\", you are claiming that the negative entropy of the normal distribution (with an arbitrary variance) should somehow be a good approximation of the mutual information of weights and dataset and in turn a good bound on the mutual information of representation and input, in general or in practical settings. This is a strange claim and I think nobody will agree with this. \\n\\nSo, for this aspect, Achille https://arxiv.org/abs/1706.01350 is using a bad presentation \\\"trick\\\" to hide the fact that their $\\\\tilde I(w;S)$ is just the entropy of the normal distribution H(b) (- a constant) where b is the normal random variable. So, first, $\\\\tilde I(w;S)$ is not $I(w;S)$. Second, $\\\\tilde I(w;S)$ is H(b) (- a constant). So, a better and more honest notation is to replace $\\\\tilde I(w;S)$ with $\\\\tilde H(b)$. Then, you can easily see that there is no technical contribution in this paper that connects information in the weights and information in the activations. But, again it is a good paper providing other contributions, which is why I recommended for acceptance.\", \"title\": \"The claim of the previous work by Achille is invalid in the context of the present paper\"}",
"{\"metareview\": \"The authors admit the paper \\\"was not written carefully enough and requires major rewriting.\\\" This seems to be a frustratingly common phenomenon with work on the information bottleneck.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Needs a rewrite\"}",
"{\"comment\": \"The relation between compression (information reduction), flat minima (SGD), and generalization is also described in Achille https://arxiv.org/abs/1706.01350, which proves that flatness bounds information in the weights, and information in the weights bounds information in the activations, which is the form of compression discussed in this paper. That work should be referenced.\", \"title\": \"related work\"}",
"{\"title\": \"Authors' response to the reviewers' comments - part 2\", \"comment\": \"4. This new bound directly leads to the empirical representation compression in the information plane, as reported by Shwartz-Ziv and Tishby for both saturated and ReLU nonlinearities, without any assumption on binning or discretization of the units!\\n\\n5. This result refutes the main claims of Saxe et al: (a) that the observed compression depends on the binning (b) that it results from the saturation of the units and (c) has nothing to do with the stochastic gradients or generalization.\\n\\n6. It also gives the first proof, to our knowledge, that convergence to flat minima improves generalization, as conjectured by many others without any mathematical explanation.\\n\\n7. We finally briefly scratched (due to lack of space) our most striking corollary: due to this diffusion compression, the convergence to good generalization is faster with more hidden layers and the convergence time scales as a negative power of the number of effective layers. We agree that this striking new result is hard to understand from this paper alone and requires a separate publication.\\n\\nReferences - \\n[1]Saxe, A. M., Bansal, Y., Dapello, J., Advani, M., Kolchinsky, A., Tracey, B. D., & Cox, D. D. On the information bottleneck theory of deep learning, ICLR, 2018\\n[2] Tishby, Naftali, and Noga Zaslavsky. \\\"Deep learning and the information bottleneck principle.\\\" Information Theory Workshop (ITW), 2015 IEEE. IEEE, 2015.\\n[3] Shwartz-Ziv, Ravid, and Naftali Tishby. \\\"Opening the black box of deep neural networks via information.\\\" arXiv preprint arXiv:1703.00810 (2017).\\n[4] Qianxiao Li, Cheng Tai, and Weinan E. Stochastic modified equations and adaptive stochastic gradient algorithms. arXiv:1511.06251, 2015.\\n[5] Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate Bayesian inference. 
The Journal of Machine Learning Research, 18(1):4873\\u20134907, 2017.\\n[6] Chris Junchi Li, Lei Li, Junyang Qian, and Jian-Guo Liu. Batch size matters: A diffusion approximation framework on nonconvex stochastic gradient descent. arXiv:1705.07562v1, 2017\\n[7] Samuel L Smith and Quoc V Le. A Bayesian perspective on generalization and stochastic gradient descent. arXiv:1710.06451, 2018.\\n[5] Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv:1710.11029, 2017.\\n[8] Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in SGD. arXiv:1711.04623, 2017.\\n[9] Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, and Jinwen Ma. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from minima and regularization effects. arXiv:1803.00195, 2018.\\n[10] Jing An, Jianfeng Lu, and Lexing Ying. Stochastic modified equations for the asynchronous stochastic gradient descent. arXiv:1805.08244, 2018.\"}",
"{\"title\": \"Authors' response to the reviewers' comments - part 1\", \"comment\": \"We thank the reviewers for their comments.\\n\\nWe agree with the reviewers that the submitted paper was not written carefully enough and requires major rewriting.\\n\\nYet, the reviewers, in particular reviewer 1, missed or dismissed our main and new results, which rigorously refute - one by one - the misleading claims of Saxe et al. [1].\\n\\nThe Information Bottleneck theory of Deep Learning [2-3] has received significant attention in the past year, as can be seen from the number of related, or inspired by, submissions to this conference alone. This is despite the fact that the theory was not properly and correctly described anywhere (certainly not by Saxe et al. 2017 despite their title). Most of this impact is due to Tishby\\u2019s presentations and online talks. This is the reason we found it necessary to first review some of its basic claims. This review was obviously too long for this paper as the really new results were squeezed into the last pages.\", \"our_main_novel_results_are_summarized_below\": \"1. We provide a rigorous proof (Thm. 2) that the mutual information between successive layers decreases during the diffusion phase of the SGD training - for any nonlinearity of the units, saturated, linear, or piecewise linear such as ReLU. \\n\\n2. The only important assumption in our proof is that there is a distinct diffusion phase in the SGD training, as reported and well established by many others [5-10]. This phenomenon is related to the convergence to \\u201ca flat minimum\\u201d of the training error. We also assume that the mini-batches are statistically independent and that the layers are sufficiently wide to justify our usage of the central limit theorem for the diffusion weights. All other assumptions are standard technical conditions which are met with probability 1 in standard deep learning. 
Our results do not rely in any way on continuous time SGD, nor on the assumption that the gradient fluctuations are Gaussian - these requirements are clearly confusing and irrelevant. The continuous time approximation to SGD is in fact justified in [4], but is not essential to our analysis in this paper.\\n\\n3. To demonstrate this result, numerical simulations in this paper have been done with ResNets with RelU nonlinearities, as explicitly stated in the paper - in contrast to the claim of reviewer 2.\"}",
"{\"title\": \"Interesting, but hard to interpret the technical results.\", \"review\": \"This paper presents some results about the information bottleneck view of generalization in deep learning studied in recent work by Tishby et al.\\nSpecifically, this line of work seeks to understand the dynamics of stochastic gradient descent using information theory. In particular, it quantifies the mutual information between successive layers of a neural network. Minimizing mutual information subject to empirical accuracy intuitively corresponds to compression of the input and removal of superfluous information.\\nThis paper further formalizes some of these intuitive ideas. In particular, it gives a variance/generalization bound in terms of mutual information and it proves an asymptotic upper bound on mutual information for the dynamics of SGD.\\n\\nI think this is an intriguing line of work and this paper makes a meaningful contribution to it. The paper is generally well-written (modulo some typos), but it jumps into the technical details (stochastic calculus!) without giving much intuition to help digest the results or discussion of how they relate to the broader picture. (Although I appreciate the difficulty of working with a page limit.) \\n\\nTypos, etc.:\\np1. \\\"ereas\\\" should be \\\"whereas\\\"\\np2. double comma preceding \\\"the weights are fixed realizations\\\"\\np5. extra \\\"of\\\" in \\\"needed to represent of the data\\\"\\nThm 1. L(T_m) has not been formally defined when T_m contains a set of representations rather than data points.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Similar to previous, fails to mention criticisms of the research program\", \"review\": \"This paper interprets the optimization of deep neural networks in terms of a two phase process: first a drift phase where gradients self average, and second a diffusion phase where the variance is larger than the square of the mean. As argued by first by Tishby and Zaslavsky and then by Shwartz-Ziv and Tishby (arxiv:1703.00810), the first phase corresponds to the hidden layers becoming more informative about the labels, and the second phase corresponds to a compression of the hidden representation keeping the informative content relatively fixed as in the information bottleneck of Tishby, Pereira, and Bialek.\\n\\nA lot of this paper rehashes discussion from the prior work and does not seem sufficiently original. The main contribution seems to be a bound that is supposed to demonstrate representation compression in the diffusion phase. The authors further argue that this shows that adding hidden layers lead to a boosting of convergence time.\\n\\nFurthermore, the analytic bound relies on a number of assumptions that make it difficult to evaluate. One example is using the continuum limit for SGD (1), which is very popular but not necessarily appropriate. (See, e.g., the discussion in section 2.3.3 in arxiv:1810.00004.)\\n\\nAdditionally, there has been extensive discussion in the literature regarding whether the results of Shwartz-Ziv and Tishby (arxiv:1703.00810) hold in general, centering in particular on whether there is a dependence on the choice of the hyperbolic tangent activation function. I find it highly problematic that the authors continue to do all their experiments using the hyperbolic tangent, even though they claim their analytic bounds are supposed to hold for any choice of activation. If the bound is general, why not include experimental results showing that claim? 
The lack of discussion of this point and the omission of such experiments is highly suspicious.\\n\\nPerhaps more importantly, the authors do not even mention or address this contention or even cite the Saxe et al. paper (https://openreview.net/forum?id=ry_WPG-A-) that brings up this point. They also cite Gabrie et al. (arxiv:1805.09785) as promising work about computing mutual information for deep networks, while my interpretation of that work was pointing out that such methods are highly dependent on choices of binning or regulating continuous variables when computing mutual informations. In fact, I don't see any discussion at all of this discretization problem, when it seems absolutely central to understanding whether there is a sensible interpretation of these results or not.\\n\\nFor all these reasons, I don't see how this paper can be published in its present form.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A paper written in a rush; clarity is the main problem.\", \"review\": \"The authors are providing an information theoretic viewpoint on the behavior of DNNs based on the information bottleneck. The clarity of the paper is my main concern. It contains quite a number of typos and errors. For example, in Section 6, the results of MNIST in the first experiment were presented after introducing the second experiment. Also, the results shown in Fig. 1b seem to have nothing to do with Fig. 1a. It makes use of some existing results from other literature, but it is not clearly explained how and why the results are being used. It might be a very good paper if the writing could be improved. The paper also contains some experimental results. But they are too brief and I do not consider the experiments as sufficient to justify the correctness of the bounds proved in the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HyxBpoR5tm | Adversarially Robust Training through Structured Gradient Regularization | [
"Kevin Roth",
"Aurelien Lucchi",
"Sebastian Nowozin",
"Thomas Hofmann"
] | We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks vis-a-vis adversarial perturbations. Our regularizer can be derived as a controlled approximation from first principles, leveraging the fundamental link between training with noise and regularization. It adds very little computational overhead during learning and is simple to implement generically in standard deep learning frameworks. Our experiments provide strong evidence that structured gradient regularization can act as an effective first line of defense against attacks based on long-range correlated signal corruptions. | [
"Adversarial Training",
"Gradient Regularization",
"Deep Learning"
] | https://openreview.net/pdf?id=HyxBpoR5tm | https://openreview.net/forum?id=HyxBpoR5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1eV0Vn-lV",
"rJem8EPbxE",
"BylCiAXq0Q",
"S1gZPPycRX",
"B1goHwJqR7",
"S1ew11U4RQ",
"B1lhp0BVCm",
"BkxyyjB40m",
"BJe7jcB4Am",
"HJlYFYr4CQ",
"rJgEwtBNC7",
"HklHzUwsn7",
"HylEkYgY3Q",
"H1eTf52unm",
"rJgQCEy8cQ",
"Bkl2bNy8q7",
"H1e-Zmy85X",
"SkgpUeJI5Q",
"S1l41x189Q",
"B1xN-wnf5X",
"rylfcB3MqX",
"SkexZVhMcQ",
"BJgbuknzqm",
"ryxGk0jM9Q"
],
"note_type": [
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"comment"
],
"note_created": [
1544828108489,
1544807498981,
1543286437925,
1543268184996,
1543268163255,
1542901470588,
1542901443873,
1542900438820,
1542900378607,
1542900097006,
1542900060323,
1541269004548,
1541109979860,
1541093909083,
1538811082712,
1538810883917,
1538810616548,
1538809940712,
1538809820320,
1538602747826,
1538602377968,
1538601975872,
1538600808613,
1538600409765
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper802/Area_Chair1"
],
[
"~Yifei_Wang1"
],
[
"ICLR.cc/2019/Conference/Paper802/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper802/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper802/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper802/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper802/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"ICLR.cc/2019/Conference/Paper802/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Reviewers are in a consensus and recommended to reject after engaging with the authors. Further, many additional questions raised in the discussion should be addressed in the submission to improve clarity. Please take reviewers' comments into consideration to improve your submission should you decide to resubmit.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper decision\"}",
"{\"comment\": \"I think the Structured Gradient Regularization you propose is very very similar to the classical **natural gradient** or the closely related **Gauss-Newton** method. In natural gradient they also approximate the deviation constraint by second order Taylor expansion (also drop higher order term in Hessian), resulting in the Fisher Information Matrix (FIM). The FIM term is then added back to the objective as a penalty, which is the same 'data-dependent' or 'structured' regularization in your paper. Indeed FIM could be seen as a local metric in the Riemann Manifold defined by current position so it's data-dependent. The only difference may lie in that, the natural gradient can have a closed form solution while you still utilize gradient descent (which solves the linearised objective), but this is no big deal.\", \"title\": \"Connection to natural gradient?\"}",
"{\"title\": \"Thank you for the clarifications\", \"comment\": \"I thank the authors for their clarifications and apologize for my delayed response.\\n\\n- Merits of structured gradient regularization -\\nI agree that SGR has some potential conceptual merits, but the studies in this paper are not yet sufficient to demonstrate that these merits translate into practice. More broadly, I believe that regularization approaches for robust learning could indeed have many benefits, in terms of - (1) improving the generalization performance (2) offering a computationally less expensive alternative for adversarial training (by not requiring adversarial examples to be computed by an involved process like PGD) or (3) lower the sample complexity required in robust learning. However, the paper, in its current version does not provide convincing evidence on any of these fronts.\\n\\n- Combining SGR with adversarial training -\\nI think this investigation is important to establish the merits of this approach, in the light of the other empirical results. In particular, I believe it would be really valuable if the generalization gap observed between train and test adversarial accuracies with adversarial training is decreased when you train with adversarial training + SGR.\\n\\n- Covariance function -\\nI thank the authors for this clarification. \\n\\n- Attack accuracy vs area under the attack curve -\\nI agree that reporting AUC may have merits as an evaluation approach. However, as this is not standard in the robustness literature, I think it is essential for the authors to also include the results without averaging to make it easier to evaluate in the light of prior work.\\n\\n- Cancellation of Laplacian terms -\\nI thank the authors for the clarification. 
But I do not agree that these properties that hold for training with *white* noise or in the standard setting can be claimed (without further analysis) to hold in the adversarial setting.\\n\\n- Long-range correlated noise attack -\\nI think the idea of investigating the structure of attacks proposed in this paper is interesting. But it warrants further exploration. For instance, I would like to see how state-of-the-art robust models do wrt these LRC attacks. I also agree with AnonReviewer2\\u2019s comments that an investigation on the relationship between the structure of attacks and robustness warrants a deeper theoretical and empirical investigation.\\n\\n- Decay length approaching zero -\\nThank you for clarifying.\\n\\n- SGR/GN white-box and transfer attack accuracies -\\nI thank the authors for the clarification, but I am still not convinced by these results. As I mentioned in my response to \\u201c- Merits of structured gradient regularization -\\u201d above, I think there are multiple avenues to demonstrate the merits of SGR as a defense (if it does match SOTA approaches currently), but I do not think they have been sufficiently demonstrated in this paper.\\n\\n- Evidence that SGR reduces overfitting -\\nCould the authors include these results in the manuscript?\\n\\nI think this paper tackles an important question and raises some interesting points (about the relationship between the structure of adversarial perturbations and robustness). However these have not been sufficiently explored in the paper and I find the empirical investigation lacking.\"}",
"{\"title\": \"Thank you for clarifications II\", \"comment\": \"5) Thank you for clarifying the DeepFool column. Though I don't have any immediate suggestions, it seems that this has been a source of confusion to other readers and should probably be addressed. Maybe further explanation in the text?\\n\\nI largely agree with your comments on AUC - perhaps this would be a better measure. However, I still believe that this makes comparing to existing work more difficult. Perhaps Figure 4 could be produced for a few models and pointed to in the main text (so that the new figures remain in the appendix, if you prefer).\\n\\nI don't see statistical parity with adversarial training as especially exciting - especially as robust optimization adversarial training is not included [4]. My biggest concern with the work still lies in the soundness of the empirical study. I do not feel that sufficient evidence has been provided to recommend decay length of signal corruptions as a good measure of robustness (or attack strength) but there are some interesting findings here that I would like to see explored further. I am also unconvinced by the results presented for SGR, in particular that it does not seem to offer any advantage over GN regularization.\", \"response_to_minor_comment\": \"> Even if the covariance structure is computed from one single example, the SGR regularized classifier is only ever evaluated on the clean input, i.e. adversarial perturbations are never fed to the classifier. It thus seems impossible that the classifier performs better on perturbed examples than on clean inputs and in practice we also did not observe this.\\n\\nThis seems like a subtle point. The classifier is used to produce the adversarial perturbations which build the covariance matrix. The computation graph is then \\\"broken\\\" so that no gradient is passed through the network using these perturbations, but the covariance matrix is used as a regularization term. 
From comment (3) above it feels that in some special cases this may end up looking very similar to existing approaches that use gradient smoothing/adversarial training (minus the covariance running average). In summary, it still isn't obvious to me that overfitting is impossible. If you only learn the covariance structure of single step gradient attacks local to each training datapoint how can you argue generalization to new attacks (higher order, new threat models e.g. L2 vs L infinity, decision based attacks, transfer attacks) on test data?\", \"short_summary\": \"There are some interesting parts to this work but I feel that there is insufficient evidence to support these. My issue still lies mostly with the empirical evaluation.\\n\\n[1] Simon-Gabriel et al. \\\"Adversarial vulnerability of neural networks increases with input dimension\\\" https://arxiv.org/pdf/1802.01421.pdf\\n[2] Miyato et al. \\\"Virtual adversarial training: A regularization method for supervised and semi-supervised learning\\\" https://arxiv.org/abs/1704.03976\\n[3] Tsipras et al. \\\"There Is No Free Lunch In Adversarial Robustness (But There Are Unexpected Benefits)\\\" https://arxiv.org/abs/1805.12152v2 \\n[4] Madry et al. \\\"Towards deep learning models resistant to adversarial attacks\\\" https://arxiv.org/abs/1706.06083\"}",
"{\"title\": \"Thank you for clarifications I\", \"comment\": \"Thank you for your detailed response and my apologies for writing my own later than should be acceptable.\\n\\n1) I don't see this as an especially pivotal part of my review. My point was that computing the quadratic form required for the SGR regularizer may be more computationally efficient if you take advantage of the structure of the covariance matrix - avoiding computing it directly. When using the running average I don't see how this could be easily achieved.\\n\\n2) Thank you for clarifying. Unfortunately, this still seems like quite a weak argument to me (though intuitively it makes sense). Is there anything from the regularization literature that you cite which may help to justify this technique preventing overfitting? Is this something that you could justify empirically? I know that we are past the revision date at this point so I want to assure you that this is not a critical part of my review but is something I would be interested to hear your thoughts on.\\n\\n3) I think this is an important point and one that does require further thinking. It seems to me that in some cases the SGR algorithm will reduce to a small-epsilon form of adversarial training. In this case - what is special about SGR that has it outperform adversarial training? See [1,2] for some description of how adversarial training can be interpreted as gradient smoothing. This is fairly easy to see by looking at f(x+d) - f(x), for d given by e.g. single gradient steps in the limit of small perturbations.\\n\\n4) Thank you for clarifying. I still don't see this explanation in the main paper (or an accompanying citation). Am I missing this? I think it would be reasonable to include some description for those of us who aren't overly familiar with this terminology.\\n\\nI still want to put emphasis behind a comment from my initial review. You hypothesise that putting too much emphasis on short-range correlations leads to vulnerability. 
But you do not test the more interesting converse at all: reducing dependence on short-range correlations improves robustness. Nor do you show that SGR is able to reduce dependence on short-range correlations. You raise two points here (denoted i and ii) which seem interesting and important to me. If (i) holds, then does this indicate that a stronger attack may work even better against the model? If not, then why is decay-length still a meaningful indicator of robustness? I think that (ii) seems potentially more interesting, perturbing the low-frequency features (if I understand correctly) would have some effect on the semantic meaning of the perturbation - similar to that observed in adversarial training. [3]\\n\\nTo me, this is an important part of the theoretical discussion in this paper but it is underexplored - both empirically and analytically. I acknowledge that there may be difficulty when disentangling the covariance structure due to the model and attack but if this is the case then it seems unreasonable to conclude that dependence on short-range correlations => vulnerability.\"}",
"{\"title\": \"Detailed reply, highlighting our main contributions (part 2)\", \"comment\": \"- SGR/GN white-box and transfer attack accuracies -\\nAs stated in Section 4.4, SGR/GN trained models achieve white-box attack accuracies that are intermediate between those of the clean model and adversarially trained models. We would like to note, however, that we do not equate \\u201crobustness\\u201d with \\u201cwhite-box attack accuracy\\u201d. If we look at the transfer-attack accuracies (bold-face numbers), then SGR and GN trained models are statistically on par with adversarially trained models. \\n\\nWe would also like to add that the PGD white-box attack accuracies reported for SGR and GN trained models are within one standard deviation of each other, which is 0.5 % (computed over 10 runs). We can therefore only conclude that SGR and GN trained models achieve statistically indistinguishable accuracies.\\n\\n- Transfer attack strength -\\nThis could in part be due to PGD adversarial training resulting in adversarial perturbations that become easier to classify instead of the classifier actually becoming more robust. See for instance [Athalye et al. Obfuscated gradients give a false sense of security, 2018.] or [Galloway et al., Adversarial training versus weight decay, 2018].\\n\\n- Are gradient regularization based defenses only giving a very local picture of the landscape? -\\nNot necessarily. If adversarial vulnerability is an intrinsic property of the network, regularization as well as other adversarial robustification methods might remedy this vulnerability without having to search for adversarial perturbations in a certain neighborhood around each data point in the first place. \\n\\n- Evidence that SGR reduces overfitting -\\nWe did compute numbers for the training accuracy - test accuracy generalization gap for the various training methods considered in our paper. 
What we see is that clean, PGD and FGSM trained models have generalization gaps of around 10-14% whereas GN and SGR trained models have generalization gaps of around 5-7%.\"}",
"{\"title\": \"Detailed reply, highlighting our main contributions (part 1)\", \"comment\": \"We would like to thank the reviewer for his/her valuable feedback.\\n\\n- Merits of structured gradient regularization -\", \"sgr_has_several_conceptual_merits\": \"Firstly, one of the main contributions of our work is to ** derive structured gradient regularization ** as a tractable approximation to training with correlated perturbations. SGR is a generalization of gradient norm (GN) regularization: while GN provides an approximation to training with white noise, SGR provides an approximation to training with arbitrarily correlated noise. This is in line with a large body of work on the equivalence between regularization and robust optimization. See our reply titled \\\"Our work is a strict generalization of previous work and regularization was proven to be equivalent to robust optimization in certain settings.\\\" for a list of references. \\n\\nSecondly, while robust optimization aims at approximating the worst-case distribution, we propose to efficiently approximate expectations over corrupted distributions through structure-informed regularization. Conceptually, rather than perturbing each data point individually, our starting point is to learn a corruption model, i.e. to use a generative mechanism to learn adversarial perturbations from examples. In practice, we propose to approximate such a corruption model by adaptively learning the structure of adversarial perturbations.\\n\\nThirdly, SGR can leverage the fact that adversarial examples might live in low-dimensional subspaces. 
Quoting from [Moosavi-Dezfooli et al, \\u201cUniversal adversarial perturbations\\u201d, 2017]: \\u201cWe hypothesize that the existence of universal perturbations fooling most natural images is partly due to the existence of such a low-dimensional subspace that captures the correlations among different regions of the decision boundary.\\u201d SGR can leverage this by penalizing gradients that lie within such a subspace.\\n\\n- Combining SGR with adversarial training -\\nIt has certainly occurred to us to combine SGR with adversarial training. However, in the interest of transparency, we believe it is more clear to benchmark and compare regularization and adversarial training individually. Nevertheless, we will investigate combining them.\\n\\n- Covariance function -\\nThe covariance function is just a simple parametrization of the covariance matrix in terms of the displacement between pixels, as is well-known in computer vision. We apologize for omitting to specify that the PGD attack was L_infty constrained. \\n\\n- Attack accuracy vs area under the attack curve -\\nReporting area under the attack curve serves two purposes. Firstly, it addresses the potential danger of overfitting to a specific attack epsilon. Secondly, it mimics the realistic scenario in which the attacker tries to fool the classifier with as small a perturbation as possible. That said, we believe that an even more realistic performance measure would give less weight to larger perturbations that are easier to detect and give relatively more weight to smaller ones that are harder to detect. (Note, the numbers we currently report give equal weight to different perturbation strengths.)\\n\\n- Cancellation of Laplacian terms -\\nThe underlying assumption is that Eq. (10) and Eq. (5) coincide to order O(||\\\\xi ||^3) at the Bayes optimum, which is within the precision to which we truncate. This assumption is rather common in the literature, see e.g. [Bishop. 
Training with noise is equivalent to Tikhonov regularization, 1995] or [An, G. The Effects of Adding Noise During Backpropagation Training on a Generalization Performance. 1996]. Alternatively, Eq. (10) can also be seen as a Levenberg-Marquardt approximation of Eq. (5), if one does not want to invoke the Bayes optimality argument, see Section 5.4.1 in Bishop\\u2019s Pattern Recognition and Machine Learning book.\\n\\n- Long-range correlated noise attack -\\nWe do not claim that the LRC attack can break existing methods. The purpose of the LRC attack experiment is solely to establish whether there is a potential benefit in using a structured covariance matrix in the SGR regularizer versus using an \\u201cunstructured\\u201d diagonal covariance (corresponding to gradient-norm regularization) in the presence of long-range correlated noise. In other words, this experiment simply tests whether the SGR regularizer extracts useful information about the long-range correlation structure of the perturbations, which it indeed does. \\n\\n- Decay length approaching zero -\", \"the_quoted_statement_is_indeed_trivial\": \"if SGR is trained from scratch with a covariance matrix that is close to the identity matrix (i.e. the covariance matrix has a decay length close to zero), its performance will be similar to that of GN, as shown in Figure 3. Note that each data point in Figure 3 corresponds to (an average of five) networks that have been trained from scratch with a covariance matrix of the given decay length.\"}",
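The relationship the authors describe between SGR and GN (GN as the identity-covariance special case of SGR's Mahalanobis-type penalty on the input gradient) can be sketched in a few lines of NumPy. This is our own illustrative reconstruction, not the paper's implementation; the function names are invented.

```python
import numpy as np

# Illustrative sketch (ours, not the paper's code): SGR penalizes the
# Mahalanobis norm g^T Sigma g of the loss gradient g w.r.t. the input;
# gradient-norm (GN) regularization is the special case Sigma = identity.

def sgr_penalty(grad_x, cov):
    """Structured gradient regularizer: g^T Sigma g."""
    return float(grad_x @ cov @ grad_x)

def gn_penalty(grad_x):
    """Gradient-norm regularizer: g^T g, i.e. SGR with identity covariance."""
    return float(grad_x @ grad_x)
```

With an identity covariance the two penalties coincide, which mirrors the reply's point (and Figure 3) that SGR performance approaches GN performance as the covariance approaches the identity.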
"{\"title\": \"Detailed reply, highlighting our main contributions (part 2)\", \"comment\": \"6) White-box and transfer attack accuracy results (II)\\nAs stated in Section 4.4, SGR/GN trained models achieve white-box attack accuracies that are intermediate between those of the clean model and adversarially trained models. We would like to note, however, that we do not equate \\u201crobustness\\u201d with \\u201cwhite-box attack accuracy\\u201d. If we look at the transfer-attack accuracies (bold-face numbers), then SGR and GN trained models are statistically on par with adversarially trained models. \\n\\nWe would also like to add that the PGD white-box attack accuracies reported for SGR and GN trained models are within one standard deviation of each other, which is 0.5 % (computed over 10 runs). We can therefore only conclude that SGR and GN trained models achieve statistically indistinguishable accuracies.\", \"minor_comments\": [\"The purpose of data augmentation is to induce invariance of the output (i.e. the classifier predictions) w.r.t. a set of input transformations. A robust classifier should - to some extent - also be invariant to adversarial examples.\", \"Section 7.1 should start without the (iii) typo.\", \"Even if the covariance structure is computed from one single example, the SGR regularized classifier is only ever evaluated on the clean input, i.e. adversarial perturbations are never fed to the classifier. It thus seems impossible that the classifier performs better on perturbed examples than on clean inputs and in practice we also did not observe this.\"]}",
"{\"title\": \"Detailed reply, highlighting our main contributions (part 1)\", \"comment\": \"We would like to thank the reviewer for his/her valuable feedback.\\n\\n1) Hutchinson trace estimation trick\\nThe Hutchinson trace estimation trick doesn\\u2019t seem to be relevant for our regularizer: we are not primarily concerned with the problem of estimating the trace of the covariance matrix, but we are rather interested in leveraging the sparseness of the covariance-gradient matrix-vector product. Irrespective of that, we can already efficiently aggregate batch estimates for the covariance structure in our regularizer, as the input gradient of the per-sample cross-entropy loss is often available as a highly optimized callable operation in modern deep learning frameworks. Nevertheless, it is an interesting suggestion which we would be happy to investigate further.\\n\\n2) What is the purpose of the running average in the covariance?\\nThe decay rate \\u03b2 allows us to trade off weighting between current (\\u03b2 \\u2192 1) and past (\\u03b2 \\u2192 0) batch averages. The idea of using smaller decay rates is that this should avoid overfitting to a specific attack: the more of the history we take into account (i.e. the more momentum), the less likely the model is to overfit on specific perturbations. Our choice of \\u03b2=0.1 was inspired by momentum-based adaptive optimization algorithms like Adam, which also by default gives a weight of 0.1 to current gradients and a weight of 0.9 to past gradients. We did not observe a big difference in our experiments for other values of \\u03b2.\\n\\n3) SGR algorithm vs. adversarial training as gradient smoothing\\nOur regularizer is informed by the covariance structure of adversarial perturbations, which for simple perturbations, like FGM, is indeed given by the covariance of the input-output gradient. 
That said, it seems well worth exploring whether adversarial training can be interpreted as gradient smoothing and how this is connected to SGR regularization.\\n\\n4) Covariance structure of adversarial perturbations and how it might change\\nThe decay length is defined as the displacement over which the covariance function decays to 1/e of its value. The covariance function is just a simple parametrization of the covariance matrix in terms of the displacement between pixels, as is well-known in computer vision. Based on the observation that unregularized/undefended classifiers are vulnerable to short-range structured corruptions, we thus conjecture that they give too much weight to short-range correlations (high-frequency patterns) and not enough weight to long-range ones (globally relevant low-frequency features).\\n\\nThe question of how this structure may change when robustifying the model through adversarial training or SGR regularization is indeed interesting. What makes this analysis complicated, however, is the fact that the ** covariance structure not only depends on the model but also on the attack algorithm **. So, if the model becomes more robust to short-range correlated perturbations, the following two things can happen (potentially both): (i) new perturbations become less effective and thus more random, in which case the decay-length of the covariance function becomes even shorter. Or (ii) the attack will adapt to perturb the long-range (low-frequency) content of the signal, if it is powerful enough. Assessing the covariance function change therefore seems rather non-trivial, as one would need to separate the effect of model robustness from attack algorithm adaptivity/non-adaptivity. 
We did not observe meaningful changes of the covariance structure in our experiments, which, given the above points, is however not a negative result.\\n\\n5) White-box and transfer attack accuracy results\", \"the_deepfool_attack_is_unconstrained\": \"if it is run for sufficiently many iterations, it should always reduce the accuracy of the classifier to below chance. This is why the Fool column in Table 1 reports the magnitudes of the perturbations required to cross the decision boundary (normalized by the magnitude of the unperturbed data point), according to Equation 2 (or its empirical counterpart in Equation 15) in [Moosavi-Dezfooli et al, DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2016].\\n\\nReporting area under the attack curve serves two purposes. Firstly, it addresses the potential danger of overfitting to a specific attack epsilon. Secondly, it mimics the realistic scenario in which the attacker tries to fool the classifier with as small a perturbation as possible. That said, we believe that an even more realistic performance measure would give less weight to larger perturbations that are easier to detect and give relatively more weight to smaller ones that are harder to detect. (Note, the numbers we report give equal weight to different perturbation strengths.)\"}",
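The running-average covariance from point 2 and the reviewer's Hutchinson-style suggestion from point 1 can both be illustrated with a short NumPy sketch. This is a hypothetical reconstruction under our own naming, assuming the EMA form cov ← (1−β)·cov + β·batch_cov with perturbations stored row-wise; it is not taken from the paper's Algorithm 1.

```python
import numpy as np

def update_cov(cov_running, perturbations, beta=0.1):
    """EMA over batch covariances of perturbations (rows): beta weights
    the current batch (beta -> 1), 1 - beta weights the history (beta -> 0),
    matching the trade-off described in the reply above."""
    batch_cov = perturbations.T @ perturbations / len(perturbations)
    return (1.0 - beta) * cov_running + beta * batch_cov

def mahalanobis_mc(grad_x, cov, n_samples=20000, seed=0):
    """Monte-Carlo / Hutchinson-style estimate of g^T Sigma g, using
    E[(xi^T g)^2] = g^T Sigma g for xi ~ N(0, Sigma)."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(cov)
    xi = rng.standard_normal((n_samples, len(grad_x))) @ chol.T
    return float(np.mean((xi @ grad_x) ** 2))
```

With beta=1 the running covariance reduces to the current batch covariance, which is the case in which the reviewer expects the trace-estimation view and the exact Mahalanobis gradient norm to coincide.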
"{\"title\": \"Detailed reply, highlighting our main contributions (part 2)\", \"comment\": \"- Centered vs. uncentered corruption model -\\nIndeed, centered vs. uncentered distribution of perturbations refers to whether E_Q[\\\\xi] is zero or not, as stated in Equation 12. We empirically observed that the mean adversarial perturbation is very close to zero, which is why we used the centered SGR regularizer in Equation 11 in all our experiments.\\n\\n- Figure 5: Long-range structured covariance matrices for increasing decay lengths -\\nThe covariance matrices in Figure 5 were generated according to the intra-channel and inter-channel covariance functions discussed in Section 4.3. The periodic patterns are in part a result of the fact that the 2D image is first flattened into a 1D vector in order to plot the covariance matrix. Visual patterns then emerge because correlations for pixels at opposite ends of two neighboring rows are plotted next to each other due to the flattening.\\n\\n- Long-range correlated attack -\\nThe LRC attack is indeed a sampling-based natural prototype for low-frequency perturbations. See also our reply titled \\\"The purpose of the LRC attack experiment is to establish whether there is a potential benefit in using a structured covariance matrix in the SGR regularizer.\\\"\"}",
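The two ingredients discussed in this reply can be sketched under our own simplifying assumptions: a 1-D exponential covariance function cov(d) = exp(−d/ℓ), so the value drops to 1/e at displacement ℓ (matching the decay-length definition given elsewhere in the thread), and an LRC-style perturbation drawn by pure sampling with no optimization, as the authors state. Function names and the L_inf rescaling are ours.

```python
import numpy as np

def build_cov(n_pixels, decay_length):
    """Synthetic 1-D covariance cov(d) = exp(-d / decay_length).
    As decay_length -> 0 this approaches the identity matrix."""
    d = np.abs(np.subtract.outer(np.arange(n_pixels), np.arange(n_pixels)))
    return np.exp(-d / max(decay_length, 1e-12))

def sample_lrc(cov, eps, seed=0):
    """Draw one long-range-correlated perturbation xi ~ N(0, Sigma),
    rescaled to L_inf magnitude eps. Pure sampling, no optimization."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(cov + 1e-9 * np.eye(len(cov)))
    xi = chol @ rng.standard_normal(len(cov))
    return eps * xi / np.max(np.abs(xi))
```

A 2-D image version would additionally need the flattening step the reply mentions, which is what produces the periodic block patterns in the plotted matrices.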
"{\"title\": \"Detailed reply, highlighting our main contributions (part 1)\", \"comment\": \"We would like to thank the reviewer for his/her valuable feedback.\\n\\n- PGD attack iterations -\\nRegarding PGD iterations, we would like to quote [Madry A. et al. Towards deep learning models resistant to adversarial attacks, 2017.], who reported whitebox attack accuracies for PGD with ** 7 iterations ** (see Table 2): \\u201cFor the CIFAR10 dataset, [...] we trained the network against a PGD adversary with l_infty projected gradient descent again, this time using 7 steps of size 2, and a total \\u03b5 = 8.\\u201d That said, we don\\u2019t think that more iterations and random restarts would change the qualitative picture of our evaluations.\\n\\n- Attack accuracy vs area under the attack curve -\", \"reporting_area_under_the_attack_curve_serves_two_purposes\": \"Firstly, it addresses the potential danger of overfitting to a specific attack epsilon. Secondly, it mimics the realistic scenario in which the attacker tries to fool the classifier with as small a perturbation as possible. That said, we believe that an even more realistic performance measure would give less weight to larger perturbations that are easier to detect and give relatively more weight to smaller ones that are harder to detect.\\n\\n- Robustness under the strongest whitebox attack should be the benchmark -\\nWe disagree with this statement for two reasons. Firstly, without reference to an attack \\u201cbudget\\u201d, more precisely a (distributional) uncertainty set as well as an upper bound on computational resources to search for worst case perturbations, the notion of \\u201cstrongest\\u201d is ill-defined. Even if we agree on a computational budget, the question remains of how to define or measure the strength of perturbations - norm-based, perceptually similar, etc. 
Secondly, robustness comes at a price: rather than aiming for robustness against the strongest attack, we believe that one should aim for an optimal trade-off between robustness and clean accuracy. In that sense, it is debatable whether training methods that considerably reduce clean accuracy even deserve to be called robust. It is worth noting that this latter point has long been understood in the statistics community, see for instance P.J. Huber\\u2019s book on Robust Statistics.\\n\\n- Adversarial robustness via integrating over perturbations -\\nWe propose to efficiently approximate expectations over corrupted distributions through structure-informed regularization, as outlined in Section 2.2 (see Equation 3) and Section 3. Conceptually, our starting point is to learn a corruption model, i.e. to use a generative mechanism to learn adversarial perturbations from examples. ** Integrating over these corruptions is not the same as integrating over the neighborhood, however **. The intuition is that if the model is robust against the entire distribution of perturbations, it should also be robust against point-wise perturbations (from which the corruption model was learned). In practice, we propose to approximate such a corruption model by adaptively learning the structure of adversarial perturbations.\\n\\nOne of the main contributions of our work is to ** derive structured gradient regularization ** as a tractable approximation to training with correlated perturbations. This is in line with a large body of work on the equivalence between regularization and robust optimization. See our reply titled \\\"Our work is a strict generalization of previous work and regularization was proven to be equivalent to robust optimization in certain settings.\\\" for a list of references.\\n\\n- Bayes optimal classifier -\\nThe underlying assumption is that Eq. (10) and Eq. (5) coincide to order O(||\\\\xi ||^3) at the optimum, which is within the precision to which we truncate. 
This assumption is rather common in the literature, see e.g. [Bishop. Training with noise is equivalent to Tikhonov regularization, 1995] or [An, G. The Effects of Adding Noise During Backpropagation Training on a Generalization Performance. 1996]. Alternatively, Eq. (10) can also be seen as a Levenberg-Marquardt approximation of Eq. (5), if one does not want to invoke the Bayes optimality argument, see Section 5.4.1 in Bishop\\u2019s Pattern Recognition and Machine Learning book.\\n\\n- Covariance structure too coarse as a measure of attack power? -\\nWhether or not covariance structure is a good measure to distinguish different attacks depends on the entirety of attacks under consideration. It could be that both PGD and FGSM are members of the same \\u201cuniversality class\\u201d of adversarial attacks. After all, if we compare those two attacks with the entirety of all imaginable attacks, they are probably rather similar compared to other, e.g. gradient-free attacks. Nevertheless, we agree that it would be interesting to further explore the connection between covariance structure and attack power.\"}",
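One simple way to make the "universality class" question quantitative would be to compare the covariance matrices of two attacks directly. The measure below (cosine similarity between the flattened matrices) is purely our suggestion for illustration, not something proposed in the paper or the reviews.

```python
import numpy as np

def cov_similarity(cov_a, cov_b):
    """Cosine similarity between flattened covariance matrices: 1.0 for
    identical structure, smaller values for dissimilar structure."""
    a, b = cov_a.ravel(), cov_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For example, a diagonal (uncorrelated) covariance compared against an all-ones (fully correlated) one of size n scores 1/sqrt(n), so a high score between the PGD and FGSM covariances would quantify the "almost identical structure" visible in Figure 1.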
"{\"title\": \"Trying to address an important problem, but approach/results are not convincing\", \"review\": \"The authors propose a new defense against adversarial examples that relies on a data-dependent regularization (instead of adversarial training). They then benchmark the performance of this new defense against popular white-box and transfer attacks, as well as propose a new long range correlated adversarial attack.\", \"comments\": \"I find the premise of this paper interesting - developing regularization strategies to help with generalization to adversarial perturbations. For instance, it is well known that state-of-the-art defenses such as PGD have generalization gaps as large as 50% between robust train and test accuracies. It has also been previously hypothesized that this could be due to a data scarcity problem [Schmidt et al., 2018]. The authors here propose to tackle this problem using a new data-dependent regularization technique. \\n\\nMy primary issue with this paper is that the authors do not clearly illustrate what the advantage of their method over standard methods is\\n- The problem this paper aims to solve is overfitting to a specific attack/virtual adversarial examples presented during adversarial training by using regularization instead. However, the authors do not actually illustrate that their technique reduces overfitting. For instance, the authors do not contrast the robust train-test accuracies using their method to other standard methods. Thus it is not clear that this paper met the objectives laid out in the introduction. \\n- The claim in this paper is that SGR helps against attacks with long range dependencies. However, in their experiments (e.g., in Figure 3), the authors do not evaluate other standard defenses. It is thus unclear whether other standard methods are already robust to such attacks. 
In fact, based on the results of Table 1, it doesn\\u2019t seem like attacks from SGR are able to reduce the robustness of PGD/FGSM trained models.\\n\\nBecause of these two points, along with the lower robustness to various attacks (in Table 1) as compared to approaches such as PGD, it is not really clear to me what the real merit of this new approach is. Ultimately, having a defense which is more robust to a particular attack is not very meaningful if there exists an alternative attack that reduces the robustness of the defense.\\n\\nI am also surprised that the authors chose to use this regularization as an alternative to adversarial training instead of complementary to it. I would be interested to see if such regularization could actually help to bridge the generalization gap observed while using adversarial training.\\n\\nThe paper is at times poorly written and confusing. For instance, the description of CovFun is hard to parse. The authors should make this explanation clearer. The authors also do not state what their attack model is - Linf vs L2 perturbations. They also choose to evaluate attacks differently, using an average accuracy over different epsilons rather than reporting individual accuracies. This does make the results harder to compare to other work. The authors should include a full table of individual accuracies (at least in the appendix) to make the numbers easier to parse and compare.\\n\\nIn the derivation in Section 3.1, the authors use the assumption that the robust classifier is almost equal to the Bayes optimal classifier to justify dropping terms corresponding to the Hessian(\\\\phi_y). 
I am not sure how realistic this assumption is in the adversarial setting - one can construct simple distributions for which the Bayes optimal classifier is not the robust classifier.\\n\\nWith regards to Figure 3, the authors state -\\n\\u201cAs the decay length goes to zero, the synthetic covariance matrix converges to the identity matrix and SGR performance approaches GN performance\\u201d \\nCould the authors clarify why this is obvious? After all these two models are trained very differently.\\n\\nThe plot in Figure 3 and the results in Table 1 seems to illustrate that SGR is no better than GN as you can find an attack where they perform as well/badly. The authors say that this is due to the short-range nature of current attacks. I do not understand this rationale though - the goal of the defenses should be to be more robust to all attacks, both short range and long range. Thus arguing that there may be an attack under which their model performs better is not sufficient. I do agree that finding long range attacks that can break current SOTA robust models would be interesting, however the authors do not seem to achieve that in this work.\\n\\nI find the observation on transfer attacks interesting - PGD attacks from SGR/GN models are better than PGD models. Do the authors have any insight as to why this is the case?\\n\\nIn general, my concern about gradient regularization based defenses is that they only give a very local picture of the landscape and thus can only protect against small eps attacks. This could probably explain why the SGR/GN models are less robust than PGD. As mentioned previously, it would be valuable to see accuracies against individual eps values (rather than averaged) to understand this better. If this is the case, this regularization would not provide any additional benefits when combined with adversarial training either.\", \"references\": \"Schmidt, Ludwig, et al. 
\\\"Adversarially Robust Generalization Requires More Data.\\\" arXiv preprint arXiv:1804.11285 (2018).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some interesting ideas but unconvincing empirical evaluation\", \"review\": \"Short paper summary: This work proposes a novel method of gradient regularization (SGR) which utilizes the covariance structure of adversarial examples generated during training. The authors propose simple techniques to reduce the computational overhead of SGR. Empirically, the authors compare their method to standard adversarial training and gradient norm regularization.\", \"brief_review_summary\": \"There are some interesting ideas in this work but I feel that some practical aspects lack formal justification and the comparison to existing work is inconclusive.\", \"detailed_comments\": \"In addition to some minor comments, I have two concerns. First, with the SGR algorithm itself. And second with the empirical analysis. While I suspect that the first concern may be clarified with discussion I think that the second is more serious and is the primary factor behind my review score.\\n\\n1) As the SGR algorithm is written I wonder whether the regularization term may be computed more efficiently using something like a Hutchinson trace estimation trick. I suspect that if the random vector used to estimate the trace was the xi from Algorithm 1 then the same Mahalanobis gradient norm would be recovered. This would hold only in the case beta=1, bringing me to my second point.\\n\\n2) What is the purpose of the running average of the covariance? A relatively small beta value is used in practice but I do not see any strong justification for this. Is there a good reason why we do not want the covariance matrix to be a close approximation for the local gradient landscape? This seems like an important part of the algorithm, especially as it may shed light on my next note.\\n\\n3) In practice, Algorithm 1 uses adversarial attack schemes to generate the perturbations. 
In simple cases like FGM, this would give the covariance of the input-output gradient, which seems like it would have a direct interpretation as a form of classical gradient regularization. To this extent, I also wonder how the SGR algorithm could be related to interpretations of adversarial training as gradient smoothing (when using small perturbations).\\n\\nI recognize that the above points are (so far as I could tell) not directly addressed in the work, and some may be fairly considered out of scope. However, due to the direct comparison to adversarial training later and the need to tie SGR to adversarial attacks I feel that it would be important to distinguish these cases.\\n\\nOverall, I felt that the first three sections did well to introduce the motivation and techniques used and were easy to follow. The derivation of the SGR algorithm was clear and concise but I believe that some of the practical details (covariance running average, computational efficiency [at first glance, it looks like the full Jacobian must be computed, but practically the sum over K reduces this to a single backprop call]) could have been elaborated on.\\n\\nFor the empirical evaluation the authors provided ample detail on the experimental set up and have performed a fairly thorough investigation in terms of existing defenses and attacks. I felt that the bulk of the study which is contained in Table 1 is fairly inconclusive or at the very least, difficult to interpret completely. Additional comments:\\n\\n4) I felt that Figures 1 and 2 are a little difficult to interpret at first. It would help to clearly define what is meant by short- and long-range signal corruptions. However, they do suggest some interesting findings. As these covariance matrices depend directly on the model itself, I think it is worth investigating (or commenting on) how this structure may change when introducing things like SGR (or GN). 
The authors claim that unregularized classifiers give too much weight to short range correlations but they should show that SGR (or other methods) correct this.\\n\\n5) My biggest concern with this work is with the results presented in Table 1. In terms of how they are presented: first I think that the fool column requires further explanation, or perhaps more simply the column could show accuracy instead of the average perturbation size. Second, I am not sure why the reported accuracies are averaged over attack strengths in a range. So far as I am aware, this is not standard and makes it difficult to interpret the performance of the models in this way. Figure 4 in the appendix does a better job of describing the behavior over a range of attack strengths.\\n\\n6) From the table, it is not obvious to me that SGR provides any improvements to robustness over existing techniques. Indeed, the authors write that SGR achieves white-box accuracies which are between those of the clean and adversarially trained models and claim that SGR improves on the clean accuracy for CIFAR-10. But in the table the gap between FGSM and GN/SGR clean accuracies seems fairly small with FGSM providing better robustness (for most source attacks). Even more concerning is the fact that GN seems to outperform SGR. I do not find these results substantial enough to motivate SGR as a robustness defense compared with adversarial training (or even GN), especially as SGR has the same computational limitations involved with expensive adversarial perturbations.\\n\\n\\nI felt that the study into the covariance structure of adversarial perturbations was interesting but as it stands was not complete enough to be informative in general. In the conclusion the authors write that they provide evidence that current adversarial attacks act by perturbing the short-range correlations of signals but this has only been confirmed for unregularized classifiers. 
Despite these issues, I thought that the paper was well written and hope that the empirical study can be improved and clarified.\", \"minor_comments\": [\"Section 2.1, set of transformations only introduced briefly then forgotten. Leaving output invariant confused me, as this does not apply to adversarial examples.\", \"Section 2.3, second paragraph l3: In Maaten et al. should be citet.\", \"Section 3.1, should make clear that derivative is with respect to the data.\", \"Section 3.1, define delta as the Hessian clearly (it is used for the simplex in the previous section). Though this is easy to figure out.\", \"Section 7.1, starts with (iii), is this intentional? Perhaps an introductory sentence could make this clearer.\", \"Section 7.3, for label leaking, I'm not convinced by this argument alone. Assuming the covariance structure is still computed from a particular adversarial example, I see no compelling reason that this would not occur.\"], \"clarity\": \"The paper is very clearly written and is easy to follow.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"simple and reasonable idea, somewhat unconvincing theoretical analysis, weak experiments\", \"review\": \"Summary of the paper:\\nThis paper proposes to use structured gradient regularization to increase adversarial robustness of neural networks. Here, the gradient regularization is to regularize some norm of the gradients on neural network input. \\\"structured\\\" means that instead of just minimizing the L2 norm of the gradients, a \\\"mahalanobis norm\\\" is minimized. The covariance matrix is updated continuously to track the \\\"structure\\\" of gradients/perturbations. Whitebox and blackbox attacks are evaluated.\\n\\nThe paper is well written, both theory and experiments are well explained. The analysis of the LRC attack on SGR trained models is interesting.\\n\\nHowever, I believe the paper has major flaws in several aspects.\\n\\nThe whitebox robustness evaluation is weak. Whitebox PGD with 10 iterations is not enough for discovering true robustness of a neural network, which makes the experiments unconvincing. PGD with 100 iterations and 50 random starts would make the evaluation much more convincing w.r.t. whitebox attacks. https://github.com/MadryLab/mnist_challenge\\nI noticed that in Table 1, the authors reported averaged results across different epsilons. Although I see the motivation to give equal weights to small and large perturbations, it makes it hard to compare with previous papers. I think the authors should at least report commonly used eps in the literature, including MNIST eps=0.1, 0.2, 0.3 and CIFAR10 eps=8/255. Currently, the MNIST eps=32/255=0.125 is much below the standard eps for benchmarking MNIST.\\n\\nIn my opinion, when evaluating robust optimization / gradient regularization methods, robustness under the strongest whitebox attack should be the major benchmark. Because \\\"intrinsic\\\" robustness is their goal. In contrast, black-box results are less important. 
This is because 1) evaluating black-box robustness on a few attacks hardly gives any conclusive statements; 2) if we're pursuing black-box robustness, there are many randomization methods that boost black-box robustness under various settings, and how a gradient regularization method helps on top of those should at least be evaluated.\\nSo if the paper wants to claim black-box robustness, it needs to at least include experiments like 2), so it provides useful benchmarks to practitioners.\\n\\nThere are also a few problems in the motivation / analysis. \\n\\\"\\\"\\\"A remedy to these problems is through the use of regularization. The basic idea is simple: instead of sampling virtual examples, one tries to calculate the corresponding integrals in closed form, at least under reasonable approximations.\\\"\\\"\\\"\\nThe adversarial robustness problem is not about the integral over a neighborhood; it is about the maximum loss over a neighborhood. This is likely why previous attempts at gradient regularization and adversarial training against the FGSM attack fail, and the success of PGD training is largely due to the fact that the loss is minimized over the adversarial example that gives the maximum loss.\\n\\n\\\"\\\"\\\"Thus, under the assumption that \\\\phi \\\\approx \\\\phi^* and of small perturbations (such that we can ignore higher order terms.\\\"\\\"\\\"\\nThe Bayes-optimal assumption seems arbitrary to me. If \\\\phi is nearly Bayes-optimal, why would we worry about adversarial examples?\\n\\n\\n\\nOther relatively minor problems:\\n\\nIn the caption of Figure 1, \\\"\\\"\\\"Covariance matrices of PGD, FGSM and DeepFool perturbations as well as CIFAR10 training set (for comparison). The short-range structure of the perturbations is clearly visible. It is also apparent that the first two attack methods yield perturbations with almost identical covariance structure.\\\"\\\"\\\"\\nPGD and FGSM have very different attack power. 
If they are similar by any measure, wouldn't that mean the measure (covariance structure) is too coarse?\\n\\nIn Section 3.1, the paper talks about both centered and uncentered adversarial examples.\\nI assume the authors mean that the distribution of perturbations is centered?\\nFirst, I think the authors should make this more explicit.\\nSecond, I think it is not realistic to assume the perturbations to be centered, because for image data, the epsilon-ball usually intersects with the data domain boundary. So I'm wondering, in the experiments, which version was used: centered or uncentered?\\n\\nFigure 5 shows periodic patterns on covariance matrices. I didn't find an explanation of the periodic patterns in the covariance matrices. It would be nice if the authors could explain it or point me to the relevant sections in the paper.\\n\\nI don't fully get the idea of the LRC attack. Is it purely sampling, or is there optimization involved?\\n\\nFigure 3: I suggest the authors show perturbations with different decay lengths on the same original images, which would make it easier to compare.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
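The review above contrasts plain gradient-norm (GN) regularization g^T g with the structured (Mahalanobis) penalty g^T Sigma g. A minimal self-contained sketch of the two penalties — the toy loss, the finite-difference gradient helper, and the example covariance are all illustrative, not taken from the paper:

```python
def input_gradient(loss, x, eps=1e-6):
    """Central finite-difference gradient of `loss` w.r.t. the input vector x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((loss(xp) - loss(xm)) / (2 * eps))
    return g

def gn_penalty(g):
    """Plain L2 gradient-norm regularizer: g^T g."""
    return sum(gi * gi for gi in g)

def sgr_penalty(g, sigma):
    """Structured (Mahalanobis) regularizer: g^T Sigma g for a covariance Sigma."""
    n = len(g)
    return sum(g[i] * sigma[i][j] * g[j] for i in range(n) for j in range(n))

# Toy loss with known gradient (2x) evaluated at x = (1, 2), so g ~ (2, 4).
loss = lambda x: x[0] ** 2 + x[1] ** 2
g = input_gradient(loss, [1.0, 2.0])
gn = gn_penalty(g)                                # 4 + 16 = 20
sgr = sgr_penalty(g, [[1.0, 0.5], [0.5, 1.0]])    # 20 + 2 * 0.5 * 2 * 4 = 28
```

With the identity covariance, `sgr_penalty` reduces exactly to `gn_penalty`, which matches the rebuttal's claim that SGR strictly generalizes gradient-norm regularization.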
"{\"title\": \"Our work is a strict generalization of previous work and regularization was proven to be equivalent to robust optimization in certain settings.\", \"comment\": \"- How is this significantly different from previous defenses based on gradient regularization?\\n\\nTo the best of our knowledge, gradient regularization as a method to improve adversarial robustness has been studied in two other concurrent works, both of them are cited [A.S. Ross & F. Doshi-Velez, \\u201cImproving Adversarial Robustness and Interpretability of DNNs by Regularizing Input Gradients\\u201d, 2017] and [C.J. Simon-Gabriel et al. \\u201cAdversarial Vulnerability of Neural Networks Increases With Input Dimension\\u201d, 2018].\\n\\nFirstly, our work goes a lot further in terms of theoretical justification for gradient regularization than both of these: we follow a principled approach to derive structured gradient regularization as a tractable approximation to training with correlated perturbations.\\n\\nSecondly, our structured gradient regularizer (SGR) is a strict generalization of gradient norm (GN) regularization: while GN provides an approximation to training with white noise, SGR provides an approximation to training with arbitrarily correlated noise. Moreover, regularization has been shown to be equivalent to robust optimization in certain settings, see below.\\n\\n\\n- Why would gradient regularization w.r.t. Mahalanobis distance (i.e. SGR) be any better than gradient regularization w.r.t. L2 norm?\\n\\nFirstly, gradient norm regularization based on L2 norm assumes isotropic white-noise, whereas Mahalanobis-distance based SGR operates with arbitrarily correlated noise.\\n\\nSecondly, SGR can leverage the fact that adversarial examples might live in low-dimensional subspaces. 
Quoting from [Moosavi-Dezfooli et al, \\u201cUniversal adversarial perturbations\\u201d, 2017]: \\u201cWe hypothesize that the existence of universal perturbations fooling most natural images is partly due to the existence of such a low-dimensional sub-space that captures the correlations among different regions of the decision boundary.\\u201d SGR can leverage this by penalizing gradients that lie within such a subspace more strongly than gradients that lie outside it.\\n\\n\\n- Gradient-norm regularization has been tried many times and does not work as a defense against adversarial examples.\\n\\nFirst of all, could you please provide references to papers where gradient regularization was the main method of defense (i.e. where it was not just used as a baseline) and was shown not to work?\\n\\nSecondly, this statement is unqualified: to be precise, you need to (i) state how you measure performance, i.e. how you define whether some method \\u201cworks\\u201d and (ii) what kind of threat model you assume, i.e. what kind of \\u201cadversarial examples\\u201d you want to robustify against. E.g. gradient-based or gradient-free, white-box or transfer/black-box attacks, whether the perturbations are constrained in magnitude or whether they are constrained by the counting-norm etc. 
In your statement you seem to make specific assumptions, which is why it is not true in the generality in which it was formulated.\\n\\nFor instance, if we take transfer attack accuracies as our measure of robustness and PGD transfer attacks as the threat-model, corresponding to the bold-face numbers in Table 1, then SGR and GN are statistically on par with PGD and FGSM trained models on CIFAR10, compare the bold-face numbers in each row.\\n\\n\\n- Gradient regularization does not work because it is based on derivatives and thus is designed to resist only infinitesimal perturbations.\\n\\nThere is a large body of work on the equivalence of regularization and robust optimization (adversarial training is a special case of robust optimization against a pointwise adversary that independently perturbs each example):\\n\\n[Bertsimas and Copenhaver, \\u201cCharacterization of the equivalence of robustification and regularization in linear and matrix regression\\u201d 2018], showed that in linear regression robust optimization for matrix-norm uncertainty sets and regularization are exactly equivalent. There is also a variety of settings for robust optimization under more general uncertainty sets in which regularization provides upper and lower bounds. See also [El Ghaoui and Lebret, \\u201cRobust solutions to least-squares problems with uncertain data\\u201d 1997].\\n\\n[Xu et al., \\u201cRobustness and regularization of support vector machines\\u201c 2009] established equivalence of robust optimization and regularization for Support Vector Machines.\\n\\nMore recently, [Gao et al., \\u201cWasserstein distributional robustness and regularization in statistical learning\\u201d 2017] showed that Wasserstein-distance based distributionally robust stochastic optimization (Wasserstein-DRSO) is first order equivalent to gradient regularization.\\n\\nThese works clearly contradict your statement that gradient regularization does not work.\"}",
"{\"title\": \"Please read the main text, we give a very precise explanation about what we report and we establish fair comparisons to the best of our abilities.\", \"comment\": \"Firstly, we would like to emphasize that the text is very precise about what the numbers reported in the tables represent. As stated in the first paragraph of Section 4.4 as well as in the caption of Table 1, we report white-box and transfer attack accuracies averaged over attack strengths in the range [0, 32] for MNIST and [0, 8] for CIFAR10.\\n\\nSecondly, we establish a fair comparison between regularized models and adversarially trained ones, in that we train each architecture with various different training methods, including the PGD-augmented training suggested in Madry et al. ***. In fact, for PGD and FGSM adversarial training, we trained models with each integer epsilon in the range [0, 32] for MNIST and [0, 8] for CIFAR10 and report results for the best performing one. The hyperparameters of the best performing models are reported in Section 7.2 in the Appendix. \\n\\nTo the best of our knowledge, Madry et al. used different architectures and possibly different data preprocessing and data augmentation schemes. \\n\\nAs a side note, we believe that an even more realistic performance measure should give less weight to larger perturbations, which are easier to detect, and relatively more weight to smaller ones, which are harder to detect. The averaged attack accuracies we report give equal weight to different perturbation strengths.\\n\\n\\n***: We assume that by \\u201cMadry et al. 2017\\u201d you meant [Madry et al., \\u201cTowards Deep Learning Models Resistant to Adversarial Attacks\\u201d 2017]\"}",
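The numbers debated in this exchange are accuracies averaged over a range of attack strengths rather than at a single epsilon. A hypothetical sketch of that aggregation (the accuracy curve below is made up for illustration):

```python
def averaged_attack_accuracy(acc_by_eps):
    """Mean accuracy over a grid of integer epsilons, i.e. a normalized
    area under the accuracy-vs-epsilon attack curve."""
    return sum(acc_by_eps) / len(acc_by_eps)

# Illustrative accuracy curve for eps = 0..4 (hypothetical numbers, not
# results from the paper).
curve = [0.99, 0.90, 0.70, 0.45, 0.20]

auc = averaged_attack_accuracy(curve)   # gives equal weight to every eps
acc_at_max_eps = curve[-1]              # the single largest-eps number that
                                        # e.g. Madry et al. style reporting uses
```

This makes concrete why the two reporting conventions are hard to compare: the averaged number is dominated by the easy small-epsilon regime, while the single-epsilon number reflects only the hardest setting.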
"{\"title\": \"The purpose of the LRC attack experiment is to establish whether there is a potential benefit in using a structured covariance matrix in the SGR regularizer.\", \"comment\": \"Thank you for raising this question for which we believe we need to reiterate several points.\\n\\nFirst of all, we did not design the LRC attack with the purpose to be easy to beat. If you read our submission carefully, you will notice that the purpose of the LRC attack experiment is to establish whether there is a potential benefit in using a structured covariance matrix in the SGR regularizer versus using an \\u201cunstructured\\u201d diagonal covariance (corresponding to gradient-norm regularization) in the presence of long-range correlated noise. In other words, this experiment simply tests whether the SGR regularizer extracts useful information about the long-range correlation structure of the perturbations, which it indeed does.\\n\\nSecondly, we do not claim that LRC is stronger than other pre-existing attacks. In your criticism, you seem to imply that we claimed that structured gradient regularization defends against pre-existing attacks because it performs well against long-range correlated perturbations, but we did not say that in our submission. Such a claim could be made if one showed that a new attack is stronger than existing ones and that a new defense protects against this new attack. We do not claim that however. Instead, and to the best of our ability, we transparently evaluate regularized and adversarially trained models against pre-existing white-box and transfer attacks in Section 4.4. \\n\\nThe LRC attack is nothing but a natural prototype for low frequency perturbations, as opposed to existing attacks which we have shown to mainly corrupt the short range (high frequency) structure of signals. As stated in the conclusion, devising further (e.g. gradient-based) low frequency attacks is an interesting direction of future research.\"}",
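The LRC experiment above relies on sampling long-range correlated perturbations. A toy sketch of how such noise can be drawn from an exponential-decay covariance via a Cholesky factor — the covariance form and decay length here are illustrative assumptions, not the paper's exact construction:

```python
import math
import random

def exp_decay_cov(n, decay):
    """Toeplitz covariance with long-range structure: Sigma_ij = exp(-|i-j|/decay)."""
    return [[math.exp(-abs(i - j) / decay) for j in range(n)] for i in range(n)]

def cholesky(a):
    """Lower-triangular L with L L^T = a (a must be positive definite)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

def sample_correlated_noise(L, rng):
    """Draw z ~ N(0, I) and return L z ~ N(0, Sigma)."""
    z = [rng.gauss(0.0, 1.0) for _ in range(len(L))]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(L))]

sigma = exp_decay_cov(4, decay=2.0)
L = cholesky(sigma)
noise = sample_correlated_noise(L, random.Random(0))
```

Larger `decay` values correlate more distant coordinates, which is the "long-range" structure the LRC attack probes and which a diagonal (gradient-norm) regularizer cannot represent.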
"{\"title\": \"The Fool column reports the noise-to-signal ratio of the DeepFool attack. Those numbers are not accuracies.\", \"comment\": [\"The right way to read adversarial vulnerability tables is to take the min accuracy across different attacks\", \"We totally agree with this statement. We report various different attacks so that the reader can draw his or her own conclusions.\", \"In table 1 it looks like the \\\"fool\\\" attack is able to completely break the proposed defense, resulting in < 1% accuracy.\", \"The numbers reported in the Fool column are not accuracies (which is why we did not use % sign but reported decimal numbers). If you read our paper carefully, the Fool column reports the noise-to-signal ratio of the DeepFool attack computed according to Eq.(2) in [Moosavi Dezfooli et al., \\u201cDeepfool: a simple and accurate method to fool deep neural networks.\\u201d 2016], as stated in the Experimental Setup Section 4.1.\", \"We have checked our implementations and run many sanity-checks and we do believe they are correct. We do not see any evidence pointing to the contrary in our results.\"]}",
"{\"title\": \"SGR and GN trained models achieve statistically indistinguishable attack accuracies (within one standard deviation of each other).\", \"comment\": \"- The \\\"fool\\\" column is strange, and if we can trust it, then all the defenses are shown to be completely broken.\\n\\nSee [Moosavi Dezfooli et al., \\u201cDeepfool: a simple and accurate method to fool deep neural networks.\\u201d 2016] on how to interpret those numbers correctly.\\n\\n- Worse than the baseline of just doing gradient regularization with no Mahalanobis distance?\\n\\nThe PGD white-box attack accuracies reported for SGR and GN trained models are within one standard deviation of each other, which is $\\\\sigma = 0.5$ (computed over 10 runs). What we can conclude from this table is that SGR and GN trained models achieve statistically indistinguishable accuracies for these particular results.\\n\\nAs stated in Section 4.4, SGR/GN trained models achieve white-box attack accuracies that are intermediate between those of the clean model and adversarially trained models. Note, however, that we do not equate \\u201crobustness\\u201d with \\u201cwhite-box attack accuracy\\u201d. If we look at the transfer-attack accuracies (bold-face numbers), then SGR and GN trained models are statistically on par with adversarially trained models.\"}",
"{\"comment\": \"As mentioned in another comment, the \\\"fool\\\" column is strange, and if we can trust it, then all the defenses are shown to be completely broken.\\n\\nIf we ignore the fool column and just look at the other columns that seem more believable, then how does this model look on CIFAR-10?\\nThe strongest attack against it is PGD, which results in an accuracy AUC of 41.5.\\nThis is worse than the baseline of just doing gradient regularization with no Mahalanobis distance, which has a worst-case accuracy AUC of 41.9.\\nIt is also worse than either of the two defenses based on adversarial training (which get 55 and 62 AUC).\\n\\nWe also see more or less the same thing on MNIST. Here the strongest attack against the proposed SGR defense is T-PGD, which gets 96.5 AUC. Traditional gradient regularization actually ties it, also with 96.5 AUC, just with a different attack causing the worst case performance. Both of the defenses based on adversarial training perform strictly better.\", \"title\": \"Table 1 shows the defense is worse than the baseline\"}",
"{\"comment\": \"The right way to read adversarial vulnerability tables is to take the min accuracy across different attacks: it doesn't matter if your defense is good at beating a lot of attack algorithms; if there is one attack that performs well then an attacker will use that.\\n\\nIn table 1 it looks like the \\\"fool\\\" attack is able to completely break the proposed defense, resulting in < 1% accuracy.\\n\\nHowever, there are some other things that are weird. For example, the \\\"fool\\\" column also reports < 1% accuracy for a PGD-trained model. DeepFool is not previously known to break PGD-trained models, so this either indicates an interesting research finding, or a bug in your accuracy calculations, or a bug in your PGD-trained model.\", \"title\": \"What is the \\\"fool\\\" column of table 1?\"}",
"{\"comment\": \"If I understand section 4.3 correctly, you introduce the LRC attack because you expect that your proposed defense will be able to beat it. This is not the way that you should evaluate new defense papers. New defenses should perform well against pre-existing attacks. Papers on new defenses sometimes need to introduce new attacks, but these should be new attacks that are *hard* for the defense to beat, not attacks that are designed to be *easy* for the defense to beat. For example, a new defense based on non-differentiable operations might perform poorly against pre-existing gradient-based attacks, so to evaluate it properly it is necessary to introduce new gradient-free attacks.\", \"title\": \"The motivation for introducing the long-range correlated noise attack seems backward\"}",
"{\"comment\": \"Table 1 apparently shows areas under attack curves for varying epsilon (\\\"The white-box and transfer attack accuracies are averaged over attack strengths in the range \u03b5 \u2208 [0, 32] for MNIST and \u03b5 \u2208 [0, 8] for CIFAR10, i.e. the reported accuracies represent the integrated area under the attack curve.\\\"). This makes it hard to compare to previous work such as Madry et al. 2017, who report attack success rate for the largest value of epsilon. Does the paper report the attack success rate for epsilon=8 specifically?\", \"title\": \"Do you report attack success rate for a specific epsilon?\"}",
"{\"comment\": \"Regularizing the norm of the gradient of the output log probability with respect to the input has been tried many times and does not work as a defense against adversarial examples.\\n\\nThis work essentially proposes to use a Mahalanobis norm ( g^T A g) rather than a squared L2 norm (g^T g) for the gradient penalty. Why would this be any better?\\n\\nGradient regularization does not work because it is based on derivatives and thus is designed to resist only infinitesimal perturbations. It cannot \\\"see\\\" the way that finite-sized perturbations cross relu boundaries and so on. Using a Mahalanobis norm rather than an L2 norm doesn't address this fundamental limitation of gradient regularization. All it does is penalize the gradient more in some directions than others.\\n\\nIf anything, using a Mahalanobis norm seems like it should create more opportunities for adversarial attacks to succeed in the directions that were downweighted.\", \"title\": \"How is this significantly different from previous broken defenses based on gradient regularization?\"}"
]
} |
|
rJNH6sAqY7 | On Computation and Generalization of Generative Adversarial Networks under Spectrum Control | [
"Haoming Jiang",
"Zhehui Chen",
"Minshuo Chen",
"Feng Liu",
"Dingding Wang",
"Tuo Zhao"
] | Generative Adversarial Networks (GANs), though powerful, are hard to train. Several recent works (Brock et al., 2016; Miyato et al., 2018) suggest that controlling the spectra of weight matrices in the discriminator can significantly improve the training of GANs. Motivated by their discovery, we propose a new framework for training GANs, which allows more flexible spectrum control (e.g., making the weight matrices of the discriminator have slow singular value decays). Specifically, we propose a new reparameterization approach for the weight matrices of the discriminator in GANs, which allows us to directly manipulate the spectra of the weight matrices through various regularizers and constraints, without intensively computing singular value decompositions. Theoretically, we further show that the spectrum control improves the generalization ability of GANs. Our experiments on CIFAR-10, STL-10, and ImageNet datasets confirm that compared to other competitors, our proposed method is capable of generating images with better or equal quality by utilizing spectral normalization and encouraging the slow singular value decay. | [
"gans",
"weight matrices",
"discriminator",
"computation",
"generalization",
"generative adversarial networks",
"spectrum control",
"spectra",
"powerful"
] | https://openreview.net/pdf?id=rJNH6sAqY7 | https://openreview.net/forum?id=rJNH6sAqY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BygGVm4HlE",
"r1lYoetby4",
"HylToq1YRm",
"Byx5-cyFRX",
"SJlu3tkKAX",
"SylcXqGF2Q",
"HyxiDSMOnX",
"HkledSLmnm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545057066222,
1543766177380,
1543203492625,
1543203329616,
1543203247756,
1541118497770,
1541051746770,
1540740456241
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper801/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper801/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper801/Authors"
],
[
"ICLR.cc/2019/Conference/Paper801/Authors"
],
[
"ICLR.cc/2019/Conference/Paper801/Authors"
],
[
"ICLR.cc/2019/Conference/Paper801/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper801/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper801/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"All the reviewers agree that the paper presents an interesting idea on regularizing the spectral norm of the weight matrices in GANs, and a generalization bound has been shown. The empirical results show that the regularization indeed improves the performance of GANs. Based on these points, the AC suggests acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}",
"{\"title\": \"I vote Accept\", \"comment\": \"I find the authors' responses to the reviews and draft update convincing, and I choose to hold my score as a \\\"strong 7.\\\" With R3 updating to an 8, I think this paper is worthy of acceptance.\", \"one_quick_note_to_the_authors\": \"The caption in Figure 1 has a typo with \\\"inspection score.\\\"\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your valuable comments. We have corrected the typos and grammatical mistakes in the revised version. In the following, we summarize your comments and our responses. Please also refer to the revised version for more details.\", \"comments\": \"Table 7 is confusing.\", \"response\": \"We make the following changes in Table 7 accordingly. We add captions and clarify the difference between spectral normalization with power iteration and with SVD reparameterization. We also explain why the spectrum distributes in a certain way for each regularizer in Appendix D.3. See more details in the Appendix of the revised version.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your valuable comments. We summarize our responses as follows; please refer to the revised version for more details.\\n\\n(1) We briefly mention the stability issue in the introduction (we have revised the first sentence of the abstract). The stability issue exists generally for training GANs and stems from solving for equilibria of the nonconvex-nonconcave min-max problem (see the second paragraph on page 2 of the revised version). Empirical success demonstrates that imposing regularizations, such as spectral normalization and gradient penalty, can ease such a stability issue. This motivates our proposed methodology of manipulating the spectrum of the weight matrix. \\n\\n(2) We make a clarification in Section 2.2.2 of the revised version on the idea of ``slow decay of singular values'' by comparing orthogonal regularization (no decay), spectral normalization under power iteration (slow decay), and spectral normalization with SVD (fast decay). We also highlight that using power iteration for spectral normalization encourages a slow singular value decay, in contrast to the fast decay yielded by standard spectral normalization.\\n\\n(3) We explain the detailed implementation of convolutional layers in the second paragraph of Section 4 in the revised version.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your valuable comments. We discuss the raised questions as follows.\\n\\n[1] attribute the empirical success of training SN-GAN to controlling the spectral norm while allowing flexibility. This perspective, however, is not very concrete. As we know, orthogonal regularization and spectral normalization with SVD can both control the spectral norm. Their empirical performance is actually worse than SN-GAN's. For example, on the STL-10 dataset, SN-GAN achieves an inception score of 8.83, while spectral normalization with SVD only achieves 8.69 and orthogonal regularization achieves 8.77. The reason behind this is that SN-GAN implements the spectral normalization via one-step power iteration. This procedure consistently underestimates spectral norms of weight matrices. Consequently, in addition to controlling the spectral norms, the spectral normalization in SN-GAN affects the whole spectrum of the weight matrix (encourages slow singular value decay as in Figure 1), which we refer to as ``flexibility''. Built upon these empirical observations, we conjecture that controlling the whole spectrum better improves the performance of GANs, which is further corroborated by our numerical experiments. This discussion has been added to the beginning of Section 2.2.2 in the revised version.\\n\\nTheorem 2 justifies the benefit of controlling spectral norms in GANs. [1] and our result both show that normalizing the largest singular value yields better performance than the original DC-GAN. On the other hand, as discussed in Remark 4, we are still lacking tools to characterize the effect of slow singular value decay on generalization as well as on preventing mode collapse. Thus, we leave it for future investigation.\\n\\n[1] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral Normalization for Generative Adversarial Networks. Feb. 2018.\"}",
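The response above attributes SN-GAN's slow singular value decay to one-step power iteration underestimating the spectral norm. A toy sketch of that effect on a matrix with known largest singular value (an illustrative reimplementation, not Miyato et al.'s code):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def matvec(A, v):
    return [dot(row, v) for row in A]

def matvec_T(A, v):
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

def sn_estimate(A, u, steps):
    """SN-GAN-style power iteration: alternately update (v, u) and return
    the estimated largest singular value u^T A v (steps >= 1)."""
    for _ in range(steps):
        v = normalize(matvec_T(A, u))
        u = normalize(matvec(A, v))
    return dot(u, matvec(A, v))

A = [[3.0, 0.0], [0.0, 1.0]]        # true sigma_max = 3
u0 = normalize([1.0, 1.0])          # generic (non-aligned) start vector
one_step = sn_estimate(A, u0, 1)    # underestimates: sqrt(8.2) ~ 2.864
many_steps = sn_estimate(A, u0, 50) # converges to the true value 3
```

With a single iteration from a generic start, the estimate stays strictly below the true spectral norm, so dividing by it normalizes the matrix less aggressively; this is the mechanism the rebuttal credits for the observed slow decay.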
"{\"title\": \"Interesting paper that proposes a way to control the spectrum of the network's weights\", \"review\": \"The paper is a natural extension of [1], which shows the importance of spectral normalization in encouraging diversity of the discriminator weights in a GAN. A simple and effective parametrization of the weights, similar to the SVD, is used: W = USV^T, along with an orthonormal penalty on U and V and a spectral penalty to control the decay of the spectrum. Unlike other parametrizations of orthogonal matrices which are exact but computationally expensive, the proposed one tends to be very accurate in practice and much faster. A generalization bound is provided that shows the benefit of controlling the spectral norm. Experimental results show that the method is accurate in constraining the orthonormality of U and V and in controlling the spectrum. The experiments also show a marginal improvement of the proposed method over SN-GAN [1].\\nHowever, it is unclear why one would want to control the whole spectrum when Theorem 2 only involves the spectral norm. In [1], it is argued that this encourages diversity in the weights, which seems intuitive. However, it seems enough to use spectral normalization to achieve that purpose empirically, according to that same paper. It would perhaps be good to have an example where SN fails to control the spectrum in a way that significantly impacts the performance of the algorithm while the proposed method doesn't.\\n\\nOverall the paper is clearly written and the proposed algorithm effectively controls the spectrum as shown experimentally; however, given that the idea is rather simple, it is important to show its significance with examples that clearly emphasize the importance of controlling the whole spectrum versus the spectral norm only.\", \"revision\": \"Figure 1 is convincing and hints at why SN-GAN achieves slow decay while in principle it only tries to control the spectral norm. 
I think this paper is a good contribution as it provides a simple and efficient algorithm to precisely control the spectrum. Moreover, a recent work ([2], Theorem 1) provides theoretical evidence for the importance of controlling the whole spectrum, which makes this contribution even more relevant.\\n\\n\\n[1] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral Normalization for Generative Adversarial Networks. Feb. 2018.\\n[2] M. Arbel, D. J. Sutherland, M. Bińkowski, and A. Gretton. On gradient regularizers for MMD GANs. NIPS 2018.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Improves stability of training of GANs\", \"review\": \"The paper builds on the experimental observations made in Miyato et al. (2018), in which the authors highlight the utility of spectral normalization of weight matrices in the discriminator of a GAN to improve the stability of the training process. The paper proposes to reparameterize the weight matrices by something that looks like the singular value decomposition, i.e. W = U E V^T. Four different techniques to control the spectrum of W by imposing various constraints on E have been discussed. For maintaining the orthonormality of U and V, penalties are added to the cost function. The paper also derives a bound on the generalization error and experimentally shows the \\\"desirable slow decay\\\" of singular values in weight matrices of the discriminator. Other experiments which compare the proposed approach with SN-GAN have also been given.\\n \\n(1) The paper puts a lot of stress on the stability of the training process in the beginning, but clear experiments supporting the claim of improved \\\"stability\\\" are lacking. \\n(2) It would be helpful for the readers if more clarity were added to the paper with respect to the desirability of \\\"slow decay of singular values\\\" and spectral normalization.\\n(3) The point regarding convolutional layers should be part of the main paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Reviewer 2 Review\", \"review\": \"This paper proposes to parameterize the weight matrices of neural nets using the SVD, with approximate orthogonality enforced on the singular vectors using Orthogonal Regularization (as opposed to e.g. the Cayley transform or optimizing on the Stiefel manifold), allowing for direct, efficient control over the spectra. The method is applied to GAN discriminators to stabilize training as a natural extension of Spectral Normalization. This method incurs a slight memory and compute cost and achieves a minor performance improvement over Spectral Normalization on two benchmark image generation tasks.\\n\\nI'm a bit back and forth on this paper. On the one hand, I think the ideas this paper proposes are very interesting and could provide a strong basis off which future work can be built--the extension of spectral normalization to further study and manipulation of the spectra is natural and very promising. However, the results obtained are not particularly strong, and as they stand do not, in my opinion, justify the increased compute and memory cost of the proposed methods. The paper's presentation also wavers between being strong (there were some sections I read and immediately understood) and impenetrable (there were other sections which I had to read 5-10 times just to try and grasp what was going on).\\n\\nUltimately, my vote is for acceptance. I think that we should not throw out a work with interesting and potentially useful ideas just because it does not set a new SOTA, especially when the current trend with GANs seems to suggest that top performance comes at a compute cost that only a few groups have access to. 
With another editing pass to improve language and presentation, this would be a strong, relevant paper worthy of the attention of the ICLR community.\", \"my_notes\": \"-The key idea of parameterizing matrices as the SVD by construction, but using a regularizer to properly constrain U and V (instead of the expensive Cayley transform, or trying to pin the matrices to the Stiefel manifold) is very intriguing, and I think there is a lot of potential here.\\n\\n-This paper suffers from a high degree of mathiness, substituting dense notation in places where verbal explanation would be more appropriate. There are several spots where explaining the intuition behind a given idea (particularly when proposing the various spectrum regularizers) would be far more effective than the huge amount of notation. In the author's defense, the notation is generally used as effectively as it could be. My issue is that it often is just insufficient, and communication would be better served with more illustrative figures and/or language.\\n\\n-I found the way the paper references Figure 1 confusing. The decays are substantially different for each layer--are these *all* supposed to be examples of slow decay? Layer 6 appears to have 90% of its singular values below 0.5, while layer 0 has more than 50%. If this is slow decay, what does an undesirable fast decay look like? Isn't the fast decay as shown in figure 2 almost exactly what we see for Layer 6 in figure 1? What is the significance of the sharp drop that occurs after some set number of singular values? The figure itself is easy to understand, but the way the authors repeatedly refer to it as an example of smooth singular decays is confusing.\\n\\n-What is D-optimal design? This is not something commonly known in the ML literature. The authors should explain what exactly that D-optimal regularizer does, and elucidate its backward dynamics (in an appendix if space does not permit it in the main body). 
Does it encourage all singular values to have similar values? Does it push them all towards 1? I found the brief explanation (\\\"encourages a slow singular value decay\\\") to be too brief--consider adding a plot of the D-optimal spectrum to Figure 1, so that the reader can easily see how it would compare to the observed spectra. Ideally, the authors would show an example of the target spectra for each of the proposed regularizers in Figure 1. This might also help elucidate what the authors consider a desirable singular value decay, and mollify some of the issues I take with the way the paper references figure 1.\\n\\n-The explanation of the Divergence Regularizer is similarly confusing and suffers from mathiness, a fact which I believe is further exacerbated by its somewhat odd motivation. Why, if the end result is a reference curve toward which the spectra will be regularized, do the authors propose (1) a random variable which is a transformation of a gaussian (2) to take the PDF of that random variable (3) discretize the PDF (4) take the KL between a uniform discrete distribution and the discretized PMF and (5) ignore the normalization term? If the authors were actually working with random variables and proposing a divergence this might make sense, but the items under consideration are singular values which are non-stochastic parameters of a model, so treating them this way seems very odd. 
Based on figure 2 it looks like the resulting reference curves are fine, but the explanation of how to arrive there is quite convoluted--I would honestly have been more satisfied if the authors had simply designed a function (a polynomial logarithmic function perhaps) with a hyperparameter or two to control the curvature.\\n\\n-\\\"Our experimental results show that both combinations achieve an impressive results on CIFAR10 and STL-10 datasets\\\"\\nPlease do not use subjective adjectives like \\\"impressive.\\\" A 6.5% improvement is okay, but not very impressive, and when you use subjective language you run the risk of readers and reviewers subjectively disagreeing with you, as is the case with this reviewer. Please also fix the typo in this sentence, it should at least be \\\"...achieve [impressive] results\\\" or \\\"achieve an [impressive] improvement on...\\\"\", \"section_3\": \"-What is generalization supposed to mean in this context? It's unclear to me why this is at all relevant--is this supposed to be indicating the bounds for which the Discriminator will correctly distinguish real vs generated images? Or is there some other definition of generalization which is relevant? Does it actually matter for what we care about (training implicit generative models)? \\n\\n-What exactly is the use of this generalization bound? What does it tell us? What are the actual situations in which it holds? Is it possible that it will ever be relevant to training GANs or to developing new methods for training GANs?\", \"experiments\": \"-I appreciate that results are taken over 10 different random seeds.\\n\\n-If the choice of gamma is unimportant then why is it different for one experiment? I found footnote 4 confusing and contradictory. 
\\n\\n-For figure 3, I do not think that the margin is \\\"significant\\\"--it constitutes a relative 6.5% improvement, which I do not believe really justifies the increased complexity and compute cost of the method.\\n\\n-I appreciate Table 1 and Figure 4 for elucidating (a) how orthogonal the U and V matrices end up and (b) the observed decay of the spectra.\", \"appendix\": \"-Please change table 7 to be more readable, with captions underneath each figure rather than listed at the top and forcing readers to count the rows and match them to the caption. What is the difference between SN-GAN and Spectral Norm in this table? Or is that a typo, and it should be spectral-constraint?\\n\\n-I would like to see a discussion of table 7 / interpretation of why the spectra look that way (and why they evolve that way over training) for each regularizer.\", \"minor\": \"-Typos and grammatical mistakes throughout.\\n-As per the CIFAR-10/100 website (https://www.cs.toronto.edu/~kriz/cifar.html) the Torralba citation is not the proper one for the CIFAR datasets, despite several recent papers which have used it.\\n-Intro, last paragraph, \\\"Generation bound\\\" should be generalization bound?\\n-Page 4, paragraph 2, last sentence, problem is misspelled.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HJMHpjC9Ym | Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition | [
"Chun-Fu (Richard) Chen",
"Quanfu Fan",
"Neil Mallinar",
"Tom Sercu",
"Rogerio Feris"
] | In this paper, we propose a novel Convolutional Neural Network (CNN) architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. This is achieved by using a multi-branch network, which has different computational complexity at different branches with different resolutions. Through frequent merging of features from branches at distinct scales, our model obtains multi-scale features while using less computation. The proposed approach demonstrates improvement of model efficiency and performance on both object recognition and speech recognition tasks, using popular architectures including ResNet, ResNeXt and SEResNeXt. For object recognition, our approach reduces computation by 1/3 while improving accuracy significantly over 1% point than the baselines, and the computational savings can be higher up to 1/2 without compromising the accuracy. Our model also surpasses state-of-the-art CNN acceleration approaches by a large margin in terms of accuracy and FLOPs. On the task of speech recognition, our proposed multi-scale CNNs save 30% FLOPs with slightly better word error rates, showing good generalization across domains. | [
"CNN",
"multi-scale",
"efficiency",
"object recognition",
"speech recognition"
] | https://openreview.net/pdf?id=HJMHpjC9Ym | https://openreview.net/forum?id=HJMHpjC9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJgg51PHxN",
"Syll-AfcTQ",
"BJxO_af9pQ",
"Hye0Uaf96X",
"B1xmXpMqpm",
"BygYyRtc3Q",
"BJx2-Tcv3m",
"B1egeFYOom"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545068423762,
1542233592107,
1542233456498,
1542233429528,
1542233370927,
1541213664818,
1541020932027,
1540032743726
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper800/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper800/Authors"
],
[
"ICLR.cc/2019/Conference/Paper800/Authors"
],
[
"ICLR.cc/2019/Conference/Paper800/Authors"
],
[
"ICLR.cc/2019/Conference/Paper800/Authors"
],
[
"ICLR.cc/2019/Conference/Paper800/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper800/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper800/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a novel CNN architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. Reviewers generally arrived at a consensus on acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Simple and effective\"}",
"{\"title\": \"updated the pdf\", \"comment\": \"We updated the pdf to address the comments from the reviewers. (the revised parts are highlighted in blue.)\"}",
"{\"title\": \"feedback\", \"comment\": \"We thank the reviewer for the positive comments on our approach. We have included in Table 11 (Page 18) the results of bL-ResNet-50 and bL-ResNet-101 with alpha and beta both set to 1. Not surprisingly, both models achieve the best accuracy, but they also become the most costly in computation and the most parameter-heavy.\"}",
"{\"title\": \"feedback\", \"comment\": \"We thank the reviewer for the constructive comments.\\n\\n- Transfer capability of bLNet:\\nWe used bLNet as a backbone network for feature extraction in the Faster RCNN + FPN detector.\\nThe detection results on PASCAL VOC and COCO datasets are included in Table 10 in Appendix A6.\\nOur bLNet achieves comparable or better accuracy than the baseline detectors while reducing FLOPs by about 1.5 times.\\nPlease refer to Table 10 in Appendix A6 for more detail.\\n\\n- Memory requirements of bLNet:\\nWe benchmarked the GPU memory consumption in runtime at both the training and test phases for all the models evaluated in Fig. 3.\\nThe results are shown in Fig. 5 in Appendix A7. The batch size was set to 8, which is the largest number allowed for NASNet on a P100 GPU card (16 GiB memory). The image size for any model in this benchmark experiment is the same as that used in the experiment reported in Fig. 3. For bLNet, the input image size is 224x224 in training and 256x256 in test.\\n\\nFrom Fig. 5, we can see that bLNet is the most memory-efficient for training among all the approaches. \\nIn test, bL-ResNeXt consumes more memory than inception-resnet-v2 and inception-v4 at the same accuracy, \\nbut bL-SEResNeXt outperforms all the approaches. Note that NASNet and PNASNet are not memory friendly.\\nThis is largely because they are trained on a larger image size (331x331) and these models are composed of many layers.\"}",
"{\"title\": \"feedback\", \"comment\": \"We thank the reviewer for the positive comments on our approach. We have revised the manuscript to clarify our contributions in the introduction. For the parameters alpha and beta in bLNet, although they could be tuned for each layer, we fixed them (alpha=2 and beta=4) in all our experiments except in the ablation study. We found that this universal setting in general leads to good tradeoffs between accuracy and computation cost among all the models consistently. In the future, we are interested in exploring reinforcement learning to search for optimal alpha and beta to achieve a better tradeoff.\"}",
"{\"title\": \"Simple way to gain performance and computation\", \"review\": \"This paper presents a novel multi-scale architecture that achieves a better speed/accuracy trade-off than most of the previous models. The main idea is to decompose a convolution block into multiple resolutions and trade computation for resolution, i.e. low computation for high resolution representations and higher computation for low resolution representations. In this way the low resolution can focus on having more layers and channels, but coarsely, while the high resolution can keep all the image details, but with a smaller representation. The branches (normally two) are merged at the end of each block with linear combination at high resolution. Results for image classification on ImageNet with different network architectures and for speech recognition on Switchboard show the accuracy and speed of the proposed model.\", \"pros\": [\"The idea makes sense and it seems GPU friendly in the sense that the FLOPs reduction can be easily converted into a real speed-up\", \"Results show that the joint use of two resolutions can provide better accuracy and lower computational cost, which is normally quite difficult to obtain\", \"The paper is well written and experiments are well presented.\", \"The appendix shows many interesting additional experiments\"], \"cons\": [\"The improvement in performance and speed is not exceptional, but steady on all models.\", \"Alpha and beta seem to be two hyper-parameters that need to be tuned for each layer.\"], \"overall_evaluation\": \"Globally the paper seems well presented, with an interesting idea and many thorough experiments that show the validity of the approach. In my opinion this paper deserves to be published.\", \"additional_comments\": [\"- In the introduction (top of page 2) and in the contributions, the advantages of this approach are explained in a different manner that can be confusing. 
More precisely, in the introduction the authors say that bL-Net yields a 2x computational saving with better accuracy. In the contributions they say that the savings in computation can be up to 1/2 with no loss in accuracy.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"paper review\", \"review\": \"The authors propose a new CNN architecture and show results on object and speech recognition. In particular, they propose a multi-scale CNN module that processes feature maps at various scales. They show compelling results on IN and a reduction of compute complexity\", \"pros\": \"(+) The paper is well written\\n(+) The method is elegant and reproducible\\n(+) Results are compelling and experimentation is thorough\", \"cons\": \"(-) Transfer to other visual tasks, beyond IN, is missing\\n(-) Memory requirements are not mentioned, besides FLOPs, speed and parameters\\n\\nOverall, the proposed approach is elegant and clear. The impact of the multi-scale module is evident, in terms of FLOPs and performance. While their approach performs a little worse than NASNet, both in terms of FLOP efficiency and top1-error, it is simpler and easier to train. I'd like for the authors to also discuss memory requirements for training and testing the network. \\n\\nFinally, various papers have appeared over the recent years showing improvements over baselines on ImageNet. However, most of these papers are not impactful, because they do not show any impact to other visual tasks, such as detection. On the contrary, methods that do transfer get adopted very fast. I would be much more convinced of this approach, if the authors showed similar performance gains (both in terms of complexity and metrics) for COCO detection.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"extension of multi-scale network, and expected good results\", \"review\": \"The big-little module is an extension of the multi-scale module. Different scales take different complexities: higher complexity for low scale, and lower complexity for high scale. Two schemes of merging the two branches are also discussed, and the linear combination is empirically better.\\n\\nAs expected, the results are better than ResNets, ResNexts, SEResNexts. I do not have comments except that an ablation study is needed to show the results for more choices of alpha, beta, e.g., alpha = 1, beta = 1.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SyMras0cFQ | An adaptive homeostatic algorithm for the unsupervised learning of visual features | [
"Victor Boutin",
"Angelo Franciosini",
"Laurent Perrinet"
] | The formation of structure in the brain, that is, of the connections between cells within neural populations, is by large an unsupervised learning process: the emergence of this architecture is mostly self-organized. In the primary visual cortex of mammals, for example, one may observe during development the formation of cells selective to localized, oriented features. This leads to the development of a rough representation of contours of the retinal image in area V1. We modeled these mechanisms using sparse Hebbian learning algorithms. These algorithms alternate a coding step to encode the information with a learning step to find the proper encoder. A major difficulty faced by these algorithms is to deduce a good representation while knowing immature encoders, and to learn good encoders with a non-optimal representation. To address this problem, we propose to introduce a new regulation process between learning and coding, called homeostasis. Our homeostasis is compatible with a neuro-mimetic architecture and allows for the fast emergence of localized filters sensitive to orientation. The key to this algorithm lies in a simple adaptation mechanism based on non-linear functions that reconciles the antagonistic processes that occur at the coding and learning time scales. We tested this unsupervised algorithm with this homeostasis rule for a range of existing unsupervised learning algorithms coupled with different neural coding algorithms. In addition, we propose a simplification of this optimal homeostasis rule by implementing a simple heuristic on the probability of activation of neurons. Compared to the optimal homeostasis rule, we show that this heuristic allows to implement a more rapid unsupervised learning algorithm while keeping a large part of its effectiveness. These results demonstrate the potential application of such a strategy in machine learning and we illustrate this with one result in a convolutional neural network. | [
"Sparse Coding",
"Unsupervised Learning",
"Natural Scene Statistics",
"Biologically Plausible Deep Networks",
"Visual Perception",
"Computer Vision"
] | https://openreview.net/pdf?id=SyMras0cFQ | https://openreview.net/forum?id=SyMras0cFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkeE9YE9g4",
"Hkgz7dflxE",
"H1gqIX0TAm",
"BylRLBcBAQ",
"BylQtQPHRX",
"Syl4GvUHA7",
"ByxGwNP4Rm",
"rylvv96opX",
"H1eP8UR9hm",
"ryxPs1nNnQ"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545386380048,
1544722457870,
1543525202340,
1542985046356,
1542972282960,
1542969100346,
1542906969958,
1542343262683,
1541232207487,
1540829087221
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper799/Authors"
],
[
"ICLR.cc/2019/Conference/Paper799/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper799/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper799/Authors"
],
[
"ICLR.cc/2019/Conference/Paper799/Authors"
],
[
"ICLR.cc/2019/Conference/Paper799/Authors"
],
[
"ICLR.cc/2019/Conference/Paper799/Authors"
],
[
"ICLR.cc/2019/Conference/Paper799/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper799/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper799/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Submission Withdrawn by the Authors\", \"withdrawal_confirmation\": \"I have read and agree with the withdrawal statement on behalf of myself and my co-authors.\"}",
"{\"metareview\": \"This paper shows how to obtain more homogeneous activation of atoms in a dictionary. As reviewers point out, the paper is well written and indeed shows that the proposed scheme results in a more uniform activation. However, the value of this contribution rests on making a case that uniformity is indeed a desirable outcome per se. As two reviewers explain, this crucial point is left unaddressed, which makes the paper too weak for ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Unclear what the benefit of this approach is\"}",
"{\"title\": \"Additional experiments make the paper stronger, but it is still not well motivated\", \"comment\": \"The authors in their revision have convinced me that their algorithm does what they say it does; and I am happy to concede that \\\"I don't think the authors have demonstrated that their claims on the properties of their algorithms/formulations are generally true.\\\" is no longer valid.\\n\\n However, I am still not convinced that the presented work is relevant or useful. Statements like \\\" ... Figures 1 and 3 now show the clear qualitative advantage of using homeostasis in unsupervised learning ...\\\" are mystifying to me: *why* is there an advantage? The same with \\\"may converge to a result for which the ratio of activity between the most activated and the least activated is of the order 2\\\". Why is this a problem? What order would not be a problem? The goal of unsupervised learning is rarely compression for its own sake (and even when this is the goal, measuring success in the space of images, for example, requires human evaluation). Furthermore, it is not unusual for feature representations that are optimally compressed to be less useful for other tasks. Sparse coding gained popularity in the machine learning community because it led to SOTA algorithms in image denoising, super-resolution, and object recognition. Does this approach improve the results on any of these? Is the improvement enough to surpass modern approaches to these problems? Is there some other downstream task where the authors' method makes a significant difference? The authors say that \\\"This result is often overlooked in dictionary learning and is a first novel result of the paper. \\\". My claim is that this result is not unknown; but rather, has not been generally discussed in the broader sparse coding literature because it has not been considered a serious problem. 
On the other hand, in the clustering literature (note that clustering is a very particular form of l_0 sparse coding), where cluster balance can be a serious problem with downstream consequences, there are many works investigating cluster balancing. \\n\\nI am changing my score to 5.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for having taken the time to judge our paper and for having detailed their judgement on the two points. We would like to point out that AnonReviewer4's final quantitative score, as well as the confidence given, will be decisive for whether this paper will or will not be presented at ICLR. We would like to respectfully detail how we completely disagree with the comments given on the two points, but acknowledge that this was mainly due to the way we presented the motivation for the paper. We hope the revised version of the paper now meets the standards for ICLR and justifies updating the \\\"red flag\\\" (clear rejection) to a green light.\\n\\nFirst, the goal is not faster computation on a CPU. Our (github-shared) code runs in a few dozen seconds per learning run on a standard laptop - but the goal is mainly to be able to test all parameters. We have not used SPAMS in this work as we could use similar methods from the sklearn library. However, SPAMS is a great inspiration for our framework. (For information, the complete simulations for this paper take approximately 12 hours --which are easily distributed on a cluster as we multiplied the number of independent learning runs using different classes of parameters, cross-validations and types of sparse coding algorithms - in total approx 500 experiments. It takes a dozen minutes on a 100-node cluster.) Our motivation is mainly to understand biological vision and we hope this would percolate to ML. Yes, we obtain faster convergence, but as an epiphenomenon of the better efficiency of our adaptive homeostatic algorithm. However, we agree that this was not clear in this first revision: the atoms which were displayed looked qualitatively similar. We have solved this issue thanks to the comments of the anonymous reviewers by now displaying the most and least active atoms. 
This shows a clear distinction between different methods and an important result: when $\\\\ell_2$-normalizing atoms, dictionary learning may converge to a result for which the ratio of activity between the most activated and the least activated is of the order of 2. This result is often overlooked in dictionary learning and is a first novel result of the paper. \\n\\nThis being said, Figures 1 and 3 now show the clear qualitative advantage of using homeostasis in unsupervised learning. This now allows one to understand *why* convergence speed is a good indicator ---not of an advantage in running speed on a classical CPU--- but rather in showing that this allows a more efficient dictionary learning overall. Concerning the point \\\" It is not even clear that the final compression of the baselines would not be better. Even if they did show these convincingly, it is not obvious to me that it is valuable.\\\", we have performed the same experiments with more iterations such that we clearly see that the baselines stay separate. Finally, on the same point, we have not included any application, such as supervised learning, as it is out of the scope of this paper. But we thank the reviewer for suggesting it.\\n\\nSecond, we had already done the comparison \\\"against several costs/algorithms (e.g. l_0 with OMP, l_1 with LARS, etc.), and across various N_0/sparsity penalties\\\" but we had initially omitted to include this supplementary data (which takes the form of a single jupyter notebook that allows one to reproduce all results). We have now included it in an anonymized format. This supplementary material contains code to replicate all figures but also additional experiments to test the effect of the different parameters. 
In short, we verified that the results we present are valid over a range of network parameters, like the learning rates (figure 2) but also the sparsity and the size of the dictionary (see Response To AnonReviewer3 @ https://openreview.net/forum?id=SyMras0cFQ&noteId=BylQtQPHRX ). As in the Sandin (2017) paper, we have shown similar results with OMP. We are in the process of extending this framework to other sparse coding algorithms (LARS and lasso_lars) as plugged in from sklearn without any modification (in theory) to these algorithms. Indeed, we should point out that our adaptive homeostasis can be implemented by modifying the norm of each atom of the dictionary (as was done in the original work by Olshausen). We also show in the paper the application to a one-layer convolutional network and our preliminary results show that we can extend this to a hierarchical network.\\n\\nWe hope that with these clarifications on the form we gave to the paper (without changing the theory behind it), the statement that \\\" I don't think the authors have demonstrated that their claims on the properties of their algorithms/formulations are generally true.\\\" could be re-assessed to allow us to share this work inspired by biology with the ICLR community.\"}",
"{\"title\": \"Response To AnonReviewer3\", \"comment\": \"We thank the reviewer for their careful reading of our paper. Concerning point 6: Indeed, we acknowledge that this type of paper may be unconventional for the audience at ICLR. But we strongly believe that scientific knowledge on biological vision is essential to work out the models that will shape DL in the future. Thus, we fully understand the rating given by the reviewer and would like to suggest that our revision addresses the main comment and shows that it is relevant for a presentation at ICLR.\\n\\nFirst, we have extended the results by using the useful suggestions of AnonReviewer3 (point 3):\\nAs suggested by the reviewer, we have tested how the convergence was modified by changing the number of neurons. By comparing different numbers of neurons, we could re-draw the same figures for the convergence of the algorithm as in our original figures. In addition, we have also checked that this result holds over a range of sparsity levels. In particular, we found that in general, when increasing the l0_sparseness parameter, the convergence took progressively longer. Importantly, we could see that in both cases, this did not depend on the kind of homeostasis heuristic chosen, proving the generality of our results.\\n\\nThis is shown in the supplementary material that we have added to our revision (section \\\"Testing different number of neurons and sparsity\\\"). This useful extension proves the originality of our work as highlighted in point 4, and the generality of these results with respect to the parameters of the network.\\n\\nSecond, the comment made in point 5 is essential: figures 1 and 3 in our first revision were not appropriately showing the qualitative improvement which is achieved in the resulting filters. Indeed, we were showing 18 atoms chosen at random from the 441 filters of the dictionary. 
We initially thought that this \\\"blind\\\" shuffling would be a fair representation of the data, but as revealed by point 5, this was not true. We have now changed the strategy by showing \\\"the upper and lower row respectively show the least and most probably selected atoms.\\\" (see captions of figures 1 and 3). This now clearly shows the qualitative improvement in using a proper homeostasis and in particular that using the $\\\\ell_2$ normalization leads to the emergence of filters which are aberrant (either too selective or not selective enough). In particular, we now show quantitatively the probability of choice of each atom - showing that the most active filters are used twice as often as the least active ones.\\n\\nFinally, we have made an extensive pass on the manuscript to take into account the different points and make sure that this approach derived from biological vision is relevant for the audience at ICLR.\\n\\nAs such, we believe this major change in the way we present the work, both in the quality of the resulting filters and in the generality of the results, has significantly changed the scope of our work to justify its acceptance to ICLR. We thank again the reviewer for these very useful contributions to our work.\"}",
"{\"title\": \"Response To AnonReviewer1\", \"comment\": \"We thank AnonReviewer1 for the careful reading and encouraging comments.\\n\\nIndeed, the idea of a non-linear or adaptive gain normalization is novel to our knowledge, and is the main reason for deciding to submit this work to ICLR. We based our theoretical insight on extensive reading of and experience with neurophysiological data, which we tried as much as possible to reconcile with the latest literature in ML/DL. \\n\\nIn particular, we think that this problem is resolved in most DL approaches using heuristics such as dropout ( http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf ) or batch normalization ( https://arxiv.org/abs/1502.03167 ). We acknowledge that the objective we use (equiprobability) may seem arbitrary, but we think that 1) it best fits constraints in biological populations of neurons 2) it can be adapted to other priors on the desired probability of nodes in the network. \\n\\nIn our current revision, while keeping the same theoretical framework and simulation results, we have highlighted our main contributions: 1/ to show that $\\\\ell_2$ normalization leads to non-homogeneous data 2/ to provide an exact rule 3/ to propose a simplified heuristic and show its effectiveness. \\n\\nAlso, we have fixed the typos and minor issues (in \\\"Misc\\\") in the revision that is being uploaded to the openreview preprint server.\\n\\nThanks again for your careful reading.\"}",
"{\"title\": \"Motivation and main points on the changes made to the presentation of the results\", \"comment\": \"We appreciate the feedback from the reviewers, especially looking at the qualitative judgments (\\\"solid work\\\", \\\"well written\\\", \\\"interesting experiments showing faster unsupervised learning\\\"). However, we would like to highlight that the quantitative evaluation (9 / 4 / 3) is not consistent. Basing selection on the mean score makes it highly improbable that, as it is, the paper could be presented to the audience of ICLR.\\n\\nWe believe that it is possible to address the main criticism (\\\"not motivated\\\", \\\"importance unclear\\\"), to leverage its importance and demonstrate that it meets the standard of ICLR - and not the \\\"red flag\\\" (3 with high confidence of 5) that it should be rejected straight away :-) We acknowledge that this was a major issue and fully accept our fault: it lies in the readability of the paper. In particular, due to our quite unconventional way of justifying our computational choices by the current knowledge of neurophysiological processes, we understand that this is unusual in the machine learning community. Still, we also think that --without changing the theoretical background of this work-- we can change its form to allow reviewers and the conference program chairs to converge to an optimal decision on the acceptance of this paper.\\n\\nFor that, we have highlighted the main contribution: homeostasis in one form or the other is necessary for dictionary learning. When enforcing a simple $\\\\ell_2$ normalization, one may still obtain solutions for which some filters are more probable - and others more selective. Thanks to the reviewers' comments, we found a way to highlight this by reshuffling the atoms which we show: instead of selecting them randomly, we chose the extreme atoms (most and least probably selected). 
In hierarchical processing where the structural complexity of the features within each layer is preferentially homogeneous, it is undesirable to have a non-uniform distribution of the features' structural complexity. We think that this knowledge from the biology of neural networks is an essential contribution to artificial networks.\\n\\nSecond, we have strengthened the qualitative evaluation of the algorithm. In the revision, we have shown results for larger datasets and longer learning runs, and highlighted the qualitative difference in the obtained dictionaries (just by changing the way they are displayed in the figure, see detailed responses). Importantly, we have included supplementary material, which was absent from the first revision, and which shows the extension of this framework to other levels of sparsity, but also to different architectures. This was requested by the latest reviewer.\\n\\nIn summary, we believe that these modifications were necessary to make the paper more impactful and we thank the reviewers for providing this essential input. We hope that this will allow us to present this quite unconventional work at ICLR.\"}
"{\"title\": \"Well written but poorly motivated\", \"review\": \"This paper discusses the addition of a regularizer to a standard sparse coding/dictionary learning algorithm to encourage the atoms to be used with uniform frequency. I do not think this work should be accepted to the conference for the following reasons:\", \"1\": \"The authors show no benefit of this scheme except perhaps faster convergence. If faster training of dictionary learning models was a bottleneck in practical applications, this might be of interest, but it is not. SPAMS (http://spams-devel.gforge.inria.fr/) can train a model on image patches as the authors do here in a few tens of seconds on a modern computer. On the other hand, the authors give no evidence, empirical or otherwise, that their method is useful on any downstream tasks.\\nIn my view, they do not even show that the distribution of atom usage will be better with their algorithm after the learning has converged, as at least according to their learning curves, the baselines have not finished converging. It is not even clear that the final compression of the baselines would not be better. Even if they did show these convincingly, it is not obvious to me that it is valuable; the authors need to *show* that uniform usage is desirable.\", \"2\": \"The authors should compare against several costs/algorithms (e.g. l_0 with OMP, l_1 with LARS, etc.), and across various N_0/sparsity penalties, and across several datasets. The empirical evaluation is quite weak- one sparsity setting, two baselines, one dataset. Even without the \\\"train to convergence\\\" question above, I don't think the authors have demonstrated that their claims on the properties of their algorithms/formulations are generally true.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Solid work but the importance unclear\", \"review\": \"Please consider this rubric when writing your review:\\n1. Briefly establish your personal expertise in the field of the paper.\\n2. Concisely summarize the contributions of the paper.\\n3. Evaluate the quality and composition of the work.\\n4. Place the work in context of prior work, and evaluate this work's novelty.\\n5. Provide critique of each theorem or experiment that is relevant to your judgment of the paper's novelty and quality.\\n6. Provide a summary judgment if the work is significant and of interest to the community.\\n\\n1. I am a researcher working at the intersection of machine learning,\\ncomputational neuroscience and biological vision. I have experience\\nwith neural network models and visual neurophysiology.\\n\\n2. This paper develops and tests an adaptive homeostatic algorithm for\\nunsupervised visual feature learning (for example for learning models\\nof early visual processing/V1).\\n\\n3. The work spends a lot of pages describing the general problem of\\nunsupervised feature learning and the history of the base algorithms.\\nThe literature review is quite extensive. The new content appears to\\nbe in section 2.2 (Histogram Equalization Homeostasis - HEH), where a\\nsimple idea is introduced to keep all units at balanced activity over the set of\\nnatural images. The authors also develop a computationally cheaper\\nversion they call HAP (Homeostasis on Activation Probability). The\\nauthors show that their F function is optimized quicker with the HEH\\nand HAP algorithms. I would like to see how these curves vary with\\nthe number of neurons (e.g. can you add X% more neurons and get\\nsimilar convergence speed -- and if so which is more computationally\\ncostly)?\\n\\n4. Many groups have developed various homeostatic algorithms for\\nunsupervised learning, though I have not seen this exact one before.\\n\\n5. 
The experiments reveal the resulting receptive fields and show the \\ndecrease in the F function (error function). The resulting receptive fields\\ndo not seem that different to me between the different methods. I am also not\\nthat convinced that the faster convergence as a function of learning step is that important\\nespecially as the learning steps may be more computationally expensive for this method.\\n\\n6. I am not sure how interesting this work will be for the ICLR audience,\\nas it is not clear how important the faster convergence and more even\\nutilization of neurons is (and how it would compare computationally\\nwith just having more neurons).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well written paper, with good literature review and interesting experiments showing faster unsupervised learning\", \"review\": \"This paper proposes a bio-inspired sparse coding algorithm where iterations\\nfor dictionary updates take into account the past updates. It is argued\\nthat time plays a crucial role in learning.\\n\\nThe paper is quite well written and contains an extensive literature review\\ndemonstrating a good understanding of previous literature in both ML/DL and biological\\nvision.\\n\\nThe idea of using a \\\"non-linear gain normalization\\\" to adjust atom selection\\nin sparse coding is interesting and as far as I know novel, while providing\", \"interesting_empirical_results\": \"The system learns in an unsupervised way faster.\", \"misc\": [\"Using < > for LaTeX brackets is not ideal. I would recommend: $\\\\langle\\\\,,\\\\rangle$\", \"\\\"derivable\\\" I guess you mean \\\"differentiable\\\"\", \"Oliphant and Hunter are cited for Numpy/scipy and matplotlib but the\", \"reference to Pedregosa et al. for sklearn is missing.\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}
]
} |
|
BklHpjCqKm | Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning | [
"Michael Lutter",
"Christian Ritter",
"Jan Peters"
] | Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system.
Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility.
The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time. | [
"Deep Model Learning",
"Robot Control"
] | https://openreview.net/pdf?id=BklHpjCqKm | https://openreview.net/forum?id=BklHpjCqKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rye6ZCeQgN",
"HkejIJNp14",
"HJe9uxpo1N",
"H1lsBkAKJE",
"HJg1lShYk4",
"SygM8NcFRm",
"BJgEjeIxR7",
"SJe1dTrlCm",
"S1xsD_SxA7",
"BJlOB02JRm",
"SJxMlEq-pm",
"H1exdKrp27",
"S1xd1wOo2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544912389031,
1544531794780,
1544437874289,
1544310594687,
1544303846558,
1543246921797,
1542639771760,
1542638951134,
1542637666788,
1542602303523,
1541673961576,
1541392744509,
1541273312235
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper798/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper798/Authors"
],
[
"ICLR.cc/2019/Conference/Paper798/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper798/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper798/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper798/Authors"
],
[
"ICLR.cc/2019/Conference/Paper798/Authors"
],
[
"ICLR.cc/2019/Conference/Paper798/Authors"
],
[
"ICLR.cc/2019/Conference/Paper798/Authors"
],
[
"ICLR.cc/2019/Conference/Paper798/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper798/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper798/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper798/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper looks at a novel form of physics-constrained system identification for a multi-link robot,\\nalthough it could also be applied more generally. The contribution is in many ways simple; this is seen\\nin a good light (R1, R3) or more modestly (R2). R3 notes surprise that this hasn't been done before.\\nResults are demonstrated on a simulated 2-dof robot and a real Barrett WAM arm, better than a pure\\nneural network modeling approach, PID control, or an analytic model. \\n\\nSome aspects of the writing needed to be addressed, i.e., PDE vs ODE notations. \\nThe point of biggest concern is related to positioning the work relative to other system-identification\\nliterature, where there has been an abundance of work in the robotics and control literature.\\nThere is no final consensus on this point for R3; R3 did not receive the email notification of the author's detailed reply,\\nand notes that the author has clarified some respects, but still has concerns, and did not have time to further\\nprovide feedback on short notice. \\n\\nOn balance, the AC believes that this kind of constrained learning of models is underexplored, and\\nnotes that the reviewers (who have considerable shared expertise in robotics-related work) believe\\nthat this is a step in the right direction and that it is surprising this type of approach has not\\nbeen investigated yet. The authors have further reconciled their work with earlier sys-ID work, and\\ncan further describe how their work is situated with respect to prior art in sys-ID (as they do in\\ntheir discussion comments). 
The AC recommends that: (a) the abstract explicitly mention \\\"system\\nidentification\\\" as a relevant context for the work in this paper, given that the ML audience should\\nbe (or can be) made aware of this terminology; and (b) push more of the math related to the\\ndevelopment of the necessary derivatives to an appendix, given that the particular use of the\\nderivations seems to be more in support of obtaining the performance necessary for online use,\\nrather than something that cannot be accomplished with autodiff.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"lean in favor; one reviewer who lacked time to further evaluate author responses due to failed email notification\"}",
"{\"title\": \"We will update the section about the cable system.\", \"comment\": \"Once we can update the paper, we will make this statement clearer and include the modelling as flexible joint, i.e., as two joint coupled by a massless spring. Furthermore, we will also include that this is not possible with the Barrett WAM as one cannot sense the motor positions.\\n\\nThanks for bringing this to our attention.\"}",
"{\"title\": \"Stand by original evaluation\", \"comment\": \"Neither the comments of the other reviewers nor the response of the authors gives me reason to change my evaluation.\\n\\nThe paper is a small but interesting step towards implementing a physics prior in model learning. Future work should focus on implementing this approach for more complex systems, and find out how to scale this approach.\", \"slight_remark\": \"as pointed out in my earlier review, the cable-systems do not violate the physics prior. While I understand that it was not possible to run the suggested new experiments, the error in the text should be corrected.\"}
"{\"title\": \"In favor of acceptance, because it's a good first step.\", \"comment\": \"I've discussed this paper in a reading group with colleagues (without mentioning that I was reviewing it) to get some more opinions and to discover potential flaws.\\nThe general sentiment was that this method can be difficult to apply in practice, because it has stringent requirements that can be hard to meet with real systems (e.g. a legged robot). The results show only minor improvements over PD controllers and inverse dynamics controllers; however, this might be due to the simplicity of the experiments (2D robot arm). \\nThat being said, the paper is certainly a step in the right direction and I'm in favor of accepting this paper. The method is sound and simple and the authors present hardware and simulation results. It's a simple framework for others to build upon.\"}
"{\"title\": \"reviewers -- comments on author's revisions?\", \"comment\": \"We are reaching the end of the discussion period.\\nThere remain some mixed opinions on the paper.\\n\\nThe authors have provided detailed replies.\\nAny further thoughts from the reviewers, in response to those?\\n\\nStating pros + cons and summarizing any changes in opinion would be greatly appreciated.\\n\\nWe acknowledge that reviewer & author time is limited.\\n-- area chair\"}",
"{\"title\": \"Added the offline benchmark including the suggested system identification approach\", \"comment\": \"Dear Reviewer 3,\\n\\nwe have added an offline comparison to the Appendix. \\\"Appendix A Offline Benchmarks\\\" compares the performance of DeLaN to the system identification approach introduced by Atkeson et al. [1], a feed-forward neural network and the recursive Newton-Euler algorithm using an analytic model. For this comparison the models are trained offline and evaluated using the mean squared error (MSE) on the training and test set. \\n\\nWe added this comparison to the Appendix as we think that the tracking error computed using online learning is the relevant performance indicator and not the offline MSE. We are currently running the online experiments and we will add the results of the system identification approach to the paper as soon as the results become available. \\n\\nWe would be very happy if you could have another look at these results and let us know how we can further improve the paper.\\n\\n[1] Atkeson, C. G., An, C. H., & Hollerbach, J. M., 1986. Estimation of inertial parameters of manipulator loads and links. The International Journal of Robotics Research, 5(3), 101-119.\"}
"{\"title\": \"Thank you for the review, but we disagree with some points and hope to clarify these aspects.\", \"comment\": \"We thank the reviewer for the extensive evaluation. We have updated the paper to precisely differentiate between PDE and ODE and we updated the related work section to explain weaknesses of previous approaches and highlight the differences to existing model learning / system identification (SI) approaches.\\n\\nIn addition, we want to clarify the brought-up points below. If you have further questions, please feel free to ask. \\n\\n1)\\nSI as described in the textbooks or the survey by Wu et al. [1] - which is, by the way, missing key references to state-of-the-art methods such as [2, 3] - is non-trivial and hard for real robots. Our lab has significant experience performing model learning on several robot arms, legged robots and robotic hands. However, using state-of-the-art SI [2, 3], we learned dynamics parameters that did NOT outperform the analytical model of the WAM. Therefore, we only use the analytical model as baseline within the paper. Furthermore, we did evaluate our approach against standard black-box SI methods, as the feed-forward neural network is a standard SI technique, which is also mentioned by Wu et al. [1].\\n\\nIn addition, we disagree with your statement that we did not put our work in the proper context. We related our approach to the extensive research covering model learning. Model learning is much broader than SI, as SI is commonly used to refer to model learning with known basis functions. Therefore, we do not limit our comparison to SI but provide a wider context with model learning. Furthermore, the classic SI described by Atkeson et al. [4] has many limitations (e.g. not applicable to closed-loop kinematics) and will most likely not infer the actual dynamics parameters. As pointed out by Ting et al. [2] and Nakanishi et al. 
[3], the inferred parameters are not guaranteed to yield a positive definite inertia matrix or satisfy the parallel axis theorem (both aspects are ignored within the survey [1]). In contrast, DeLaN is guaranteed to yield a physically plausible model, can be applied to any kinematic structure and does not require any knowledge about kinematics.\\n\\nWe hope we could clarify the problems of standard SI and the consequences for real robot models. Furthermore, we are working to provide empirical data. However, we need to re-implement the features derived by the Newton-Euler formulation as our underlying robotics libraries changed and these features require the computation of all transformations and Jacobians along the complete kinematic chain. If you are aware of a public implementation using URDFs as robot descriptors, please let us know.\\n\\n2) \\nFirst, we think that reporting the derivatives is good scientific practice and second, the analytic computation of the derivatives is necessary for the real-time application. Using automatic differentiation in PyTorch does not allow the computation of the feedforward torque at 200 Hz. As pointed out in these discussions (https://discuss.pytorch.org/t/how-to-compute-jacobian-matrix-in-pytorch/14968/7, https://stackoverflow.com/questions/43451125/pytorch-what-are-the-gradient-arguments/47026836), the computation of the partial derivatives w.r.t. the network input does not scale well to high dimensions. If you would prefer to have these derivations within the Appendix, we can also put them there.\\n\\n3) \\nSorry for being imprecise with the PDE notation. We have updated the paper to be more precise about when the equations refer to a PDE or an ODE. \\n\\nWe also want to point out that Eq. 4 is NOT \\\"just the standard manipulator equations\\\": Eq. 4 applies to any non-relativistic multi-particle system which can be described with holonomic constraints. 
Therefore, the Lagrangian Mechanics formalism is applicable to closed-loop kinematic chains, where the standard Newton-Euler approaches fail. Furthermore, most literature related to manipulator equations ignores the functional dependency between C and H, while DeLaN explicitly models this functional dependency. We have updated the description to clarify the differences.\\n\\n[1] Wu, J., Wang, J. and You, Z., 2010. An overview of dynamic parameter identification of robots. Robotics and Computer-Integrated Manufacturing, 26(5), pp.414-419.\\n\\n[2] Ting, J. A., Mistry, M., Peters, J., Schaal, S., & Nakanishi, J., 2006. A Bayesian Approach to Nonlinear Parameter Identification for Rigid Body Dynamics. In Robotics: Science and Systems, pp. 32-39.\\n\\n[3] Nakanishi, J., Cory, R., Mistry, M., Peters, J. and Schaal, S., 2008. Operational space control: A theoretical and empirical comparison. The International Journal of Robotics Research, 27(6), pp.737-757.\\n\\n[4] Atkeson, C. G., An, C. H., & Hollerbach, J. M., 1986. Estimation of inertial parameters of manipulator loads and links. The International Journal of Robotics Research, 5(3), 101-119.\"}
"{\"title\": \"Thank you for the review and the comments regarding the kinematic structure.\", \"comment\": \"Thank you for your extensive review. Your question regarding closed-loop kinematic chains sparked interesting discussions yielding additional advantages of the approach. We have fixed the issues you mentioned in the figures.\\n\\n1) \\nYes, there are no constraints on the decomposition of the torque. This decomposition is unsupervised and could yield degenerate solutions. From our experience, degenerate solutions are learned if one of the components - either H or g - dominates during initialization. By tuning the hyperparameters of the initialization, i.e., the variance of the Gaussian distribution initializing the weights, one achieves a good decomposition into g and H. \\n\\nTo incorporate external forces, one has two options. First, conservative forces, e.g., joints coupled by springs, can be added to V, i.e., V = V_g + V_p. If the external forces are non-conservative, e.g., contact forces, one must decompose \\\\tau. Commonly \\\\tau is decomposed into \\\\tau = \\\\tau_{friction} + \\\\tau_{actuator} + \\\\tau_{external}. The external forces must be projected to the generalized coordinates using \\\\tau_{external} = J_p^{T} f, where f are the external forces acting on point p and J_p is the Jacobian. \\n\\n\\n2) \\nThis depends on the exact definition of partial observability (PO):\\n\\n- If one interprets PO as observing the state with noise and no direct sensing of the accelerations, DeLaN can learn the dynamics using noisy observations and accelerations approximated using finite differences. \\n\\n- If one interprets PO as missing sensor measurements of a single generalized coordinate, DeLaN will not be able to learn the dynamics. Furthermore, such partial observability would violate the underlying assumption of Lagrangian Mechanics as the system input does not represent generalized coordinates. 
\\n\\n- If one interprets PO as an over-constrained observation, i.e. a high-dimensional signal that encodes the low-dimensional state, one could learn a latent space embedding, where the dynamics in the latent space are described by Lagrangian mechanics. \\n\\n\\n3) \\nSince the Euler-Lagrange equation (Eq. 3) applies to vibrations and soft robotics, where the state dimensionality is not finite, one could apply an extension of the current approach to soft robotics. We represent the kinetic energy as T = 1/2 \\\\dot{q}^T H(q) \\\\dot{q}, which applies to systems with finitely many particles. For soft robotics, one would need to represent the kinetic energy as a continuous function. Therefore, the Euler-Lagrange equation would not simplify to an ODE and one would need to incorporate the PDE. We are currently exploring this direction and don't see any structural problems. \\n\\n\\n4)\\nThank you for bringing up the different kinematic structures. The problems of closed-loop kinematics are mainly due to the use of the Newton-Euler formalism. In contrast, the Lagrangian Mechanics formalism applies to any non-relativistic multi-particle system with holonomic constraints. As closed-loop kinematics only require holonomic constraints, learning the dynamics of closed-loop kinematics can be achieved with DeLaN. Older works [1, 2, 3] used Lagrangian Mechanics to manually derive the dynamics of closed-loop kinematics. \\n\\nCurrently, we are looking for publicly available model files of parallel robots (*.sdf or *.urdf) and trying to include these evaluations. Up to now, we were not able to find such robot descriptions. If you are aware of such models, we would appreciate your help. \\n\\nRegarding the contact dynamics: if one can observe the contact force and the point of contact, one can include the contact forces within the learning (see point (1)). If neither is known, the learning would be too ambiguous. 
However, if one has learned the contact-free dynamics, one can compute the external forces on the end-effector and perform force control without additional sensors. Using system identification, this sensorless force control has been done by Wahrburg et al. [4]. \\n\\n\\n5) \\nYes, we are definitely planning on exploring this approach in future work. We want to use the forward model for planning and compare the performance to black-box model learning. When using the forward model, we will compare to the recent work from DeepMind and other authors. \\n\\n\\n[1] Miller, K., 1992. The Lagrange-based model of Delta-4 robot dynamics. Robotersysteme, 8, pp.49-54.\\n\\n[2] Liu, K., Lewis, F., Lebret, G., & Taylor, D., 1993. The singularities and dynamics of a Stewart platform manipulator. Journal of Intelligent and Robotic Systems, 8(3), pp.287-308.\\n\\n[3] Geng, Z., Haynes, L.S., Lee, J.D. and Carroll, R.L., 1992. On the dynamic model and kinematic analysis of a class of Stewart platforms. Robotics and Autonomous Systems, 9(4), pp.237-254.\\n \\n[4] Wahrburg, A., B\\u00f6s, J., Listmann, K. D., Dai, F., Matthias, B., & Ding, H., 2018. Motor-Current-Based estimation of Cartesian contact forces and torques for robotic manipulators and its application to force control. IEEE Transactions on Automation Science and Engineering, 15(2), pp.879-886.\"}
"{\"title\": \"Thank you for the extensive review, we have clarified the paper and addressed your questions in the comment\", \"comment\": \"Thank you for providing such an extensive review and raising these important questions. If you have further questions, please feel free to ask.\\n\\n1) \\nSorry that we have been imprecise on the naming convention. We have updated the paper to make the differences clearer. We replaced \\\"Lagrange-Euler PDE\\\" with \\\"Lagrange-Euler equation\\\" and highlight that Eq. 3 can be either a PDE or an ODE while Eq. 4 is an ODE. After Eq. 4, we removed the term PDE. Within the related work section, we use the PDE terminology if the references refer to PDEs. \\n\\n\\n2) \\nThank you for bringing up this point as this sparked further discussions and new research questions. Until now we have learned the potential forces g(q) directly as this is standard in robotic applications and for our experiments U(q) is not required. However, if one learns dU/dq and U simultaneously, one could extend the cost function with energy conservation and derive energy-based controllers. \\n\\nA quick offline verification on the simulated WAM data showed that learning dU/dq is possible but currently achieves lower performance. Right now, we cannot conclude if this lower performance is due to hyperparameter settings. Therefore, we are running hyperparameter sweeps to compare the differences. \\n\\n\\n3) \\nYes, one could just incorporate \\\\dot{q} within g and let g model a mixture of gravity and friction. However, this would contradict Lagrangian Mechanics, where g is the derivative of E_pot. We would add friction by decomposing \\\\tau into the subparts \\\\tau = \\\\tau_{motor} + \\\\tau_{external} + \\\\tau_{friction}. Simple friction models can be added using the Rayleigh dissipation function (https://en.wikipedia.org/wiki/Rayleigh_dissipation_function). 
However, as described by Albu-Sch\\u00e4ffer [1], a \\\"good\\\" friction model for robots is described by \\\\tau_{friction} = f(q, \\\\dot{q}, \\\\tau). Adding such a friction model to Lagrangian mechanics is non-trivial. Especially due to the torque dependency, the computation of the inverse dynamics is challenging. Therefore, incorporating friction would require answering the question of what a sufficiently good friction model is and would require an extensive empirical comparison of multiple friction models, which would be beyond the scope of this paper.\\n\\n\\n4) \\nYes, thank you for bringing this to our attention. We agree that performing such experiments would be interesting, especially for robots with adaptive stiffness as by Braun et al. [2]. The Barrett WAM does not provide separate motor (\\\\theta) and joint (q) positions. We are looking into performing such experiments in simulation. Below we discuss the theoretical complexity, the practical relevance for the performed experiments and the implementation difficulties. \\n\\nFrom a theoretical perspective, including this within the learning should not be too difficult. Rather than learning f^{-1}(q, \\\\dot{q}, \\\\ddot{q}) = \\\\tau one would learn f^{-1}(q, \\\\dot{q}, \\\\ddot{q}) - K (\\\\theta - q) = 0, where K is a diagonal matrix with positive entries. The structure of DeLaN could be easily adapted for this. We will try to also show this in a small experiment. \\n\\nFrom a practical perspective, using this model within the controller is more complex. As described in Equation 13.16 in the Springer Handbook of Robotics [3], the inverse model contains d^3H/dt^3, d^4q/dt^4, d^2g/dt^2, d^2(d(\\\\dot{q}^T H \\\\dot{q})/dq)/dt^2 etc. Therefore, one would need to compute the higher-order derivatives, which will cause numerical issues that in our opinion would do more harm than help. However, we definitely agree that such a model would help when planning using the forward model.\\n\\nFrom the simulation perspective, we are using PyBullet. 
To the best of our knowledge, one cannot simulate coupled joints with PyBullet. Therefore, one has two options. First, simulate the spring outside of PyBullet, but this would risk the divergence of the integration of \\\\qdd (PyBullet) and \\\\theta_dd (non-PyBullet). Second, one could replace PyBullet with MuJoCo, which can simulate coupled joints, but this would require significant implementation effort. Currently, we are evaluating the effort of both. \\n\\n\\n5) \\nYes, one could add this soft constraint as a penalty term. However, the computational overhead of the derivatives is minimal. The derivative computation (Section 4.2 & 4.3) only requires one clamping operation (for the ReLU non-linearity) and one matrix multiplication per hidden layer. This overhead does not hinder the real-time computations and, hence, we prefer to use the hard constraint rather than the soft constraint.\\n\\n\\n[1] Alin Albu-Sch\\u00e4ffer. Regelung von Robotern mit elastischen Gelenken am Beispiel der DLR-Leichtbauarme. PhD thesis, Technische Universit\\u00e4t M\\u00fcnchen, 2002.\\n\\n[2] Braun, D. J., Howard, M., & Vijayakumar, S. (2012). Exploiting variable stiffness in explosive movement tasks. Robotics: Science and Systems VII, 25.\\n\\n[3] Siciliano, B., & Khatib, O. (Eds.). (2016). Springer Handbook of Robotics. Springer.\"}",
"{\"title\": \"further author feedback, or reviewer thoughts after reading the other reviews?\", \"comment\": \"Thank you to the reviewers for the extensive reviews.\", \"authors\": \"We welcome a response.\", \"reviewers\": \"We do have high variation in the overall paper evaluation, as reflected in the assigned scores. Do the other reviews change your evaluation of the paper?\\n\\n-- your area chair\"}",
"{\"title\": \"Interesting paper on using Lagrangian formulation to speed up learning of robot model\", \"review\": \"This paper discusses learning of robot dynamics models. They propose to learn the mass matrix\\nand the potential forces, which together describe the Lagrangian mechanics of the robot. The unknown\\nterms are parametrized as a deep neural network, with some properties (such as positive definiteness)\\nhard-coded in the network structure. The experimental results show the learned inverse model being used\\nas the feed-forward term for controlling a physical robot. The results show that this approach leads to faster\\nlearning, as long as the model accurately describes the system. The paper is well written and seems free\\nof technical errors. The contribution is modest, but relevant, and could be a basis for further research. Below\", \"are_a_few_points_that_could_be_improved\": \"1) The paper uses the term partial differential equation in a non-standard way. While Eqs. 4/5 contain partial derivatives,\\nthe unknown function is q, which is a function of time only. Therefore, the Lagrangian mechanics of robot arms are seen\\nas ordinary differential equations. The current use of the PDE terms should be clarified, or removed.\\n2) It is not made clear why the potential forces are learned directly, rather than as a derivative of the potential energy. Could you discuss the advantages/disadvantages? \\n3) Somewhat related to the previous point: the paper presents learning of dissipative terms as a challenge for future works. Given that the formulation directly allows adding \\\\dot{q} as a variable in g, it seems like a trivial extension. Can you make clearer why this was not done in this paper (yet)?\\n4) The results on the physical robot arm state that the model cannot capture the cable dynamics, due to being a rigid body model. 
However, the formulation would allow modelling the cables as (non-linear) massless springs, which would probably\\nexplain a large portion of the inaccuracies. I strongly suggest running additional experiments in which the actuator and joints have a separate position, and are connected by springs. If separate measurements of joint-position and actuator position are not available on the arm, it would still be interesting to perform the experiments in simulation, and compare the\\nperformance on hardware with the analytical model that includes such springs.\\n5) The choice is made to completely hardcode various properties of the mass matrix into the network structure. It would be possible to make some of these properties softcoded. For instance, the convective term C(q,\\\\dot{q})\\\\dot{q} could be learned separately, with the property C + C^T = \\\\dot{H} encoded as a soft constraint. This would reduce the demand on computing derivatives online.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice approach, but needs to be situated within the relevant work much better\", \"review\": \"This paper looks at system identification for a multi-link robot based upon combining a neural network with the manipulator equations. Specifically, the authors propose to model the robot dynamics using the typical manipulator equations, but have a deep neural network parameterize the H(q) and g(q) matrices. They illustrate that the method can control the systems of a simulated 2-dof robot and a real Barrett WAM arm, better than a pure neural network modeling approach, PID control, or an analytic model.\\n\\nOverall, I think there is a genuinely nice application in this paper, but it's not sufficiently compared to existing approaches nor put in the proper context. There is a lot of language in the paper about encoding the prior via a PDE, but really what the authors are doing is quite simple: they are doing system identification under the standard robot manipulator equations but using a deep network to model the inertia tensor H(q) and the gravity term g(q). Learning the parameters that make up H(q) and g(q) is completely standard system identification in robotics, but it's interesting to encode these as a generic deep network (I'm somewhat surprised this hasn't been done before, though a quick search didn't turn up any obvious candidates). However, given this setting, there are several major issues with the presentation and evaluation, which make the paper unsuitable in its current form.\\n\\n1) Given the fact that the authors are really just in the domain of system identification and control, there are _many_ approaches that they should compare to. At the very least, however, the authors should compare to standard system identification techniques (see e.g., Wu et al., \\\"An overview of dynamic parameter identification of robots\\\", 2010, and references therein). 
This is especially important in the real robot case, where the authors correctly mention that the WAM arm cannot be expressed exactly by the manipulator equations; this makes it all the more important to try to identify system parameters via a data-driven approach, not with the hope of finding exactly the \\\"correct\\\" manipulator equations, but of finding some that are good enough to outperform the \\\"analytical\\\" model that the authors mention. It's initially non-obvious to me that using a generic neural network to model the H and g terms would do any better than some of these standard approaches.\\n\\n2) A lot of the derivations in the text are frankly unnecessary. Any standard automatic differentiation toolkit will be able to compute all the necessary derivatives, and for a paper such as this the authors can simply specify the architecture of the system (that they use a Cholesky factorization representation of H, with diagonals required to be strictly positive) and let everything else be handled by TensorFlow, or PyTorch, etc. The derivations in Sections 4.2 and 4.3 aren't needed.\\n\\n3) The authors keep referring to the Lagrangian equations as a PDE, and while this is true in general, for the actual form here it's just a second-order ODE; see e.g. https://en.wikipedia.org/wiki/Lagrangian_mechanics. Moreover, these are really just the standard manipulator equations for multi-link systems, and can just be denoted as such.\\n\\nDespite these drawbacks, I really do like the overall idea of the approach presented here, it's just that the authors would need to _substantially_ revise the presentation and experiments in order to make this a compelling paper. 
Specifically, if they simply present the method as a system identification approach for the manipulator equations, with the key terms parameterized by a deep network (and compare to relevant system identification approaches), I think the results here would be interesting, even if they would probably be more interesting to a robotics audience rather than a core ML audience. But as it is, the paper really doesn't situate this work within the proper context, making it quite difficult to assess its importance or significance.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Promising, simple approach to model learning. Some questions regarding generalizability to systems with complex dynamics.\", \"review\": [\"I like the simplicity of the approach in this paper (especially compared to very computationally hungry methods such as DeepMind's \\\"Graph Networks as Learnable Physics Engines for Inference and Control\\\"). The fact that the approach allows for online learning is also interesting. I very much appreciate that you tested your approach on a real robot arm!\", \"I have a number of questions, which I believe could help strengthen this paper:\", \"The decomposition of H into L^TL ensures H is positive definite; however, there are no constraints on g (gravity/external forces). How do you ensure the model doesn't degenerate into only using g and ignoring H? In the current formulation g only depends on q; however, this seems insufficient to model velocity-dependent external forces (e.g. contact dynamics). Please elaborate.\", \"How would you handle partial observability of states? Have you tried this?\", \"How would you extend this approach to soft robots or robots for which the dimensionality of the state space is unknown?\", \"Have you tested your method on systems that are not kinematic chains? How would complex contact dynamics be handled (e.g. legged robots)?\", \"It would be interesting to see more comparisons with recent work (e.g. DeepMind's).\", \"Some figures (e.g. Figure 6) are missing units on the axes. Please fix.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rkxraoRcF7 | Learning Disentangled Representations with Reference-Based Variational Autoencoders | [
"Adria Ruiz",
"Oriol Martinez",
"Xavier Binefa",
"Jakob Verbeek"
] | Learning disentangled representations from visual data, where different high-level generative factors are independently encoded, is of importance for many computer vision tasks. Supervised approaches, however, require a significant annotation effort in order to label the factors of interest in a training set. To alleviate the annotation cost, we introduce a learning setting which we refer to as "reference-based disentangling''. Given a pool of unlabelled images, the goal is to learn a representation where a set of target factors are disentangled from others. The only supervision comes from an auxiliary "reference set'' that contains images where the factors of interest are constant. In order to address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak supervisory signal provided by the reference set. During training, we use the variational inference framework where adversarial learning is used to minimize the objective function. By addressing tasks such as feature learning, conditional image generation or attribute transfer, we validate the ability of the proposed model to learn disentangled representations from minimal supervision.
| [
"Disentangled representations",
"Variational Autoencoders",
"Adversarial Learning",
"Weakly-supervised learning"
] | https://openreview.net/pdf?id=rkxraoRcF7 | https://openreview.net/forum?id=rkxraoRcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byx9HkJUXN",
"Sygjczi-xE",
"BkxDCPGjR7",
"r1l5xmMv07",
"rklyCH6I67",
"Hkxy6H6Ia7",
"rJgnOS6IT7",
"Hkg-PraIaX",
"Skx6zH6UaQ",
"ByxvxBTI6X",
"S1eRWHHgTQ",
"rJeRCc9t3m",
"r1gtKLnOhX"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1548246850435,
1544823443490,
1543346126718,
1543082738117,
1542014406854,
1542014391188,
1542014324014,
1542014296979,
1542014228882,
1542014191506,
1541588229683,
1541151446509,
1541092992906
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/Authors"
],
[
"ICLR.cc/2019/Conference/Paper797/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper797/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper797/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Response to the AC metareview overriding reviewer consensus\", \"comment\": \"We would like to thank again all the reviewers and the area chair for their participation during the reviewing period. We honestly believe that it significantly contributed to improving our work. That being said, and given the nature of OpenReview, we would like to provide a response to the AC metareview which resulted in a reject decision. We think that his/her arguments were too limited to warrant overriding the reviewer consensus for acceptance. In summary, from our point of view, the reasons appear vague, arbitrary in some aspects and/or were never raised during the reviewing period:\\n\\n\\n\\n1 (AC) \\u201cthe quantitative results with weak supervision are not a big improvement over beta-vae-like methods or Mathieu et al.\\u201d\\n \\n\\u2192 It is not clear how big improvements must be for the AC in order for this not to be used as a reason to reject a paper. Experimentally, we find that our method improves over all the compared state-of-the-art methods on the evaluated datasets. The comparison with Mathieu et al. was added following a suggestion by AnonReviewer2, who raised his/her score based on the provided results. Additionally, we show how our model can be applied to different problems (conditional image generation and attribute transfer) which cannot be addressed using the compared methods.\\n \\n\\n\\n2 (AC) \\u201ca red flag of sorts to me is that it is not very clear where the gains are coming from: the authors claim to have done a fair comparison with the various baselines, but they introduce an entirely new encoder/decoder architecture that was likely (involuntarily, but still) tuned more to their method than others.\\u201d\\n \\n\\u2192 This point was not raised in any of the reviews and, therefore, we did not have the opportunity to clarify it. 
As described in the paper, our networks are based on standard conv/deconv networks with upsampling and downsampling operations. This is the most standard architecture used in VAE-based models and the GAN literature. Moreover, our architecture uses exactly the same building blocks as [Karras et al., 2018]. We think that characterising this as an \\u201centirely new encoder/decoder architecture\\u201d is unreasonable. We did not explore other types of architecture designs, and all the evaluated methods used the same hyper-parameters without any tuning.\\n \\n\\n\\n3 (AC) \\\"the setup as presented is somewhat artificial and less general than it could be (however, this was not a major factor in my decision). It is easy to get confused by the kind of disentangled representations that this work is aiming to get.\\\"\\n \\n\\u2192 This point was raised by AnonReviewer1 during the revision. We clarified our concept of disentanglement and discussed different potential applications of our setup. We updated our paper accordingly. The reviewer was convinced by our response and raised his score to (Good paper, accept). No additional comments on this issue are provided by the AC.\\n \\n\\n\\n4 (AC) \\\"I think this has the potential to be a solid paper, but at this stage it's missing a number of ablation studies to truly understand what sets it apart from the previous work. At the very least, there is a number of architectural and training choices in Appendix D -- like the 0.25 dropout -- that require more explanation / empirical understanding and how they generalize to other datasets.\\\"\\n \\n\\u2192 No description of the \\u201cnumber of ablation studies\\u201d is provided. A single concrete point is raised here: that we do not provide data on how the used dropout rate of 0.25 generalizes to other datasets. First of all, we did not fine-tune this hyper-parameter, given that we found that the default value worked well in all our experiments. 
Moreover, the lack of in-depth evaluation of how the used dropout rates generalize to other datasets seems common across papers in ICLR/ICML/NIPS/CVPR/ICCV/ECCV that use dropout strategies, and does not appear to be a commonly accepted reason to reject papers from publication. Finally, it is worth clarifying that the \\u201cdropout strategy\\u201d is only used in our model (because it is only applied when the discriminator uses features as input) and, therefore, the results of the other baselines are not affected by this hyper-parameter.\"}",
"{\"metareview\": \"This paper proposes a method for learning disentangled representations in a relatively specific setting, defined as follows: given two datasets, one unlabeled and another that has a particular factor of variation fixed, the method will disentangle the factor of variation from the others. The reviewers found the method promising, with interesting results (qualitative & quantitative).\", \"the_weaknesses_of_the_method_as_discussed_in_the_reviews_and_after\": [\"the quantitative results with weak supervision are not a big improvement over beta-vae-like methods or Mathieu et al.\", \"a red flag of sorts to me is that it is not very clear where the gains are coming from: the authors claim to have done a fair comparison with the various baselines, but they introduce an entirely new encoder/decoder architecture that was likely (involuntarily, but still) tuned more to their method than to others.\", \"the setup as presented is somewhat artificial and less general than it could be (however, this was not a major factor in my decision). It is easy to get confused by the kind of disentangled representations that this work is aiming to get.\", \"I think this has the potential to be a solid paper, but at this stage it's missing a number of ablation studies to truly understand what sets it apart from the previous work. At the very least, there are a number of architectural and training choices in Appendix D -- like the 0.25 dropout -- that require more explanation / empirical understanding and how they generalize to other datasets.\", \"Given all of this, at this point it is hard for me to recommend acceptance of this work. 
I encourage the authors to take all this feedback into account, extend their work to more domains (the artistic-style disentangling that they mention seems like a good idea) and provide more empirical evidence about their architectural choices and their effect on the results.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"Thank you for updating your review\", \"comment\": \"We are glad to see that the reviewer has decided to update his/her score after reading our rebuttal and paper revision. Please let us know if there is any remaining clarification that you would like us to provide.\"}",
"{\"title\": \"Response to AnonReviewer1 [2/2]\", \"comment\": \"***(R1.Q2) Definitions of disentangled representations. \\n\\nWe believe that the reviewer\\u2019s concern is caused by a different interpretation of \\u201cdisentanglement\\u201d compared to ours. If we have correctly understood, the reviewer refers to a specific definition of disentangled representation implying a bijective mapping between one generative factor and a single dimension of the latent representation (i.e., the feature vector). Despite the fact that this definition has been adopted by recent unsupervised approaches [Higgins et al., 2017; Kumar et al., 2018] focusing on the disentanglement of simple generative factors, we think that this view is not appropriate for more challenging problems. For example, it is unrealistic to expect that a high-level factor such as the facial expression can be modelled by a single continuous value. \\n\\nIn our work, we adopt a more flexible interpretation of \\u201cdisentanglement\\u201d, where the information of a complex high-level factor of variation (e.g. the digit style) can be encoded into a subset of dimensions of the latent representation (i.e., the vector e in our model). Note that we can consider this complex generative factor to be a composition of simpler transformations (e.g. color, size, width) which, indeed, can be entangled in the vector e. \\n\\nIn this scenario, the disentanglement arises from the fact that the remaining factors unrelated to the style are encoded into a separate set of dimensions of the latent representation (i.e., the vector z). Under these assumptions, our definition of disentangling is coherent with the reviewer\\u2019s statement: \\u201cwhat it really addresses is separating two sets of factors into different parts of the representation\\u201d. In fact, we think that the word \\u201cseparating\\u201d could be replaced by \\u201cdisentangling\\u201d without modifying the implications. 
Note that our notion of disentangled representation has been previously employed in other works where a complex high-level generative factor is disentangled from the rest (e.g. face identity in [Donahue et al., 2018]). This being said, we agree that the sentence: \\u201cLearning disentangled representations from visual data, where high-level generative factors correspond to independent dimensions of feature vectors\\u2026\\u201d can be misleading. For this reason, we have rephrased it in the updated version of the paper. \\n\\n\\n\\n***(R1.Q3) Evaluation procedure actually measuring disentanglement.\\n\\nFollowing the discussed interpretation of disentangled representation, we think that the evaluation procedure we followed is appropriate to effectively measure the level of \\u201cdisentanglement\\u201d. This is because, as stated by the reviewer, our model \\u201chas to show that the information exists in the correct part of the space\\u201d (i.e., in the latent variable e and not in z) and, therefore, that the target factors are disentangled from the rest.\"}",
"{\"title\": \"Response to AnonReviewer2 [2/2]\", \"comment\": \"***(R2.Q4) Need for the KL terms in (5). \\u201c...What would happen if these KL terms in (5) are dropped and one simply uses SGVB to optimise the resulting loss without the need for discriminators?...\\u201d\\n\\nWe agree with the reviewer that modelling density ratios with logistic regression can be problematic and should be avoided whenever possible. However, the use of the discriminators in our method is crucial for one main reason. As stated by the reviewer, using only the reconstruction terms over the latent variables forces the model to encode information into e. However, this information does not need to be related to the target factors (i.e., the ones that are not present in the reference set of images). More concretely, the model can learn to encode most of the information into z and place \\u201creconstructable\\u201d but non-relevant information into e. This is clearly avoided when using the discriminator d(x,z) because neutral images generated from p(x|z,e^r)p(z) are forced to be similar to real \\u201creference images\\u201d. As a consequence, z cannot contain information about target factors, which must be encoded into e.\\n\\nIn order to gain more insight into this issue, we have conducted the suggested ablation study by removing the discriminators of sRB-VAE during training. Following the same experimental setup described in Sec. 5.3, the average performance of the ablated model according to the metrics shown in Table 1 is .371 and .202 for AffectNet and MNIST, respectively. Note that these results are much worse than the ones obtained with our proposed model sRBD-VAE. By visually inspecting the generated samples, we have observed that manipulating the vector e does not significantly modify the images in terms of the target factor. 
As previously discussed, this shows that the use of the reconstruction losses over the latent space is not sufficient by itself to solve the reference-based disentanglement problem. \\n\\nTo conclude, the reviewer is also referred to our response to Reviewer 3 (see R3.Q1) where we describe another advantage of using the discriminator d(x,z,e). We have added a discussion about these issues in subsection \\u201cOptimization via Adversarial Learning\\u201d of the revised paper.\\n\\n\\n\\n***(R2.Q5) \\u201cwhy not learn the likelihood variance lambda?\\u201d\\n\\n\\nIn preliminary experiments, we tried to optimize the lambda parameter during training. However, we found that at the early stages of learning, the model tended to assign a very small weight to the reconstruction loss and focus too much on the adversarial component of the loss. We solved this issue by fixing lambda.\"}",
"{\"title\": \"Response to AnonReviewer2 [1/2]\", \"comment\": \"We would like to thank the reviewer for his useful comments and suggestions. Detailed comments about specific concerns are addressed below.\\n\\n\\n\\n***(R2.Q1) Comparison with [Mathieu et al., 2016]\\n\\nWe would like to clarify that the type of supervision used in [Mathieu et al., 2016] is not equivalent to the one assumed in our problem. As discussed in R3.Q8, \\u201cthe reference-based setting is different from the scenario where information about samples sharing the same target factors is available. In particular, in our case we only know that reference images share the same label. In contrast, for the unlabelled distribution we do not have access to this information\\u201d. Note that this fact renders the original learning algorithm proposed in [Mathieu et al., 2016] inapplicable in our context. The reason is that for any unlabelled image, we should be able to sample another image with the same generative factor (e.g. the same expression), given that we need to reconstruct them by swapping their latent representation e. Intuitively, this shows that reference-based disentangling is a more challenging problem than the one addressed in [Mathieu et al., 2016]. The reason is that the amount of available supervision is lower (following the original paper's nomenclature, only one type of \\u201cid\\u201d is labelled).\\n\\nBeyond the discussed difference, we agree with the reviewer that it is interesting to evaluate how the approach presented in [Mathieu et al., 2016] behaves if only a single id is available (i.e., the reference label). For this reason, we have implemented this method using the same network architectures as in the rest of our models. Following the experimental evaluation described in Sec. 5.3, we have trained it by using the procedure suggested by the reviewer (i.e., only pairs of reference images are used during training and no labels for unlabelled images are assumed). 
Note that this implies a modification of the original learning algorithm. We have added the results in Table 1. As can be seen, with the method of [Mathieu et al., 2016] we obtain reasonable results in the AffectNet dataset. However, sRBD-VAE achieves better average accuracy. On the other hand, in the MNIST dataset, the approach of [Mathieu et al., 2016] obtains poor performance compared to most methods. These results confirm that our approach is better suited to exploit the weak supervision provided by the reference set of images. We have added this evaluation to the revised paper (see the baselines in Sec 5.2 and the discussion of the results in Sec 5.3).\\n\\nIt is also worth mentioning that an advantage of our model compared to [Mathieu et al., 2016] is that we are able to naturally address conditional image generation (Sec 5.4) by sampling latent variables from p(e). Note that in [Mathieu et al.] this is not possible given that no prior over the target factors e is imposed. \\n\\n\\n\\n***(R2.Q2) \\u201cmissing reference - Bouchacourt\\u201d\\n\\nWe have added the reference in the Related Work section.\\n\\n\\n\\n***(R2.Q3) \\u201c ...there have been more recent approaches since betaVAE and DIP-VAE (e.g. FactorVAE (Kim et al) TCVAE (Chen et al)). It would be nice to compare against these methods, not only via predictive accuracy of target factors but also using disentangling metrics specified in these papers\\u2026\\u201d\\n\\nWe thank the reviewer for pointing out these recent unsupervised methods. We have added these references to the updated version of the paper. For the sake of completeness of our evaluation, we have implemented the method proposed by [Chen et al., 2018] (note that TCVAE and FactorVAE minimize the same objective function) and run the same experiments described in Sec. 5.3 of our paper. We have added the quantitative results in Table 1 of the updated paper. 
Note that TCVAE obtains a similar average performance compared to other unsupervised approaches like bVAE or DIP-VAE. Moreover, the average results obtained by our method in both datasets are better. Therefore, our conclusions in this experiment remain unchanged.\\n\\nOn the other hand, we would like to clarify that the metrics proposed in [Chen et al., 2018; Kim et al., 2018] are specifically designed for evaluating how a single dimension of the latent representation corresponds to a single ground-truth generative factor. As we have discussed in our response to Reviewer 1 (see R1.Q2), this is not the goal of our work and we believe that the one-to-one mapping assumption is not realistic when modelling high-level generative factors. For example, it is not reasonable to expect that a single dimension of the latent vector e can convey all the information about a complex generative factor such as the facial expression. Therefore, we think that the metrics proposed in the cited works are not appropriate in our context.\"}",
"{\"title\": \"Response to AnonReviewer3 [2/2]\", \"comment\": \"***(R3.Q8) \\u201c...a fairer baseline could consider learning with the weak supervision labels (containing the information that some images have the same label)...\\u201d\\n\\nWe would like to emphasize that the reference-based setting is different from the scenario where information about samples sharing the same target factors is available. In particular, note that in our case, we only know that reference samples share the same label. In contrast, for the unlabelled distribution we do not have access to this information (e.g. which faces share the same expression). This is because our main goal is to avoid the explicit annotation of the factors of interest. As discussed in the related work, assuming supervision in terms of images sharing the same label has been considered in previous approaches [Mathieu et al., 2016; Donahue et al., 2018]. However, in \\u201creference-based disentangling\\u201d this type of supervision is not available during training and, to the best of our knowledge, no previous methods are able to naturally address this problem. Our comparison with unsupervised models is intended to show that our model is able to exploit the weak supervision provided by the reference set, and its advantages. However, as suggested by Reviewer 3, we have evaluated the method presented in [Mathieu et al., 2016] by adapting it to our \\u201creference-based\\u201d setting. Note that this method can also exploit the weak labels provided in the reference set. Please see R2.Q1 for a detailed discussion.\"}",
"{\"title\": \"Response to AnonReviewer3 [1/2]\", \"comment\": \"We thank the reviewer for his detailed feedback. In the following, we address his concerns.\\n\\n\\n***(R3.Q1) \\u201cI don\\u2019t really see how Equation (5) in symmetric KL prevents learning redundant z (i.e. z contains all information of e)\\u201d...\\u201dTo ensure that z does not contain information about e, one could add an adversarial predictor that tries to predict e from z\\u201d\\n\\nWe thank the reviewer for raising this question. We didn\\u2019t think about this potential \\u201cdegenerate\\u201d solution before (it was not observed in our experiments) and we have concluded that our model naturally avoids the case where redundant information from e is encoded into z. The rationale is as follows. Note that the classifier d(x,z,e) is trained in order to discriminate triplets {x,z,e} obtained from the distributions q(z,e|x)p(x) and p(x|z,e)p(z)p(e). On the other hand, the model encoder and generator try to make these distributions as similar as possible. Consider the scenario where a latent z sampled from q(z|x) contains (redundant) information about a sample e generated from q(e|x). In this case, z and e would be conditionally dependent. In contrast, latent variables e and z generated by p(z)p(e) are independent (given that the priors are defined by an isotropic Gaussian distribution). Therefore, our model is penalized in this case since the discriminator d(x,z,e) would easily differentiate between both distributions (by exploiting the dependency present in q(z,e|x) but not in p(z)p(e)). Interestingly, note that the \\u201cadversarial predictor\\u201d suggested by the reviewer is already implicitly implemented by the discriminator d(x,z,e).
We have discussed this issue in the updated version (second-to-last paragraph of Sec. 4.3 \\u201cOptimization via Adversarial Learning\\u201d).\\n\\n\\n\\n***(R3.Q2) \\u201cI wonder if the ground truth transformation for MNIST can be simply described as in some linear transformation on the original image...\\u201d\\n\\nThe transformations applied to the MNIST datasets are: (i) Colorization, (ii) Modification of the stroke width and (iii) Resizing + zero-padding. Apart from (i), (ii) and (iii) are not linearly dependent on the transformation parameter.\\n\\n\\n\\n***(R3.Q3) \\u201cI wonder if the proposed method works on SVHN, where you can use label information as reference supervision...\\u201c\\n\\nNote that our main motivation is to learn a disentangled representation without explicit labelling of the underlying target factors. Using the SVHN as suggested (i.e., considering the digit labels) would imply that the factors of interest are annotated and, therefore, that full or semi-supervision is provided during training. We would like to emphasize that we are focused on the weakly-supervised setting, where explicit annotations are not needed in order to disentangle the target factors.\\n\\n\\n***(R3.Q4) \\u201cI wonder if it is possible to use multiple types of reference images ...\\u201c\\n\\nIf we understand correctly, in the described setting each reference set would contain images with a specific set of \\u201cconstant\\u201d factors. In this case, we think that our model could easily address the suggested scenario by splitting the latent variables into more than two subsets and using different discriminators for each reference distribution.\\n\\nIf the reviewer has a concrete idea about a potential scenario where this setting is interesting, we would be grateful to hear it. 
We are currently exploring potential extensions and applications of our proposed model for future work.\\n\\n\\n\\n***(R3.Q5) \\u201cWhy assume that the reference distribution is delta distribution whose support has measure zero, instead of a regular Gaussian?\\u201c\\n\\nUsing a \\u201cdelta-shaped\\u201d prior over latents e allows us to model the assumption that variation factors are constant across reference images. In contrast, note that for unlabelled images the prior p(e) is indeed modelled as a regular Gaussian.\\n\\n\\n\\n***(R3.Q6) \\u201c(6), (8), (10) seems over complicated due to the semi-supervised nature of the objective. I wonder if having an additional figure would make things clearer...\\u201c\\n\\nThank you for the suggestion. We have added Fig. 5 in Appendix B in order to clarify the formulation and illustrate the training process.\\n\\n\\n\\n***(R3.Q7) \\u201cMaybe it is helpful to cite the ALICE paper (Li et al) for Equation (10). Table 1, maybe add the word \\u201crespectively\\u201d so it is clearer which metric you use for which dataset...\\u201c\\n\\nThe suggested changes are included in the updated version of the paper.\"}",
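The adversarial argument in R3.Q1 rests on the density-ratio trick: a classifier trained to tell two distributions apart recovers their log density ratio in its logit, which lets one estimate KL terms without tractable densities. A minimal self-contained sketch of that trick (plain numpy, with two one-dimensional Gaussians standing in for the model's distributions; everything here is illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two distributions whose KL divergence we want without evaluating their
# densities directly (stand-ins for q(z,e|x)p(x) and p(x|z,e)p(z)p(e)).
p = rng.normal(0.0, 1.0, 20000)   # plays the role of the "encoder" side
q = rng.normal(1.0, 1.5, 20000)   # plays the role of the "generator" side

# The log density ratio of two Gaussians is quadratic in x, so the
# feature map [1, x, x^2] makes a linear classifier sufficient.
def feats(x):
    return np.stack([np.ones_like(x), x, x * x], axis=1)

X = np.concatenate([feats(p), feats(q)])
y = np.concatenate([np.ones(len(p)), np.zeros(len(q))])  # 1 = sample from p

# Logistic regression by gradient descent: at the optimum, the logit
# X @ w approximates log p(x) / q(x) (the density-ratio trick).
w = np.zeros(3)
for _ in range(5000):
    residual = 1.0 / (1.0 + np.exp(-(X @ w))) - y
    w -= 0.2 * (X.T @ residual) / len(y)

# KL(p || q) = E_p[log p(x)/q(x)], estimated via the classifier's logit.
kl_est = float(np.mean(feats(p) @ w))
# Closed form for these two Gaussians, for comparison:
kl_true = np.log(1.5) + (1.0**2 + (0.0 - 1.0)**2) / (2 * 1.5**2) - 0.5
```

The same mechanism is what penalizes a dependency between z and e: statistical structure present under one distribution but not the other shows up in the estimated ratio.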
"{\"title\": \"Response to AnonReviewer1 [1/2]\", \"comment\": \"We would like to thank the reviewer for his useful comments and remarks. Detailed discussion about his specific concerns is provided below.\\n\\n\\n***(R1.Q0) Clarification about quantitative results\\n\\nFirst of all, we would like to clarify that our experiments also include quantitative evaluation over the AffectNet. From the reviewer's description in the \\u201cPros\\u201d section, it could be interpreted that we only provide qualitative results over this dataset. \\n\\n\\n\\n***(R1.Q1) Practical applications of the learning setting: \\u201c..The problem that this work solves seems somewhat artificial...\\u201d\\n\\nWe strongly believe that addressing the introduced problem can be useful in different scenarios. One of the motivations of our experiments on the AffectNet was to show a concrete advantage of this type of supervision in a practical case. Note that in facial behavior analysis/synthesis, large-scale datasets are typically very hard to annotate. The reason is that facial gestures depend on a combination of a large number of facial muscle activations and their corresponding intensities (i.e. Action Units) [Ekman, 1997]. Therefore, fine-grained annotation of facial gestures is very tedious and requires expert coders. By contrast, collecting a large data set of neutral faces is much easier and can be carried out by non-expert annotators. Another interesting application that we plan to explore in future work is \\u201cweakly-supervised\\u201d artistic-style disentangling. In this case, we will consider the unlabelled dataset to be a collection of paintings (containing a large number of styles that do not need to be labelled). On the other hand, we will consider the reference samples as images with a \\u201cconstant\\u201d style (real photographs). Note that in this case, the reference dataset would be almost free to collect. 
By training our model on this data, we would be able to learn a latent representation of the painting styles with no supervision and manipulate it in order to transfer styles, interpolate them or synthetically generate new ones. Following the same idea, another potential application where the reference-based supervision could be useful is automatic colorization of grayscale photographs. In this case, multiple colorizations for the same picture could be synthesized by injecting random noise into the latent variable e. Note that in this scenario, the reference images would be obtained by removing the color of natural images (forming the unlabelled set). Again, in this application the reference set would be very easy to collect.\"}",
"{\"title\": \"General response to the reviewers\", \"comment\": \"We thank all the reviewers for their constructive feedback which, honestly, has been very useful for improving the quality of our work. We have uploaded a revised version of our paper where we have incorporated additional material and suggestions. The main changes are summarised as follows:\\n\\n***Sec. 1:\\n- Added discussion about the advantages of reference-based supervision in facial expression analysis/synthesis (AnonReviewer1)\\n- Rephrased definition of disentangled representations. (AnonReviewer1)\\n***Sec. 4\\n- Added discussion about the need for discriminators in our model (AnonReviewer2 & AnonReviewer3)\\n***Experiments\\n- Added comparison and discussion with [Mathieu et al., 2016] and [Chen et al., 2018] (AnonReviewer2)\\n***Appendix\\n- Figure added to clarify the model and training procedure (AnonReviewer3)\\n***Added suggested references by the reviewers and other minor comments\\n\\n\\nMore detailed discussion about these and other issues is provided to each reviewer independently. We hope that this helps to address the reviewers\\u2019 concerns and, if considered, raise their final scores. Please do not hesitate to ask for more clarifications if needed.\"}",
"{\"title\": \"Interesting approach, somewhat artificial setup, limited interpretation of \\\"disentangling representation learning\\\"\", \"review\": \"The authors address the problem of representation learning in which data-generative factors of variation are separated, or disentangled, from each other. Pointing out that unsupervised disentangling is hard despite recent breakthroughs, and that supervised disentangling needs a large amount of carefully labeled data, they propose a \\u201cweakly supervised\\u201d approach that does not require explicit factor labels, but instead divides the training data into two subsets. One set, the \\u201creference set\\u201d, is known to the learning algorithm to leave a set of generative \\u201ctarget factors\\u201d fixed at one specific value per factor, while the other set is known to the learning algorithm to vary across all generative factors. The problem setup posed by the authors is to separate the corresponding two sets of factors into two non-overlapping sets of latents.\", \"pros\": \"To address this problem, the authors propose an architecture that includes a reverse KL-term in the loss, and they show convincingly that this approach is indeed successful in separating the two sets of generative factors from each other. This is demonstrated in two different ways. First, quantitatively on a modified MNIST dataset, showing that the information about the target factors is indeed (mostly) in the set of latents that are meant to capture them. Second, qualitatively on the modified MNIST and on a further dataset, AffectNet, which has been carefully curated by the authors to improve the quality of the reference set. 
The qualitative results are impressive and show that this approach can be used to transfer the target factors from one image onto another.\\n\\nTechnically, this work combines and extends a set of interesting techniques into a novel framework, applied to a new way of disentangling two sets of factors of variation with a VAE approach.\", \"cons\": \"The problem that this work solves seems somewhat artificial, and the training data, while less burdensome than having explicit labels, is still difficult to obtain in practice. More importantly, though, both the title and the start of both the abstract and the introduction are somewhat misleading. That\\u2019s because this work does not actually address disentangling in the sense of \\u201cLearning disentangled representations from visual data, where high-level generative factors correspond to independent dimensions of feature vectors\\u2026\\u201d What it really addresses is separating two sets of factors into different parts of the representation, within each of which the factors can be, and very likely are, entangled with each other.\\n\\nRelated to the point that this work is not really about disentangling, the quantitative comparisons with completely unsupervised baselines are not really that meaningful, at least not in terms of what this work sets out to do. All it shows is whether information about the target factors is easily (linearly) decodable from the latents, which, while related to disentangling, says little about the quality of it. On the positive side, this kind of quantitative comparison (where the authors\\u2019 approach has to show that the information exists in the correct part of the space) is not pitted unfairly against the unsupervised baselines.\\n\\n===\", \"update\": \"The authors have made a good effort to address the concerns raised, and I believe the paper should be accepted in its current form. 
I have increased my rating from 6 to 7, accordingly.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper proposes reference-based VAEs, which consider learning semantically meaningful features with weak supervision. The latent variable contains two parts, one related to the reference set and the other irrelevant. To prevent degenerate solutions, the paper proposes to use a reverse KL, resulting in an ALICE-style objective. The paper demonstrates interesting empirical results on feature prediction, conditional image generation and image synthesis.\\n\\nI don\\u2019t really see how Equation (5) in symmetric KL prevents learning redundant z (i.e. z contains all information of e). It seems one could have both KL terms near zero but also have p(x|z, e) = p(x|z)? One scenario would be the case where z contains all the information about e (which learns the reference latent features), so we have redundant information in z. In this case, the learned features e are informative but the decoder does not use e anyways. To ensure that z does not contain information about e, one could add an adversarial predictor that tries to predict e from z. Note that this cannot be detected by the feature learning metric because it ignores z for RbVAE during training.\\n\\nThe experiments on conditional image generation look interesting, but I wonder if the ground truth transformation for MNIST can be simply described as in some linear transformation on the original image. I wonder if the proposed method works on SVHN, where you can use label information as reference supervision. Moreover, I wonder if it is possible to use multiple types of reference images, but fewer images in each type, to reach comparable or even better performance.\", \"minor_points\": [\"Why assume that the reference distribution is delta distribution whose support has measure zero, instead of a regular Gaussian?\", \"(6), (8), (10) seems over complicated due to the semi-supervised nature of the objective. 
I wonder if having an additional figure would make things clearer.\", \"Maybe it is helpful to cite the ALICE paper (Li et al) for Equation (10).\", \"Table 1, maybe add the word \\u201crespectively\\u201d so it is clearer which metric you use for which dataset.\", \"I wonder if it is fair enough to compare feature prediction with VAE and other models since they do not use any \\u201cweak supervision\\u201d; a fairer baseline could consider learning with the weak supervision labels (containing the information that some images have the same label). The improvement on AffectNet compared to regular VAE does not look amazing given the additional weak supervision.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Results are promising, but missing comparison to an established method. And the loss seems more complicated than it needs to be.\", \"review\": \"Summary: Given two sets of data, where one is unlabelled and the other is a reference data set with a particular factor of variation that is fixed, the approach disentangles this factor of variation from the others. The approach uses a VAE whose latents are split into e that represents the factor of variation and z that represents the remaining factors. A symmetric KL loss that is approximated using the density-ratio trick is optimised for the learning, and the method is applied to MNIST digit style disentangling and AffectNet facial expression disentangling.\", \"pros\": [\"Clearly written\", \"Results look promising, both quantitative and qualitative.\"], \"cons\": [\"Mathieu et al. disentangle a specific factor from others without explicit labels but by drawing two images with the same value of the specified factor (i.e. drawing from the reference set) and also drawing a third image with any value of the specified factor (i.e. drawing from the unlabelled set). Hence their approach is directly applicable to the problem at hand in the paper. Although Mathieu et al. use digit/face identity as the shared factor, their method is directly applicable to the case where the shared factor is digit style/facial expression. Hence it appears to me that it should be compared against.\", \"missing reference - Bouchacourt - explicit labels aren\\u2019t given and data is grouped where each group shares a factor of var. But here the data is assumed to be partitioned into groups, so there is no equivalent to the unlabelled set, hence difficult to compare against for the outlined tasks.\", \"Regarding comparison against unsupervised disentangling methods, there have been more recent approaches since betaVAE and DIP-VAE (e.g. FactorVAE (Kim et al.), TCVAE (Chen et al.)). 
It would be nice to compare against these methods, not only via predictive accuracy of target factors but also using disentangling metrics specified in these papers.\", \"Other Qs/comments\", \"the KL terms in (5) are intractable due to the densities p^u(x) and p^r(x), hence two separate discriminators need to be used to approximate two separate density ratios, making the model rather large and complicated with many moving parts. What would happen if these KL terms in (5) are dropped and one simply uses SGVB to optimise the resulting loss without the need for discriminators? Usually discriminators tend to heavily underestimate density ratios (See e.g. Rosca et al), especially densities defined on high dimensions, so it might be best to avoid them whenever possible. The requirement of adding reconstruction terms to the loss in (10) is perhaps evidence of this, because these reconstruction terms are already present in the loss (3) & (5) that the discriminator should be approximating. So the necessity of extra regularisation of these reconstruction terms suggests that the discriminator is giving poor estimates of them. The reconstruction terms for z,e in (5) appear sufficient to force the model to use e (which is the motivation given in the paper for using the symmetric KL), akin to how InfoGAN forces the model to use the latents, so the necessity of the KL terms in (5) is questionable and appears to need further justification and/or ablation studies.\", \"(minor) why not learn the likelihood variance lambda?\", \"************* Revision *************\", \"I am convinced by the rebuttal of the authors, hence have modified my score accordingly.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByMVTsR5KQ | Adversarial Audio Synthesis | [
"Chris Donahue",
"Julian McAuley",
"Miller Puckette"
] | Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. WaveGAN is capable of synthesizing one second slices of audio waveforms with global coherence, suitable for sound effect generation. Our experiments demonstrate that—without labels—WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. We compare WaveGAN to a method which applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising. | [
"audio",
"waveform",
"spectrogram",
"GAN",
"adversarial",
"WaveGAN",
"SpecGAN"
] | https://openreview.net/pdf?id=ByMVTsR5KQ | https://openreview.net/forum?id=ByMVTsR5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hyew49NXxN",
"r1x7zyjOam",
"ByxwkkouTX",
"r1lL9-zDTX",
"Skgki1VfT7",
"BJxnO07zTQ",
"r1gFfA7GaX",
"SJeGDaQMaQ",
"HkeNkp5xTQ",
"S1lL5vpRhm",
"HJgpLMhn37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544927790742,
1542135563319,
1542135518960,
1542033805960,
1541713815095,
1541713523766,
1541713425476,
1541713242334,
1541610715524,
1541490574017,
1541354068663
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper796/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper796/Authors"
],
[
"ICLR.cc/2019/Conference/Paper796/Authors"
],
[
"ICLR.cc/2019/Conference/Paper796/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper796/Authors"
],
[
"ICLR.cc/2019/Conference/Paper796/Authors"
],
[
"ICLR.cc/2019/Conference/Paper796/Authors"
],
[
"ICLR.cc/2019/Conference/Paper796/Authors"
],
[
"ICLR.cc/2019/Conference/Paper796/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper796/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper796/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a GAN model to synthesize raw-waveform audio by adapting the popular DC-GAN architecture to handle audio signals. Experimental results are reported on several datasets, including speech and instruments.\\n\\nUnfortunately this paper received two low-quality reviews, with little signal. The only substantial review was mildly positive, highlighting the clarity, accessibility and reproducibility of the work, and expressing concerns about the relative lack of novelty. The AC shares this assessment. The paper claims to be the first successful GAN application operating directly on waveforms. Whereas this is certainly an important contribution, it is less clear to the AC whether this contribution belongs to a venue such as ICLR, as opposed to ICASSP or ISMIR. This is a borderline paper, and the decision is ultimately relative to other submissions with similar scores. In this context, given the mainstream popularity of GANs for image modeling, the AC feels this paper can help spark significant further research in adversarial training for audio modeling, and therefore recommends acceptance. I also encourage the authors to address the issues raised by R1.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"interesting application of GANs to audio, may spark further research.\"}",
"{\"title\": \"Author Response to Reviewer #3 (2/2)\", \"comment\": \"\\u201c\\u201c\\u201cPhase shuffle increases Inception scores substantially (4.12->4.67) in WaveGAN, but deteriorate Inception score in SpecGAN. And there is no discussion about this.\\u201d\\u201d\\u201d\\n\\nApplied to spectrograms, phase shuffle is a radically different operation than it is for waveforms because spectrograms have more compact temporal axes, and we fill in jittered samples with padding. This means that, in the worst case, a SpecGAN discriminator with minimum phase shuffle (n=1) may be observing 468ms (nearly half the example) of padded waveform. On the other hand, a WaveGAN discriminator with n=1 observes a worst case of 83ms of padded waveform.\\n\\nWe have added a sentence to our paper: \\u201cPhase shuffle decreased the inception score of SpecGAN, possibly because the operation has an exaggerated effect when applied to the compact temporal axis of spectrograms.\\u201d\"}",
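For concreteness, phase shuffle — perturbing a layer's activations by a random integer shift in [-n, n] along the time axis, filling the vacated samples by reflection — can be sketched as follows (a numpy toy version based on our reading of the paper's description; the real implementation operates on batched framework tensors):

```python
import numpy as np

def phase_shuffle(x, n, shift=None, rng=None):
    """Shift activations x of shape (time, channels) by a random integer
    in [-n, n] along the time axis, filling vacated samples by reflecting
    the boundary.  Output has the same shape as the input."""
    if shift is None:
        rng = rng if rng is not None else np.random.default_rng()
        shift = int(rng.integers(-n, n + 1))
    if shift == 0:
        return x.copy()
    if shift > 0:  # delay the signal; reflect the leading edge
        pad = x[1:shift + 1][::-1]
        return np.concatenate([pad, x[:-shift]], axis=0)
    # shift < 0: advance the signal; reflect the trailing edge
    pad = x[shift - 1:-1][::-1]
    return np.concatenate([x[-shift:], pad], axis=0)
```

For example, `phase_shuffle(np.arange(8.).reshape(8, 1), n=2, shift=2)` returns the single channel [2, 1, 0, 1, 2, 3, 4, 5].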
"{\"title\": \"Author Response to Reviewer #3 (1/2)\", \"comment\": \"Thank you for elaborating. We still do not understand what specifically caused you to *change* your initial score of 6 to a 5. Respectfully, these criticisms appear to be post-hoc justification for your updated score. Nevertheless, we will address your concerns below, and have updated the paper with minor revisions based on your feedback:\\n\\n** Clarifying contributions **\\n\\n\\u201c\\u201c\\u201cIf as said in the response, the concrete methodological contributions are phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). At first reading of the paper, these contributions are not clear. These statements are not presented until Section 3.3 and Section 5 (EXPERIMENTAL PROTOCOL). In the Abstract and Introduction, it is said that the barrier to success application of GANs to audio generation is the non-invertible spectral representation. \\u201d\\u201d\\u201d\\n\\nWhile we outlined additional methodological contributions (e.g. phase shuffle) in response to your initial review, our *primary* contribution is still a GAN that operates on raw audio waveforms. Before this paper, the ability of GANs to generate one dimensional time series data had not been demonstrated.\\n\\n** WaveGAN vs SpecGAN **\\n\\n\\u201c\\u201c\\u201cThe non-invertible issue is not a issue.\\u201d\\u201d\\u201d\\n\\nSimply put, the non-invertibility of SpecGAN *is* an issue. If you naively apply image GANs to audio generation (by operating on spectrograms i.e. SpecGAN), the non-invertibility of the spectrograms is a major barrier to downstream usability because the resultant audio quality is atrocious (see links below). 
While humans are able to label digits generated by SpecGAN with higher accuracy than those generated by WaveGAN, the human-assessed subjective sound quality of SpecGAN is worse (and a simple listening test confirms this).\\n\\nBy operating directly on waveforms, our WaveGAN method achieves higher audio quality, is simpler to implement, and is a first for generative modeling of audio using GANs.\\n\\nWaveGAN (recognizable and better audio quality): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/wavegan_sc09.wav\\nSpecGAN with *approximate* inversion (recognizable but poor audio quality): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/specgan_sc09.wav\\n\\n** Presentation of qualitative ratings **\\n\\n\\u201c\\u201c\\u201cAround 60% accuracy for generated data is not a strong evidence that WaveGAN/SpecGAN as presented in this paper is promising. Thinking about generating digit images 0-9 by training GANs in MNIST. The labeling accuracy for generated images would be much higher than 60%.\\u201d\\u201d\\u201d\\n\\nIt is unfair to compare our results to the hypothetical human labeling performance of digits generated by a GAN trained on MNIST. While MNIST may have the same number of semantic modes (10) as our SC09 digit dataset, these datasets are quite different in terms of dimensionality. Images in MNIST can be seen as vectors in 784-dimensional (28x28) space, whereas waveforms in SC09 are vectors in 16000-dimensional space. Higher dimensionality does not necessarily equate to greater difficulty for generative modeling, but it certainly should discourage direct comparison.\\n\\nWe argue that our results are indeed \\u201cpromising\\u201d. We developed and compared multiple methods for generating audio waveforms with GANs, a first for the field. Our results are analogous to early papers in image generation with GANs (e.g. 
DCGAN from ICLR 2016), and such results laid groundwork for remarkable breakthroughs in high-resolution image synthesis.\\n\\n** Clarifying inception scores **\\n\\n\\u201c\\u201c\\u201cIf WaveGAN is what the paper introduces to overcome the non-invertible issue, it is confusing to see that SpecGAN outperforms WaveGAN by a large margin in Inception score. And I think, 58% accuracy for WaveGAN vs 66% for SpecGAN cannot be said to be similar. The non-invertible issue is not a issue.\\u201d\\u201d\\u201d\\n\\nWhile many GAN papers use inception score as a primary evaluation metric, we state that our intention is to use human evaluations as our primary metric: \\u201cWhile inception score is a useful metric for hyperparameter validation, our ultimate goal is to produce examples that are intelligible to humans. To this end, we measure the ability of human annotators...\\u201d\\n\\nIn our updated manuscript, we additionally hypothesize a reason behind the discrepancy between inception scores and subjective quality assessments: \\u201cthis discrepancy [between inception score and human assessments of quality] can likely be attributed to the fact that inception scores are computed on spectrograms while subjective quality assessments are made by humans listening to waveforms.\\u201d\\n\\nFurthermore, while the focus of our paper is on WaveGAN as it is a non-trivial application of GANs to audio generation, we acknowledge that spectrogram-based methods also achieve reasonable results for our task despite audio quality issues: \\u201cWe see promise in both waveform and spectrogram audio generation with GANs; our study does not suggest a decisive winner.\\u201d\"}",
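For reference, given a trained classifier's softmax outputs over generated examples, the inception score referred to throughout reduces to a few lines (here `probs` is a hypothetical [num_examples, num_classes] array of posteriors; the paper computes these with a classifier trained on spectrograms):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception score exp(E_x[KL(p(y|x) || p(y))]) from classifier
    posteriors probs of shape (num_examples, num_classes).  High scores
    require confident per-example posteriors AND a broad marginal p(y)."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal label distribution
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

A generator covering all ten SC09 digits with confident posteriors scores near 10 (the number of classes); a collapsed generator scores 1.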
"{\"title\": \"More explanations on my scoring\", \"comment\": \"Thanks for your clarifications. Here are my thoughts on modifying the score.\\n\\n\\u201c\\u201c\\u201c the algorithmic contribution is limited. \\u201d\\u201d\\u201d\\n\\nIf as said in the response, the concrete methodological contributions are phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). At first reading of the paper, these contributions are not clear. These statements are not presented until Section 3.3 and Section 5 (EXPERIMENTAL PROTOCOL). In the Abstract and Introduction, it is said that the barrier to success application of GANs to audio generation is the non-invertible spectral representation. \\n\\nIf WaveGAN is what the paper introduces to overcome the non-invertible issue, it is confusing to see that SpecGAN outperforms WaveGAN by a large margin in Inception score. And I think, 58% accuracy for WaveGAN vs 66% for SpecGAN cannot be said to be similar. The non-invertible issue is not a issue.\\n\\nPhase shuffle increases Inception scores substantially (4.12->4.67) in WaveGAN, but deteriorate Inception score in SpecGAN. And there is no discussion about this.\\n\\nIt is appreciated that the paper presents a nice effort to apply GANs to audio generation. But the presentation should be improved to make clearer the concrete methodological contributions and to present more consistent results.\\n\\n\\u201c\\u201c\\u201c Qualitative ratings are poor. \\u201d\\u201d\\u201d\\n\\nAround 60% accuracy for generated data is not a strong evidence that WaveGAN/SpecGAN as presented in this paper is promising. Thinking about generating digit images 0-9 by training GANs in MNIST. The labeling accuracy for generated images would be much higher than 60%. GAN based audio synthesis is interesting and should be promising. 
But the results shown in this paper do not fully validate this.\\n\\n\\n=========== comments after reading response ===========\\n\\nThe reviewer would like to thank the authors for their response, which clarifies some unclear issues. However, the response does not address my main concern about the algorithmic contribution of the proposed method. \\n\\n> While we outlined additional methodological contributions (e.g. phase shuffle) in response to your initial review, our *primary* contribution is still a GAN that operates on raw audio waveforms. Before this paper, the ability of GANs to generate one dimensional time series data had not been demonstrated.\\n\\nThis seems to be an overstatement. Pascual et al. (2017) (SEGAN) have already shown the ability of GANs to conditionally generate one dimensional time series data. Instead of simply saying that \\\"Pascual et al. (2017) apply GANs to raw audio speech enhancement.\\\", it would be better to provide more relevant comparisons, inform the readers that the difference between Pascual et al. (2017) and this paper is conditional generation vs unconditional generation, and clarify the difficulty in unconditional generation.\\n\\nThe paper consists of interesting efforts and contributions. I would like to suggest the authors move the contributions of phase shuffle (Section 3.3) and the learned post processing filters (Appendix B) to the foreground. This presentation problem makes me hold my score.\"}",
"{\"title\": \"Author Response to Reviewer #1\", \"comment\": \"Thank you for your thoughtful comments and suggestions. We will respond to each of your points below.\\n\\n** Explicit mention of methodological limitations **\\n\\nWe have updated the abstract and introduction to clarify that our model produces fixed-length results. We added the following sentence to our abstract \\u201cWaveGAN is capable of synthesizing one second audio waveforms with temporal coherence, suitable for sound effect generation.\\u201d We also added a similar clarification to paragraph 5 of the introduction (specifying that the generated waveforms are one second in length).\\n\\n** Justification for spectrogram pre-processing **\\n\\n\\nWe added justification for our spectrogram preprocessing to the last paragraph of page 4.\\n\\n** Discussion of existing methods (e.g. WaveNet) **\\n\\n\\u201c\\u201c\\u201cThe paper dismisses existing generative methods early in the evaluation phase \\u2026 it would have been beneficial to discuss and understand the failures of existing methods in more detail to convince the reader that a fair attempt has been made to getting competitors to work before leaving them out entirely\\u201d\\u201d\\u201d\\n\\nWe had originally included some of these details in our paper but they were cut for brevity. We agree that we cut too much, and have added details back into the paper in the form of a new Appendix section (Appendix C) with a pointer from the main paper. A summary follows:\\n\\nHow autoregressive waveform models (e.g. WaveNet) factor into the story and evaluation of our paper is a tricky subject, and one that we tried to handle thoughtfully. First and foremost: *the two public implementations of WaveNet that we tried simply failed to produce reasonable results* (sound examples can be heard at the bottom of http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com ). 
We did informally pre-screen these results ourselves (and you can as well) and concluded that they were clearly noncompetitive. We also calculated Inception scores for these experiments: they were 1.067 +- 0.045 and 1.293 +- 0.027 respectively.\\n\\nWe reasoned that including these (poor) numbers in our results table would send the wrong message to readers. Namely, it would appear that we are claiming our method works better than WaveNet. *This is NOT a claim that we are attempting to make*, as WaveNet was developed for a different problem (text-to-speech) than the one we are focusing on (learning the semantics of short audio clips). WaveNet additionally has no concept of a latent space, which would not allow for the same steerable exploration of sound effects that our model aspires to achieve (outlined in the introduction). Furthermore, we expect that the proprietary implementation of WaveNet would produce something more reasonable for our spoken digits task, but unfortunately we do not have access to it.\\n\\n** User study clarification **\\n\\n\\u201c\\u201c\\u201c It is unclear to me how many people annotated the individual samples? \\u201d\\u201d\\u201d\\n\\nWe have 300 examples of each digit, resulting in 3000 total labeling problems (name the digit 1-10). We give these to 300 annotators in random batches of 10 examples, and ask for qualitative assessments at the end of each batch. Accordingly, we have 300 responses to each qualitative metric (quality, easy, diversity). Standard deviations for MOS scores are around 1 for each category, resulting in small standard errors (~0.06) for n=300. 
We have added the standard deviations to our paper table and updated the text in Section 6.3 to clarify these details.\\n\\n\\u201c\\u201c\\u201c Consider including a reflection on (or perhaps even test statistically) the alignment between the qualitative diversity/quality scores and the subjective ratings to justify the use of the objective scores in the training/selection process \\u201d\\u201d\\u201d\\n\\nThe evaluation of generative models is a fraught topic and the lack of correlation between quantitative and qualitative metrics is known (see \\u201cA note on the evaluation of generative models\\u201d Theis et al. ICLR 2016). In the scope of our work, we do not have enough data points (only three for the expensive Mechanical Turk evaluations) to reach substantive conclusions about the correlation between e.g. Inception score and mean opinion scores for quality.\\n\\nWe hypothesize that the discrepancy between our quantitative metrics (Inception score, nearest neighbor comparisons) and subjective metrics (MOS scores) is due to the fact that the former are computed from spectrograms while the latter are from humans listening to waveforms. Unfortunately, evaluation of Inception score in the waveform domain was impractical as we were unable to train a waveform domain classifier that achieved reasonable accuracy on this classification task (note that we mention in our abstract that audio classifiers usually operate on spectrograms). However, we have updated our discussion to clarify: \\u201cThis discrepancy can likely be attributed to the fact that inception scores are computed on spectrograms while subjective quality assessments are made by humans listening to waveforms.\\u201d\"}",
"{\"title\": \"Author Response to Reviewer #2\", \"comment\": \"Thank you for highlighting that our experimental results are promising. As you mentioned, we state in our paper that \\u201cthough our evaluation focuses on a speech generation task, we note that it is not our goal to develop a text-to-speech synthesizer.\\u201d *We are primarily targeting generation of novel sound effects as our task.* We think this is an important task with immediate application to creative domains (e.g. music production, film scoring) and is orthogonal to the task of synthesizing realistic speech from transcripts. Our model is already capable of producing convincing results on this task for several different sound domains. Furthermore, whereas the goals for text to speech are to synthesize a given transcript, we are providing a method which enables user-driven content generation through exploration of a compact latent space of sound effects.\\n\\nOur purpose for focusing our evaluation on a speech generation task is to enable straightforward annotating for humans on Mechanical Turk. From the paper: \\u201cWhile our objective is sound effect generation (e.g. generating drum sounds), human evaluation for these tasks would require expert listeners. Therefore, we also consider a speech benchmark, facilitating straightforward assessment by human annotators.\\u201d\"}",
"{\"title\": \"Author Response to Reviewer #3\", \"comment\": \"Thank you for your feedback. We appreciate that you found our application to be interesting. We will address your criticisms in order.\\n\\nWe noticed you changed your score from a 6 to a 5 without updating the text of your review. We would be happy to address your concerns if you can provide additional context as to the reasoning behind your rating change.\\n\\n\\u201c\\u201c\\u201c the algorithmic contribution is limited. \\u201d\\u201d\\u201d\\n\\nWe would like to reiterate that our paper is the first to apply GANs to audio generation which is not as straightforward as simply adapting existing models. Specifically, we believe we have made concrete methodological contributions such as phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). In particular, phase shuffle was observed to increase Inception scores substantially (4.1->4.7), and, to our ears, made the difference between spoken digits that were intelligible and those that were unintelligible.\\n\\nSpoken digits from WaveGAN **with** phase shuffle (more intelligible): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/quant_wavegan_ps2.wav\\nSpoken digits from WaveGAN **without** phase shuffle (less intelligible): http://iclr-wavegan-results.s3-website-us-east-1.amazonaws.com/quant_wavegan.wav\\n\\n\\u201c\\u201c\\u201c Qualitative ratings are poor. \\u201d\\u201d\\u201d\\n\\nAs our task seeks to evaluate how well GANs can capture the semantic modes (vocabulary words in this case) of the training data, the primary qualitative metric to pay attention to should be the labeling accuracy. 
We believe our results of around 60% accuracy for generated data show that our approach is promising (note that random chance would be 10%).\\n\\nOn the subject of the qualitative ratings, our primary goal with this work is to provide a reasonable first pass at this problem, as well as define a task with clear and reproducible evaluation methodology to allow ourselves and others to iterate further. We believe our qualitative results are adequate, but note that improving these scores is a promising avenue for future work by integrating recent breakthroughs in image processing such as spectral normalization (Miyato et al. ICLR 2018) and progressive growth (Karras et al. ICLR 2018).\\n\\n\\u201c\\u201c\\u201c The important problem of generating variable-length audio is untouched. \\u201d\\u201d\\u201d\\n\\nWe were the first to tackle fixed-length audio generation with GANs, a task which is already useful for application in several creative domains that we mention in the paper (music production, film scoring). We hope to build on our results in future work to address the challenging problem of generating variable-length audio.\"}",
"{\"title\": \"Author Response to Reviewers\", \"comment\": \"We would like to thank all of the reviewers for their thoughtful comments and suggestions. We have uploaded a new version of our manuscript with improvements based on reviewer feedback. Reviews were all positive for our paper (though one reviewer has since lowered their score without explanation), with reviewers highlighting the promising nature of our results as well as the clarity and reproducibility of our paper. We will respond to specific comments from each reviewer separately. If reviewers would like to provide additional context behind their scores we would be happy to provide feedback.\"}",
"{\"title\": \"Interesting application, limited algorithmic contribution\", \"review\": \"This paper applies GANs for unsupervised audio generation. Particularly, DCGAN-like models are applied for generating audio. This application is interesting, but the algorithmic contribution is limited.\\n \\nQualitative ratings are poor. The important problem of generating variable-length audio is untouched.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper proposes WaveGAN for unsupervised synthesis of raw-wave-form audio\", \"review\": \"This paper proposes WaveGAN for unsupervised synthesis of raw-wave-form audio and SpecGAN that based on spectrogram. Experimental results look promising.\\n\\nI still believe the goal should be developing a text-to-speech synthesizer, at least one aspect.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review - Adversarial Audio Synthesis\", \"review\": [\"*Pros:*\", \"Easily accessible paper with good illustrations and a mostly fair presentation of the results (see suggestions below).\", \"It is a first attempt to generate audio with GANs which results in an efficient scheme for generating short, fixed-length audio segments of reasonable (but not high) quality.\", \"Human evaluations (using crowdsourcing) provides empirical evidence that the approach has merit.\", \"The paper appears reproducible and comes with data and code.\", \"*Cons*:\", \"Potentially a missing comparison with existing generative methods (e.g. WaveNet). See comments/questions below **\", \"The underlying idea is relatively straightforward in that the proposed methods is a non-trivial application of already known techniques from ML and audio signal processing.\", \"*Significance*: The proposed GAN-based audio generator is an interesting step in the development of more efficient audio generation and it is of interest to a subcommunity of ICLR as it provides a number of concrete techniques for applying GANs to audio.\", \"*Further comments/ questions:*\", \"Abstract/introduction: I\\u2019d suggest being more explicit about the limitations of the method, i.e. you are currently able to generate short and fixed-length audio.\", \"SpecGAN (p 4): I\\u2019d suggest including some justification of the chosen pre-processing of spectrograms (p. 4, last paragraph).\", \"** Evaluation: The paper dismisses existing generative methods early in the evaluation phase but the justification for doing so is not entirely clear to me: Firstly, if the inception score is used as an objective criterion it would seem reasonable to include the values in the paper. Secondly, as inception scores are based on spectrograms it could potentially favour methods using spectrograms directly (SpecGAN) or indirectly (WaveGAN, via early stopping) thus putting the purely sample based methods (e.g. 
WaveNet) at a disadvantage. It would seem fair to pre-screen the audio before dismissing competitors instead of solely relying on potentially biased inception scores (which was probably also done in this work, but not clearly stated\\u2026)? Finally, while not the aim of the paper, it would have been beneficial to discuss and understand the failures of existing methods in more detail to convince the reader that a fair attempt has been made to getting competitors to work before leaving them out entirely.\", \"Results/analysis: It is unclear to me how many people annotated the individual samples? What is the standard deviation over the human responses (perhaps include in tab 1)? Consider including a reflection on (or perhaps even test statistically) the alignment between the qualitative diversity/quality scores and the subjective ratings to justify the use of the objective scores in the training/selection process.\", \"Related work: I think it would provide a better narrative if the existing techniques are outlined earlier on in the paper.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1xVTjCqKQ | A Data-Driven and Distributed Approach to Sparse Signal Representation and Recovery | [
"Ali Mousavi",
"Gautam Dasarathy",
"Richard G. Baraniuk"
] | In this paper, we focus on two challenges which offset the promise of sparse signal representation, sensing, and recovery. First, real-world signals can seldom be described as perfectly sparse vectors in a known basis, and traditionally used random measurement schemes are seldom optimal for sensing them. Second, existing signal recovery algorithms are usually not fast enough to make them applicable to real-time problems. In this paper, we address these two challenges by presenting a novel framework based on deep learning. For the first challenge, we cast the problem of finding informative measurements by using a maximum likelihood (ML) formulation and show how we can build a data-driven dimensionality reduction protocol for sensing signals using convolutional architectures. For the second challenge, we discuss and analyze a novel parallelization scheme and show it significantly speeds-up the signal recovery process. We demonstrate the significant improvement our method obtains over competing methods through a series of experiments. | [
"Sparsity",
"Compressive Sensing",
"Convolutional Network"
] | https://openreview.net/pdf?id=B1xVTjCqKQ | https://openreview.net/forum?id=B1xVTjCqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlTjLcglV",
"SygHVHWiyV",
"BkgimTEqRm",
"HklY22V9RQ",
"B1gQwnV9R7",
"Bylk_jNcC7",
"S1lGNoV90m",
"HkgjfHOahX",
"Byl6y31r37",
"SJgsRkfb3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544754853333,
1544389932603,
1543290146843,
1543290033018,
1543289947117,
1543289703334,
1543289641738,
1541403922648,
1540844516822,
1540591571489
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper795/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper795/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper795/Authors"
],
[
"ICLR.cc/2019/Conference/Paper795/Authors"
],
[
"ICLR.cc/2019/Conference/Paper795/Authors"
],
[
"ICLR.cc/2019/Conference/Paper795/Authors"
],
[
"ICLR.cc/2019/Conference/Paper795/Authors"
],
[
"ICLR.cc/2019/Conference/Paper795/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper795/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper795/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper studies deep convolutional architectures to perform compressive sensing of natural images, demonstrating improved empirical performance with an efficient pipeline.\\nReviewers reached a consensus that this is an interesting contribution that advances data-driven methods for compressed sensing, despite some doubts about the experimental setup and the scope of the theoretical insights. We thus recommend acceptance as poster.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Trainable Image Compressed Sensing with solid empirical results\"}",
"{\"title\": \"post rebuttal\", \"comment\": \"I think the authors have addressed all my comments and I recommend acceptance.\"}",
"{\"title\": \"Response to Comments\", \"comment\": \"One of the contributions in this paper is the speed, so the results on the speed should be put in the main paper.\", \"response\": \"We have added a new section in the Appendix to discuss the computational benefits of our approach. As shown in Table 3 of our updated manuscript, our method is significantly faster than both DAMP and LDAMP methods.\"}",
"{\"title\": \"Response to Cons mentioned by the reviewer\", \"comment\": \"I don't understand Fig 3a ...\", \"response\": \"First of all, there is an important difference between NuMax and the other algorithms in Figure 3a. In Algorithm 1 of the NuMax paper (http://home.engineering.iastate.edu/~chinmay/files/papers/numax_tsp.pdf), the parameter \\\\epsilon (which is called \\\\delta in that paper) is an input to the algorithm. Given a value for \\\\epsilon, NuMax determines the appropriate dimension of the embedding (i.e., M). However, for other approaches (random/DeepSSRR/DCN) we do not give an \\\\epsilon to the algorithm. Instead, we pick an embedding size (i.e., M), construct an embedding of that size, and then measure the \\\\epsilon. In other words, for NuMax, \\\\epsilon is the input and M is the output while for other methods, M is the input and \\\\epsilon is the output. In spite of this difference, the visualization in Figure 3a lets us compare different methods and understand which one gives us a better isometry constant.\"}",
"{\"title\": \"Response to Questions\", \"comment\": \"\", \"question\": \"The last part of the last sentence of the 2nd paragraph ...\", \"response\": \"Yes, it means that the DCN has more parameters compared to DeepSSRR. As we have mentioned in the 1st paragraph of Section 3.1, the DCN has 8 convolutional layers, while DeepSSRR has 5 to 7 convolutional layers, depending on the size of embedding.\"}",
"{\"title\": \"Response to Comments 1 and 2\", \"comment\": \"Re your specific comments:\\n\\n1- It is indeed possible to compare learning-based approaches to compressive sensing (like our work in this manuscript) vs. model-based approaches (like AMP, DAMP). We refer the reviewer to Figure 5(c) in our paper and also Figure 3 of the paper \\\"A Learning Approach to Compressed Sensing,\\\" http://cs231n.stanford.edu/reports/2017/pdfs/8.pdf. Figure 5 of our paper compares the performance of our learning-based approach vs. the LASSO L1 solver; Figure 3 of the aforementioned paper compares the performance of other learning-based approaches such as CNN and VAE with AMP. Both figures show that i) when the undersampling ratio (i.e. m/n) is small, learning-based approaches (like our work) can outperform model-based approaches (such as AMP or DAMP); ii) when the undersampling ratio is large enough, model-based approaches start to outperform learning-based approaches.\\n\\nIntuitively, when the undersampling ratio is large enough, model-based approaches can extract sufficient information from measurements to reconstruct signals accurately enough and even better than learning-based approaches.\\nMoreover, model-based algorithms like AMP/DAMP have the knowledge of the measurement matrix and this is another factor helping them to be better than learning-based approaches in high undersampling ratio regime. \\n\\nRegarding different SNRs, we refer the reviewer to Table 3 of [arXiv:1701.03891] which we have also cited in our submission. In that table, the authors compare the robustness of recovery based on CNNs and DAMP. As they have shown, CNNs are more robust to noise. In general, learning-based approaches can utilize data to more effectively suppress measurement noise. \\n\\nFinally, we note that the LDAMP approach we have cited in our paper is very similar to DAMP except that, instead of using a BM3D denoiser, LDAMP uses a CNN denoiser. 
The rest of the architecture is not learned and hence, is similar to AMP/DAMP. Therefore, one can expect that LDAMP's behaviour would be similar to DAMP except for the fact that it has a better denoiser.\\n\\n2- We have added a reference to the SRA paper in our revised paper plus added a short discussion of the differences with our approach. Like our approach, the SRA architecture is also an autoencoder. In SRA, the encoder can be considered to be a fully connected layer while in our work the encoder has a convolutional structure and is basically a circulant matrix. For large problems, learning a fully connected layer (as in the SRA encoder) is significantly more challenging than learning one/several convolutional layers (as in our encoder). In SRA, the decoder is a T-step projected subgradient. In our work, the decoder consists of several convolutional layers plus a rearrangement layer. The optimization in SRA is solely over the measurement matrix and T (which is the number of layers in the decoder) scalar values that could be considered as learning rates at every layer of the decoder. However, in our work, the optimization is over the convolution weights and biases that we have across the different layers of our encoder and decoder. The authors of SRA have shown results mainly on synthetic datasets whereas we have presented results on real images.\"}",
"{\"title\": \"Response to Comments 3 and 4\", \"comment\": \"3- We refer the reviewer to Section 2.2 of our submission (\\\"Applications of Low-Dimensional Embedding\\\"). In this section and in Algorithm 1, we discuss how we can learn near-isometric embeddings using our approach. One of the main applications of near-isometric embeddings is designing compressive sensing (CS) measurement matrices. In CS language, learning a near-isometric embedding is equivalent to learning a measurement matrix that satisfies the so-called restricted isometry property (RIP). RIP is a *sufficient* condition for compressive sensing. This means that the matrices we learn with Algorithm 1 can be used along with L1 minimization for CS.\\n\\nFor a comparison of our approach with previous work, we refer the reviewer to Figure 3(a) in our submission and also Figure 8 in the NuMax paper we cite (available at http://home.engineering.iastate.edu/~chinmay/files/papers/numax_tsp.pdf). Figure 8 of the NuMax paper compares the CS recovery performance of NuMax vs. random Gaussian projections and shows that NuMax outperforms random projections in terms of MSE for different measurement ranges and SNRs. This success is mainly explained by Figure 3 of the NuMax paper, which shows that the matrices built by the NuMax algorithm have a better isometry constant than random matrices. With this in mind, we now refer the reviewer to Figure 3(a) in our manuscript, where we have shown that the isometry constant of our method is even better than NuMax. This means that, if our approach is used with L1 reconstruction, then the result will be better than using either random matrices or NuMax matrices. Therefore, the answer to the reviewer's question is \\\"yes\\\". We can basically use matrices learned with our approach along with L1 reconstruction, and the result will beat both random projections and NuMax embeddings. 
\\n\\n4- We used a right to left ordering in Figure 1, because we wanted to include the vector-matrix multiplications denoted as 'parallel convolutions' in this figure.\"}",
"{\"title\": \"Review: An interesting approach to data-driven compressed sensing\", \"review\": \"This paper proposes a (CNNs) architecture for encoding and decoding images for compressed sensing. \\nIn standard compressed sensing (CS), encoding usually is linear and corresponds to multiplying by a fat matrix that is iid gaussian. The decoding is performed with a recovery algorithm that tries to explain the linear measurements but also promotes sparsity. Standard decoding algorithms include Lasso (i.e. l1 regularization and a MSE constraint) \\nor iterative algorithms that promote sparsity by construction. \\n\\nThis paper instead proposes a joint framework to learn a measurement matrix Phi and a decoder which is another CNN in a data-driven way. The proposed architecture is novel and interesting. \\n\\nI particularly liked the theoretical motivation of the used MSE loss by maximizing mutual information. \\n\\nThe use of parallel convolutions is also neat and can significantly accelerate inference, which can be useful for some applications. \\n\\nThe empirical performance is very good and matches or outperforms previous state of the art reconstruction algorithms D-AMP and Learned D-Amp. \\n\\nOn comparisons with prior/concurrent work: The paper is essentially a CNN autoencoder architecture but specifically designed for compressed sensing problems. \\nThere is vast literature on CNN autoencoders including (Jiang 2017 and Shi 2017) paper cited by the authors. I think it is fine to not compare against those since they divide the images into small blocks and hence have are a fundamentally different approach. This is fine even if block-reconstruction methods outperform this paper, in my opinion: new ideas should be allowed to be published even if they do not beat SOTA, as long as they have clearly novel ideas. It is important however to discuss these differences as the authors have done in page 2.\", \"specific_comments\": \"1. 
It would be interesting to see a comparison to D-Amp and LDAmp for different number of measurements or for different SNRs (i.e. when y = Phi x+ noise ). I suspect each method will be better for a different regime?\\n\\n2. The paper: `The Sparse Recovery Autoencoder' (SRA) by Wu et al. https://arxiv.org/abs/1806.10175\\nis related in that it learns both the sensing matrix and a decoder and is also focused on compressed sensing, but for non-image data. The authors should discuss the differences in architecture and training. \\n\\n3. Building on the SRA paper, it is possible that the learned Phi matrix is used but then reconstruction is done with l1-minimization. How does that perform for the matrices learned with DeepSSRR?\\n\\n4. Why is Figure 1 going from right to left?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting theory, shakey experiments\", \"review\": \"Quality & Clarity:\\nThis is a nice paper with clear explanations and justifications. The experiments seem a little shakey.\\n\\nOriginality & Significance:\\nI'm personally not familiar enough to say the theoretical work is original, but it is presented as so. However it seems significant. The numerical results do not seem extremely significant, but to be fair I'm not familiar with state of the art nearest neighbor results ie Fig 3.\", \"pros\": \"I like that you don't take much for granted. E.g. you justify using convolutional net in 2.1, and answered multiple of my questions before I could type them (e.g. why didn't you include nonlinearities between convolutions, why bother with cascaded convolutions, and what you mean by near-optimal).\", \"cons\": \"The visual comparisons in Figure 4 are difficult to see. DLAMP appears to be over-smoothing but in general it's hard to compare to low-ish resolution noisy-looking textures. I strongly recommend using a test image with a clear texture to illustrate your point (eg the famous natural test image that has on the side a tablecloth with zig-zag lines)\\n\\nThe horizontal error bars are obfuscated by the lines between markers in Fig 3a.\\n\\nI don't understand Fig 3a. You are varying M, which is on the Y-axis, and observing epsilon, on the X-axis?\", \"questions\": \"Can you state what is novel about the discussion in the \\\"Theoretical Insights\\\" subsection of 2.1? I guess this is described in your abstract as \\\"we cast the problem ... by using a maximum likelihood protocol...\\\" but your contribution could be made more explicit. For example \\\"We show that by jointly optimizing phi and lambda (sensing and recovery), we are maximizing the lower bound of mutual information between reconstructions (X) and samples (Y)\\\" (that is my understanding of the section)\\n\\nWhy don't you use the same M for all methods in the Figure 3 experiments? 
ie why did you use a different M for numax/random versus deepSSRR/DCN?\\n\\nWhy do you choose 20-layers for the denoiser? Seems deep...\\n\\nThe last part of the last sentence of the 2nd paragraph of section 3.1 should be a complete sentence \\\"though, with more number of parameters\\\". Does that mean that the DCN has more parameters than the DeepSSRR?\\n\\nI am willing to change score based on the response\\n\\n******************\", \"update_after_author_response\": \"Thanks for the clear response and Figure 3, and nice paper. My score is updated.\", \"ps\": \"I still think that the (tiny) error bars are obfuscated because the line connecting them is the same thickness and color.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A data-driven and distributed approach to sparse signal representation and recovery\", \"review\": \"Authors case the problem of finding informative measurements by using a maximum likelihood formulation and show how a data-driven dimensionality reduction protocol is built for sensing signals using convolutional architectures. A novel parallelization scheme is discussed and analyzed for speeding up the signal recovery process.\\n \\nPrevious works have been proposed to jointly learn the signal sensing and reconstruction algorithm using convolutional networks. Authors do not consider them as the baseline methods due to the fact that the blocky reconstruction approach is unrealistic such as MRI. However, there is no empirical result to support his conclusion. In addition, the comparisons to these methods can further convince the readers about the advantage of the proposed method.\\n \\nIt is not clear where the maximum deviation from isometry in Algorithm 1 is discussed since the MSE is used as a loss function.\\n \\nAuthors provided theoretical insights for the proposed algorithm. It indicates that the lower-bound of the mutual information is maximized and minimizing the mean squared error is a special case, but it is unclear why this can provide theoretical guarantee for the proposed method. More details are good for the connections between the theory and the proposed algorithm.\\n \\nOne of the contributions in this paper is the speed, so the results on the speed should be put in the main paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rylV6i09tX | Interpreting Adversarial Robustness: A View from Decision Surface in Input Space | [
"Fuxun Yu",
"Chenchen Liu",
"Yanzhi Wang",
"Xiang Chen"
] | One popular hypothesis of neural network generalization is that the flat local minima of loss surface in parameter space leads to good generalization. However, we demonstrate that loss surface in parameter space has no obvious relationship with generalization, especially under adversarial settings. Through visualizing decision surfaces in both parameter space and input space, we instead show that the geometry property of decision surface in input space correlates well with the adversarial robustness. We then propose an adversarial robustness indicator, which can evaluate a neural network's intrinsic robustness property without testing its accuracy under adversarial attacks. Guided by it, we further propose our robust training method. Without involving adversarial training, our method could enhance network's intrinsic adversarial robustness against various adversarial attacks. | [
"Adversarial examples",
"Robustness"
] | https://openreview.net/pdf?id=rylV6i09tX | https://openreview.net/forum?id=rylV6i09tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1epoBY4lV",
"H1glVZVlx4",
"S1gwEq9SkE",
"H1gFxIzSJN",
"S1xY1GzSk4",
"SkgNELFfRQ",
"rylpztufCm",
"SkxL3EDvaQ",
"S1licXsy6X",
"ryehFslyam",
"HJgm8ciA3X",
"SyxnXcj03m",
"HJejsv4An7",
"B1ecS56Jh7"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1545012645232,
1544728871753,
1544034862733,
1544001008895,
1543999969510,
1542784555950,
1542781204659,
1542055085853,
1541546898541,
1541503876295,
1541483083395,
1541483044315,
1541453731488,
1540508225745
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper794/Authors"
],
[
"ICLR.cc/2019/Conference/Paper794/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper794/Authors"
],
[
"ICLR.cc/2019/Conference/Paper794/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper794/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper794/Authors"
],
[
"ICLR.cc/2019/Conference/Paper794/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper794/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper794/Authors"
],
[
"ICLR.cc/2019/Conference/Paper794/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper794/Authors"
],
[
"ICLR.cc/2019/Conference/Paper794/Authors"
],
[
"ICLR.cc/2019/Conference/Paper794/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper794/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Reply to reviewer3\", \"comment\": \"Thanks for the reviewer\\u2019s valuable comments! We do believe that there is a lot to be improved in this paper!\\n1)\\tAbout the reviewer\\u2019s first concern, the reason for the \\u201cequated generalization\\u201d is that, when this paper was finished, there was no clear definition of generalization in adversarial settings. Our statement actually says that \\u201cadversarial robust generalization\\u201d does not equal \\u201cstandard generalization\\u201d, as in the NeurIPS\\u201918 tutorial slides, https://media.neurips.cc/Conferences/NIPS2018/Slides/adversarial_ml_slides_parts_1_4.pdf, pages 29-30. We can give the formal definition in a future version; thanks for the advice.\\n2)\\tFor the second concern, by briefly comparing weight/input space visualizations, the main conclusion we want to draw is that past generalization analysis cannot be readily adopted in adversarial settings. We think this makes more sense.\\n3)\\tAbout Sec. 3, we do not agree with the reviewer that the visualization insights are trivial, because the visualization results are one of the key intuitions, namely that \\u201cthe geometry slopes (gradients) matter a lot for the model\\u2019s robustness\\u201d, and this is the motivation throughout the paper for why regularizing the Jacobian can improve robustness. \\n4)\\tAbout the robustness indicator, we have shown a case study in Sec. 4.3, ROBUSTNESS INDICATOR EVALUATION. We can compare two models\\u2019 Jacobians & Hessians to distinguish the two models\\u2019 robustness easily, as shown in Fig. 8. This is not done by grid search but by backpropagation to compute the Jacobian and Hessian, and is therefore not computationally expensive. \\n5)\\tAbout the Jacobian regularization novelty, we do agree that the robust training part is not a significant contribution, as also mentioned by reviewer-2.
Our reply about the difference is here: https://openreview.net/forum?id=rylV6i09tX&noteId=HJgm8ciA3X under reviewer-2\\u2019s comments.\\n6)\\tAbout the related work and other minor comments, thanks again for the valuable comments!\"}",
"{\"metareview\": \"This paper studies the relationship between flatness in parameter space and generalization. They show through visualization experiments on MNIST and CIFAR-10 that there is no obvious relationship between the two. However, the reviewers found the motivation for the visualization approach unconvincing and further found significant overlap between the proposed method and that of Ross & Doshi. Thus the paper should improve its framing, experimental insights and relation to prior work before being ready for publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thanks a lot for your review!\\n\\n1. About the first concern, we understand your point that the cross-entropy loss surface hole might be caused by drawing the contours at too high a level to see the gradients' direction. We will try to lower the starting contour level and visualize it again.\\nStill, our point that the decision loss is more informative here concerns not only the gradient's direction but also how quickly the loss changes. The example we give in the appendix shows that the cross-entropy loss hardly changes in high-confidence areas, which makes it unsuitable for visualization in 2D and 3D cases, e.g. in Fig. 12(a). \\n\\n2. In the original paper, our CIFAR10 results are obtained on a ConvNet (given in the first version in Appendix Sec. 8.3). The additional results provided in the rebuttal (Table 4 in the Appendix) are the test results of the original MinMax ResNet model, as requested by the reviewer. That's why they are different.\\n\\nHope this clarifies your concerns. Thanks a lot for the reviews!\"}",
"{\"title\": \"Points taken\", \"comment\": \"I agree that adversarial training is expensive and the proposed method offers a cheaper alternative. However I still think the related work needs to be discussed and compared to, given the strong similarities to the other Jacobian regularizers in the literature.\\n\\nI acknowledge the paper's various contributions. To be clear, I think this paper hits on a number of interesting issues. However I think some of the experiments feel incomplete or hard to interpret. I also think the evaluation of the proposed method is incomplete. I'd like to know how it compared to other Jacobian regularizers. Does regularizing the decision loss Jacobian really beat the cross entropy loss? I know there's a conceptual argument to be made in support of the decision loss, but I just don't feel like the experiments currently back that up. For this reason the paper feels a bit incomplete to me.\"}",
"{\"title\": \"I'm unclear about the response to comment 1 & 4\", \"comment\": \"1) I see your point about the decision surface. You make an interesting argument that changes in the loss can be quite small when confidence increases. However, it could be that small changes in the confidence are meaningful even though they're small. I *still* want to see what is in the blank holes in Figure 4. I understand the loss is small in this region, however the only reason there's a blank region is that you (arbitrarily) chose to draw the contours at too high a level to see what's going on.\\n You make an argument here that the decision surface is more informative because the gradient direction contains more information than the loss gradient. However, I can't see the loss gradient in your figure because the contours start at too high a value, so I don't see how this figure backs up this hypothesis.\\n\\n4) My concern was that the original MinMax model is optimized for epsilon=8. The method \\\"ours+advtrain\\\" in Table 2 outperforms MinMax for small epsilon. However the table might turn if you compared against a MinMax optimized for smaller epsilon (in this case MinMax might beat the proposed method for epsilon=3). Could you please clarify how the results you show in your rebuttal differ from the paper? You seem to claim this model is *also* trained with epsilon=8, so what did you change?\"}",
"{\"title\": \"About 'Normalization and Blank space'\", \"comment\": \"Thanks for the reviewer's comments! Here are our clarifications:\\n\\n1. The normalization in weight space is also done by the sign(*) operation, but the step size here is different (previously, in input image space, step_size=1 pixel; here step_size is set to 0.01). The normalization follows the formula \\\\alpha = 0.01 * sign(\\\\alpha) (same for \\\\beta).\\n\\n2. About the blank area question, we think there might be some misunderstanding here. Fig. 11 shows loss surfaces and decision surfaces both in input space instead of parameter space. Appendix-1 tries to explain why the blank area of the loss surface in input space appears, and thus why the decision surface is better than the loss surface. \\n\\n3. About the different normalization methods, [1] showed that different normalizations can influence the width of the loss surfaces in weight space. But using the normalization method in 1. with a proper step size, the loss surface is already capable of showing very different features in Fig. 16(a)-(d). Therefore, we did not use [1]'s proposed method. \\n\\nThank you very much for your comments. We will make Fig. 11 clearer by adding \\\"loss surfaces and decision surfaces (both in input space)\\\".\"}",
"{\"title\": \"Figure 16 shows that the loss surface in weight space still works for adversarial examples in certain cases.\", \"comment\": \"Thanks for the clarification. Figure 16 is a good example showing that the loss surface in weight space still works for some adversarial attacks. Actually, for the strong attacks shown in Figure 16(d), the loss surface of the robust model still seems better than that of the natural model, as the red contour area is much smaller.\\n\\nFigure 16 is actually more informative than Figure 2. I would suggest that the authors make the results provided by Figure 16 clear in the main text, i.e., the loss surface in weight space does not always fail to show a model's robustness to adversarial attacks. A more in-depth analysis of the reason for this result would be even better.\\n\\nI was asking how the x-axes (\\\\alpha) and y-axes (\\\\beta) are normalized for the loss surface visualization in weight space in Fig. 2, Fig. 11 and Fig. 16. As indicated in [1], different normalization methods can show different widths of loss contours. I was wondering whether the blank space is caused by a different normalization method.\\n\\n[1] Visualizing the loss landscape of neural nets, Li et al, NIPS, 2018\"}",
"{\"title\": \"Review of \\\"Interpreting Adversarial Robustness: A View from Decision Surface in Input Space\\\"\", \"review\": \"This paper argues that analyzing loss surfaces in parameter space for the purposes of evaluating adversarial robustness and generalization is ineffective, while measuring input loss surfaces is more accurate. By converting loss surfaces to decision surfaces (which denote the difference between the max and second highest logit), the authors show that all adversarial attack methods appear similar wrt the decision surface. This result is then related to the statistics of the input Jacobian and Hessian, which are shown to differ across adversarially sensitive and robust models. Finally, a regularization method based on regularizing the input Jacobian is proposed and evaluated. All of these results are shown through experiments on MNIST and CIFAR-10.\\n\\nIn general, the paper is clear, though there are a number of typos. With respect to novelty, some of the experiments are novel, but others, including the improved training method, have been explored before (see specific comments for references). Finally, regarding significance, many of the insights provided in this paper are true by definition, and are therefore unlikely to have a significant impact. \\n\\nWhile I strongly believe that rigorous empirical studies of neural networks are essential, this paper is lacking in several key areas, including framing, experimental insights, and relation to prior work, and is therefore difficult to recommend. Please see the comments below for more detail.\", \"major_comments\": \"1) At the beginning of the paper, adversarial robustness and generalization are equated.
However, adversarial robustness and generalization are not necessarily equivalent, and in fact, several papers have provided evidence against this notion, showing that adversarial inputs are likely to be present even for very good models [5, 6] and that adversarially sensitive models can often generalize quite well [2]. Moreover, all the experiments within the paper only address adversarial robustness rather than generalization to unperturbed samples.\\n\\n2) One of the main results of this paper is that the loss surface wrt input space is more sensitive to adversarial perturbations than the loss surface wrt parameter space. Because adversarial inputs are defined in input space, by definition, the loss surface wrt to the input must be sensitive to adversarial examples. This result therefore appears true by definition. Moreover, [3] related the input Jacobian to generalization, finding a similar result, but is not discussed or cited.\\n\\n3) The main result of Section 3 is that all adversarial attacks \\u201cutilize the decision surface geometry properties to cross the decision boundary within least distance.\\u201d While to my knowledge the decision surface visualization is novel and might have important uses, this statement is again true by definition, given that adversarial attack methods try to find the smallest perturbation which changes the network decision. As a result, all methods must find directions which are short paths in the decision surface. It is therefore unclear what additional insight this analysis presents. \\n\\n4) How does measuring the loss landscape as an indicator for adversarial robustness differ from simply trying to find adversarial examples as is common? If anything, it seems it should be more computationally expensive as points are sampled in a grid search vs optimized for. 
\\n\\n5) The proposed regularizer for adversarial robustness, based on regularizing the input Jacobian, is very similar to what was proposed in [1], yet [1] is not discussed or cited.\", \"minor_comments\": \"1) The paper\\u2019s first sentence states that \\u201cIt is commonly believed that a neural network\\u2019s generalization is correlated to ...the flatness of the local minima in parameter space.\\u201d However, [4] showed several years ago that the local minima flatness can be arbitrarily rescaled and has been fairly influential. While [4] is cited in the paper, it is only cited in the related work section as support for the statement that local minima flatness is related to generalization when this is precisely opposite the point this paper makes. [4] should be discussed in more detail, both in the introduction and the related work section.\\n\\n2) The paper is quite lengthy, going right up against the hard 10 page limit. While this may be acceptable for papers with large figures or which require the extra space, this paper does not currently meet that threshold. \\n\\n3) Throughout the figures, axes should be labeled. \\n\\n4) In section 2.2, it is stated that both networks achieve optimal accuracy of ~90% on CIFAR-10. This is not optimal accuracy and hasn\\u2019t been for several years [7].\\n\\n5) Why is equation 2 calculated with respect to the logit layer vs the normalized softmax layer? Using the unnormalized logits may introduce noise due to scaling.\\n\\n6) In Figure 8, the scales of the Hessian are extremely different. 
Does this impact the measurement of sparseness?\", \"typos\": \"1) Introduction, second paragraph: \\u201cFor example, ResNet model usually converges to\\u2026\\u201d should be \\u201cFor example, ResNet models usually converge to\\u2026\\u201d\\n\\n2) Introduction, second paragraph: \\u201c...defected by the adversarial noises...\\u201d should be \\u201c...defected by adversarial noise\\u2026\\u201d\\n\\n3) Introduction, third paragraph: \\u201c...introduced by adversarial noises...\\u201d should be \\u201c...introduced by adversarial noise\\u2026\\u201d\\n\\n4) Section 3.1, first paragraph: \\u201ccross entropy based loss surface is\\u2026\\u201d should be \\u201ccross entropy based loss surfaces is\\u2026\\u201d\\n\\n[1] Jakubovitz, Daniel, and Raja Giryes. \\\"Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization.\\\" arXiv preprint arXiv:1803.08680 (2018). ECCV 2018.\\n[2] Zahavy, Tom, et al. \\\"Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms.\\\" arXiv preprint arXiv:1602.02389 (2016). ICLR Workshop 2018\\n[3] Novak, Roman, et al. \\\"Sensitivity and generalization in neural networks: an empirical study.\\\" arXiv preprint arXiv:1802.08760 (2018). ICLR 2018.\\n[4] Dinh, Laurent, et al. \\\"Sharp Minima Can Generalize For Deep Nets.\\\" International Conference on Machine Learning. 2017.\\n[5] Fawzi, Alhussein, Hamza Fawzi, and Omar Fawzi. \\\"Adversarial vulnerability for any classifier.\\\" arXiv preprint arXiv:1802.08686 (2018). NIPS 2018.\\n[6] Gilmer, Justin, et al. \\\"Adversarial spheres.\\\" arXiv preprint arXiv:1801.02774 (2018). ICLR Workshop 2018.\\n[7] http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Reply to \\\"good visualization ...\\\"\", \"comment\": \"We have updated our submitted paper to address your concerns in Sec. 2.2 and in Appendix 8.5.\\nThanks a lot for the reviewer's suggestions!\\n\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nWe thank the reviewer for liking the visualization idea!\\n\\n1.\\tAbout your first concern that \\u201cThe claim of \\u2018these significant distinctions indicate the ineffectiveness of generalization estimation from the loss surfaces in parameter space\\u2019 is not well supported\\u201d, please let us provide some clarification. We have provided the natural model's and the robust MinMax model\\u2019s contour maps on natural and adversarial inputs, and their visualizations, in Appendix 8.5 (the updated version), Fig. 16.\\n\\nAs expected, in parameter space, the natural model's loss surface on adversarial inputs has a larger base height than the robust model's, i.e. the average loss values are higher than the robust model's. But such a gap is only obvious for weak attacks, like FGSM. When we use stronger attacks like the C\\\\&W attack, the loss surfaces of the natural model and the robust model become similar again: both models' surfaces demonstrate high cross-entropy loss with no obvious distinction.\\n\\nTherefore, as mentioned in the review, we can indeed use the loss surface in weight space to show their robustness difference if both are plotted with weak adversarial inputs. But when facing stronger iterative attacks, the loss surface in weight space can no longer show any difference, and thus cannot indicate model robustness. \\n\\nBy contrast, our input-space loss surfaces can explicitly show the model robustness difference with no such restrictions, and the robustness difference is also demonstrated more clearly, as shown in Fig. 3 of the main paper.
Therefore, we think this is the advantage of using the input-space loss surface to indicate model robustness. We will absolutely update the statement in the main paper. \\n\\n2.\\tIn Fig. 3, both x-axes (alpha) in (a) and (b) are chosen as random directions with normalization, and both y-axes (beta) are chosen as the FGSM attack direction [1]. \\nIn Fig. 4, all x-axes (alpha) in (a)-(d) are chosen as random directions with normalization, and the y-axes are as follows (formulas in Eq. 3): (a) random direction, (b) FGSM attack direction [1], (c) least-likely-class attack direction [1], (d) C&W attack direction [2], all with normalization.\\n\\n3.\\tThe gradients and random noises are normalized as in the fast gradient sign method [1], i.e. the pixel-wise sign: beta = sign(beta).\\n\\nWe thank the reviewer for the constructive comments on the first point, and we will update the statement to be more accurate. \\n\\n[1] Adversarial examples in the physical world, Alexey Kurakin et al, 2016.\\n[2] Towards evaluating the robustness of neural networks. Carlini, Nicholas, and David Wagner. IEEE (SP), 2017.\"}",
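The direction setup described in this reply (a random direction and an attack direction, both pixel-wise sign-normalized, spanning a 2-D plane through the input) can be sketched as follows. This is a minimal illustration, not code from the paper; the image shape and the use of a random stand-in for the attack gradient are assumptions:

```python
import numpy as np

def sign_normalize(v):
    # pixel-wise sign normalization, as in FGSM: keep only the sign of each entry
    return np.sign(v)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))                       # stand-in for an input image
alpha = sign_normalize(rng.normal(size=x.shape))  # random direction, normalized
# in the thread, beta is the sign of an attack gradient; a random stand-in here
beta = sign_normalize(rng.normal(size=x.shape))

# points of the 2-D visualization plane: x + a*alpha + b*beta
coords = np.linspace(-1.0, 1.0, 5)
plane = [x + a * alpha + b * beta for a in coords for b in coords]
```

In the actual visualization, beta would be the sign of the loss (or decision) gradient at x, and the loss is evaluated at every point of `plane` to draw the surface.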
"{\"title\": \"good visualization for adversarial robustness analysis, unclear loss surface in weight space with adversarial data\", \"review\": \"The authors demonstrated that the loss surface visualized in parameter space does not reflect a network's robustness to adversarial examples. By analyzing the geometric properties of the loss surface in both parameter space and input space, they find input space is more appropriate for evaluating the generalization and adversarial robustness of a neural network. Therefore, they extend the loss surface to a decision surface. They further visualized the adversarial attack trajectory on decision surfaces in input space, and formalized an adversarial robustness indicator. Finally, a robust training method guided by the indicator is proposed to smooth the decision surface.\\n\\nThis paper is interesting and well organized. The idea of plotting the loss surface in input space seems to be a natural extension of the loss surface w.r.t. weight change. The loss surface in input space measures the network\\u2019s robustness to perturbations of the inputs, which naturally shows the influence of adversarial examples and is suitable for studying robustness to adversarial examples. \\n\\nNote that the loss surface in parameter space measures the network\\u2019s robustness to perturbations of the weights with given inputs, which implicitly assumes the data distribution is not significantly changed, so that the loss surface may have similar geometry on unseen data. \\n\\nThe claim that \\u201cthese significant distinctions indicate the ineffectiveness of generalization estimation from the loss surfaces in parameter space\\u201d is not well supported, as the comparison between Figure 2(a) and Figure 3 seems to be unfair and misleading. Fig 2 is plotted based on input data without any adversarial examples. So it is expected that Fig 2(a) and Fig 2(b) have similar contours.
However, the loss surface in weight space may still be able to show their difference if they are both plotted with adversarial inputs. I believe that models trained by Min-Max robust training will be more stable in comparison with normally trained models. It would be great if the authors provided such plots. I would expect the normal model to have a high and flat surface while the robust model shows reasonable loss with small changes in weight space.\\n\\nHow are \\\\alpha and \\\\beta chosen for the loss surface in input space for Fig 3 and Fig 4 (first row)?\\n\\nHow are \\\\alpha and \\\\beta normalized for the loss surface visualization in weight space as in Eq 1?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Answers to reviewer's concerns\", \"comment\": \"We have updated our submitted paper and added the MNIST and CIFAR experimental results in Appendix 8.6.\\nThanks a lot for the reviewer's suggestions for our experiments!\\n\\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nWe thank the reviewer for the interest in our visualization results!\\n\\n1)\\tAbout your first concern on our statement that the decision loss is more informative, there are two reasons: \\na)\\tAbout the big hole, we give a simple example and two illustration figures in Appendix 8.1. The \\u201cless informative\\u201d blank region is caused by the non-linear operations (soft-max and entropy). Specifically, when a neural network has high confidence in the correct logits, the cross-entropy can hardly describe the confidence information: \\nFor example, consider a ten-class model with nine different logit output vectors [1, 1, 1, \\u2026, 1, 1], [2, 1, 1, \\u2026, 1, 1], \\u2026, [9, 1, 1, \\u2026, 1, 1];\\nThe corresponding cross-entropy loss is [2.30, 1.46, 0.79, 0.37, 0.15, 0.05, 0.02, 0.01, 0.01];\\nThe corresponding confidence defined in Eq. 2 is [0, 1, 2, 3, 4, 5, 6, 7, 8];\\nAs the logit confidence increases further (model confidence can easily exceed 20 in common NN models), the cross-entropy loss hardly changes anymore, but the confidence changes steadily. That\\u2019s why the blank region appears in the cross-entropy loss surface but not in the decision surface. Therefore, we state that the decision confidence surface can provide more geometric information in the \\u201cblank region\\u201d of the cross-entropy loss surface. \\nb)\\tSecond, the decision boundary loss surface contains the explicit decision boundary (contour line L=0), across which the model\\u2019s prediction result will be flipped.
By contrast, the cross-entropy loss surface has no such explicit decision boundary. This is a very important property, since it enables us to visualize and evaluate the attack strength needed to conduct a successful adversarial attack against the model.\\nTherefore, with these two points, we claim that the decision surface is more informative than the cross-entropy loss surface.\\n\\n2)\\tAbout how our work differs from Ross & Doshi [1], the shared part is that we use the same Loss_ce + Loss_grad idea but a different regularizer design in Loss_grad. We choose to penalize the decision boundary loss\\u2019s gradients (Eq. 2) while Ross & Doshi use the common cross-entropy loss\\u2019s gradients. The benefit of doing so is related to the problem mentioned in part 1(a) above. Because the cross-entropy loss involves the highly non-linear soft-max and entropy operations, unlike the decision loss, the change of the cross-entropy loss is negligible in high-confidence cases (as mentioned before), while the decision loss has no such drawback. The non-linear operations also cause the gradient of the cross-entropy loss to be relatively small, which constrains the gradient penalty effect. As the comparison experiments in Tables 1 and 2 show, our decision loss gradient regularizer outperforms cross-entropy loss regularization by a large margin on both MNIST and CIFAR10. \\n\\n3)\\tAbout the 3rd and 4th concerns, the reason we omit the MinMax model on MNIST is purely a space consideration. Here we provide the missing experimental results for Ours+Adv Training and MinMax.
We will update our paper with these MNIST experimental results.\\n\\nAttack | Natural ----FGSM---- ----BIM---- ----C&W----\\nEpsilon\\t| 0 |0.1|0.2|0.3| 0.1|0.2|0.3| 0.1|0.2|0.3\\nOurs+Adv| 95.9 | 87.6|72.2|44.1| 89.2|67.2|28.4| 89.6|73.2|39.5\\nMinMax\\t| 98.4 |97.3|96.3|95.2| 97.2|94.3|92.8| 97.6|96.4|94.5\\n\\nThe MinMax model is clearly the most robust model, which is why we chose it to analyze the decision surface geometry. In Fig. 6, the (eps < 0.3) region of the MinMax model\\u2019s decision surface is nearly flat with no downhill under both random noise and adversarial attacks, which makes it nearly immune to adversarial attacks. \\n\\n4)\\tAbout the final concern, whether the original released MinMax model would be dominant against small-perturbation attacks on CIFAR10, below we show the original released model\\u2019s (trained with eps=8) performance under the attacks:\\n\\nAttack\\tNatural\\t ----FGSM---- ----BIM-----\\t ---C&W---\\nEpsilon\\t0\\t | 3| 6|8|9|\\t |3|6|8|9|\\t |3|6|8|9|\\nAcc\\t 87.3\\t |75.3|63.2|56.1|53.4| |74.2|59.3|48.7|46.2| |74.2|59.2|49.8|46.1|\\n\\nThe released original model did not show dominant accuracy over our method on small perturbations on CIFAR10. The robustness gap is within 10%, excluding the baseline accuracy difference (about 4% compared to Ours+AdvTrain).\"}",
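The saturation argument in the reply above can be checked numerically. Below is a minimal sketch (not from the thread) that recomputes the cross-entropy and an Eq. 2-style confidence, taken here as the true logit minus the largest other logit, for the logit vectors [k, 1, ..., 1]:

```python
import math

def cross_entropy(logits, true_idx=0):
    # stable softmax followed by negative log-likelihood of the true class
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[true_idx] / sum(exps))

def decision_confidence(logits, true_idx=0):
    # margin-style confidence: true logit minus the largest other logit
    others = logits[:true_idx] + logits[true_idx + 1:]
    return logits[true_idx] - max(others)

for k in range(1, 10):
    logits = [float(k)] + [1.0] * 9   # ten-class model, [k, 1, 1, ..., 1]
    print(k, round(cross_entropy(logits), 2), decision_confidence(logits))
```

The cross-entropy values closely reproduce the sequence [2.30, 1.46, 0.79, 0.37, ...] quoted in the reply (up to rounding), while the confidence climbs linearly from 0 to 8.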
"{\"title\": \"Answers to reviewer's concerns\", \"comment\": \"5)\\tWe still want to make two points about why our method is meaningful here:\\na)\\tThe training cost of MinMax makes it hard to scale. It is commonly known that MinMax cannot generalize to large-scale datasets [2], e.g. ImageNet, since every training step in MinMax needs to generate PGD adversarial examples through 10-30 backpropagations. This makes training large-scale MinMax robust models impractical. Our method scales better than MinMax: the time consumption of double backpropagation per training step is about 2.1 times that of normal training, which is thus 5-15 times less than MinMax. \\nb)\\tMeanwhile, on CIFAR10, the gap between MinMax and our method is not that large. In particular, the robustness gap under eps=3 attacks (FGSM, BIM, C&W) is negligible, as shown in Table 2. As for the robustness degradation under larger-step attacks, our analysis is stated in Sec. 5.2: the Taylor approximation performs well in a small neighborhood but has limitations against larger-step attacks, a limitation of our method which we also discussed in the paper. \\n\\n6)\\tLastly, as our paper is named \\\"Interpreting Adversarial Robustness: A View from Decision Surface in Input Space\\\", we sincerely hope that the reviewer will also take our paper\\u2019s other contributions into consideration: revealing that adversarial examples and robustness actually come down to the NN\\u2019s neighborhood underfitting issue, showing the shared mechanism of various adversarial attacks through decision loss surface visualization and interpretation, proving the relationship between loss geometry and adversarial robustness via the Jacobian\\u2019s and Hessian\\u2019s geometric properties, etc. We believe our paper is a thorough analysis and interpretation work in current research on interpreting adversarial attacks and robustness.
\\n\\nAgain, we thank the reviewer for the detailed reviews!\\n\\n[1] Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In AAAI, 2018.\\n[2] Kannan, Harini, Alexey Kurakin, and Ian Goodfellow. \\\"Adversarial Logit Pairing.\\\" arXiv preprint arXiv:1803.06373 (2018).\"}",
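The double-backpropagation cost discussed in the reply above comes from differentiating through an input-Jacobian penalty. A minimal sketch of such a regularized objective for a linear model, where the Jacobian of the logits w.r.t. the input is just the weight matrix, is below; the toy setup and names are assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian_regularized_loss(W, x, y, lam=0.1):
    # cross-entropy plus lam * ||J||_F^2; for a linear model with
    # logits = W @ x, the input Jacobian J is simply W, so the penalty
    # reduces to the squared Frobenius norm of the weights
    p = softmax(W @ x)
    ce = -np.log(p[y])
    return ce + lam * np.sum(W ** 2)
```

For a deep network, J must itself be obtained by backpropagation and the penalty differentiated again (double backpropagation), which the thread reports costs about 2.1x a normal training step, versus 10-30 extra backward passes per step for PGD-based MinMax.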
"{\"title\": \"An interesting visualization paper, but not always so convincing\", \"review\": \"This paper uses visualization methods to study how adversarial training methods impact the decision surface of neural networks. The authors also propose a gradient-based regularizer to improve robustness during training.\", \"some_things_i_liked_about_this_paper\": \"The authors are the first to visualize the \\\"decision boundary loss\\\". I also find this to be a better and more thorough study of loss functions than I have seen in other papers. The quality of the visualizations is notably higher than I've seen elsewhere on this subject.\", \"i_have_a_few_criticisms_of_this_paper_that_i_list_below\": \"1) I'm not convinced that the decision surface is more informative than the loss surface. There is indeed a big \\\"hole\\\" in the middle of the plots in Figure 4, but it seems like that is only because the first contour is drawn at too high a level to see what is going on below. More contours are needed to see what is going on in that central region. \\n2) The proposed regularizer is very similar to the method of Ross & Doshi. It would be good if this similarity were addressed more directly in the paper. It feels like it's been brushed under the rug.\\n3) In the MNIST results in Table 1: These results are much less extensive than the results for CIFAR. It would especially be nice to see the MinMax results since those are commonly considered to be the state of the art. The fact that they are omitted makes it feel like something is being hidden from the reader.\\n4) The results of the proposed regularization method aren't super strong. For CIFAR, the proposed method combined with adversarial training beats MinMax only for small perturbations of size 3, and does worse for larger perturbations. The original MinMax model is optimized for a perturbation of size 8.
I wonder if a MinMax result with smaller epsilon would be dominant in the regime of small perturbations.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Brief summary of our paper.\", \"comment\": \"Thanks for your interests! Our paper mainly has following contributions about adversarial examples and adversarial robustness (In case that you are not familiar with adversarial examples, please refer to [1-3] for some preliminary knowledge.):\\n\\n1. By visualizing different adversarial attacks\\u2019 trajectories in Fig.4, We show the nature of adversarial examples, which are NOT really \\u201cgenerated\\u201d by adversarial attacks but are \\u201cnaturally existed\\u201d examples which are wrongly classified by the neural network. Therefore, we summarize the adversarial examples phenomenon as the neural network\\u2019s \\u201cneighborhood underfitting\\u201d issues, instead of an \\u201cattack\\u201d problem. Meanwhile, different adversarial attacks share the same mechanism which is to find the shortest paths to cross the decision boundaries and enter the wrong classification region of neural networks as shown in Fig.4. \\n\\n2. With regard to adversarial robustness, our work shows in adversarial settings, the neural network\\u2019s parameter space geometry has no close relationship to its robustness, as shown in Fig.1, 2, 3. By contrast, the neural network\\u2019s input space geometry can clearly indicate a model\\u2019s robustness: wide and flat plateau with gentle slopes usually indicates better model robustness, as shown in Fig.6, 7.\\n\\n3. We also mathematically prove 2\\u2019s conclusion by second-order Taylor Approximation and Jacobian & Hessian\\u2019s differential geometry properties, that is, the lower magnitude of Jacobian and Hessian Eigenvalues of the network (w.r.t the input) indicates smoother input space geometry, as well as better robustness. \\n\\n4. Based on 3\\u2019s conclusion, we therefore propose a robust training by regulating the L2 norm of Jacobian (since smaller Jacobian leads to better robustness). 
The experiments compare the performance of our robust training with other state-of-the-art robustness enhancement methods.\\n\\nThanks again for your interest in our paper. If you have further questions, please don\\u2019t hesitate to comment. \\n\\n\\n\\n[1] Szegedy, Christian, et al. \\\"Intriguing properties of neural networks.\\\" arXiv preprint arXiv:1312.6199 (2013).\\n[2] Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. \\\"Adversarial examples in the physical world.\\\" arXiv preprint arXiv:1607.02533 (2016).\\n[3] Carlini, Nicholas, and David Wagner. \\\"Towards evaluating the robustness of neural networks.\\\" 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.\"}"
]
} |
|
HJlNpoA5YQ | The Laplacian in RL: Learning Representations with Efficient Approximations | [
"Yifan Wu",
"George Tucker",
"Ofir Nachum"
] | The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices. Second, these methods lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent. | [
"Laplacian",
"reinforcement learning",
"representation"
] | https://openreview.net/pdf?id=HJlNpoA5YQ | https://openreview.net/forum?id=HJlNpoA5YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkeiPjWllE",
"SJe3zTf6p7",
"BJlAT9zapm",
"HkxzFKz6pX",
"ryl6Crxo3m",
"Hylrlueq3Q",
"SJxZV3b82m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544719202525,
1542429972328,
1542429382409,
1542429049951,
1541240277507,
1541175276992,
1540918313181
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper793/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper793/Authors"
],
[
"ICLR.cc/2019/Conference/Paper793/Authors"
],
[
"ICLR.cc/2019/Conference/Paper793/Authors"
],
[
"ICLR.cc/2019/Conference/Paper793/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper793/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper793/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper provides a novel and non-trivial method for approximating the eigenvectors of the Laplacian, in large or continuous state environments. Eigenvectors of the Laplacian have been used for proto-value functions and eigenoptions, but it has remained an open problem to extend their use to the non-tabular case. This paper makes an important advance towards this goal, and will be of interest to many that would like to learn state representations based on the geometric information given by the Laplacian.\\n\\nThe paper could be made stronger by including a short discussion on why the limitations of this approach. Its an important new direction, but there must still be open questions (e.g., issues with the approach used to approximate the orthogonality constraint). It will be beneficial to readers to understand these issues.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Well-written paper and a useful extension to approximating the eigenvectors of the Laplacian\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the careful reading of the paper. We are glad the reviewer found the contribution of the paper insightful and original. Responses to the reviewer\\u2019s questions are below:\\n\\n\\u201cit would be good if the authors could comment on the choice of d. This is in fact a model selection problem. According to which criterion is this selected?\\u201d\\n\\n-- Our choice of d(=20) in reward shaping experiments is arbitrary and we didn\\u2019t tune it. In practice, if the downstream task is known, d can be regarded as a hyperparameter and selected according to the performance. If the downstreaming task is not available, one can visualize the distances between representations like in Figure 4 (with randomly sampled goal states) and select d when the visualized distance is meaningful; or in other cases treat it as an additional hyperparameter to search over.\\n\\n\\n\\u201cthe authors define D(u,v) in eq (4). Why this choice? Is there some intuition or interpretation possible related to this expression?\\u201d\\n\\n-- The underlying motivation is in order to make the graph drawing objective practical to optimize (via sampling) while reflecting the affinity between states. Optimizing the graph drawing objective requires sampling from D(u,v)rho(u)rho(v) so D(u,v)rho(u)rho(v) should be a joint measure over u, v. The Laplacian is defined for undirected graphs so D(u,v) also needs to be symmetric. These are the intuitions behind the conditions for D in Section 2.2. In RL, a natural choice for representing the affinity between two states is to use the transition probabilities P(u|v) (which is also convenient for sampling). However, naively setting D := P is premature, as P in general does not satisfy the conditions necessary for D. 
To this end, we first \\u201csymmetrize\\u201d P to achieve the setting of D as in Eq 4 by averaging the transitions u->v and v->u. This procedure is analogous to \\u201csymmetrized Laplacians\\u201d (see Boley, et al., \\u201cCommute times for a directed graph using an asymmetric Laplacian\\u201d). We then divide it by rho to make D(u,v)rho(u)rho(v) a joint measure over pairs of states so that the graph drawing objective can be written in terms of an expectation (as in (5)) and sample-based optimization is possible. \\n\\n\\n\\u201cin (6) beta is called a Lagrange multiplier. Given that a soft constraint (not a hard constraint) is added for the orthonormality constraint it is not a Lagrange multiplier.\\u201d\\n\\n-- We have updated the paper to replace this terminology with the more appropriate \\u201cKKT multiplier\\u201d.\\n\\n\\n\\u201cHow sensitive are the results with respect to the choice of beta in (6) (or epsilon in the eq above)? The orthonormality constraint will only be approximately satisfied. Isn't this a problem?\\u201d\\n\\n-- The results are not very sensitive to the choice of beta. We have plots of the approximation quality for different values of beta in Appendix D-1, Figure 7, with discussion.\\n-- Approximately satisfying the orthonormality constraint is not a problem in RL applications, at least in the reward shaping setting which we experiment with. In reward shaping the important thing is that the distance in the latent space reflects the affinity between states properly, and the orthonormality constraint plays a role more like encouraging the diversity of the representations (preventing them from collapsing to a single point). We think the same argument applies to most other applications of learned representations to RL, so only satisfying the constraint approximately should not be a problem in the RL context. 
\\n\\n\\n\\u201cWouldn't it be better in this case to rely on an optimization algorithm on Grassmann and Stiefel manifolds?\\u201d\\n\\n-- In the RL setting, one requires an optimization algorithm which is amenable to stochastic mini-batching. We are not aware of an optimization algorithm based on Grassmann and Stiefel manifolds which is applicable in such settings, but would be interested if the reviewer has a specific algorithm in mind. While our paper proposes one technique for enforcing orthonormality, there are likely to be other applicable algorithms to achieve the same aims, and we would be happy to include references to them as alternative methods.\\n\\n\\u201cOther scalable methods related to kernel spectral clustering (related to subsets/subgraphs and making out-of-sample extensions) were proposed in the literature\\u201d\\n\\n-- We updated our paper to cite these two papers in the related work section.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the valuable feedback. We are glad the reviewer found the paper interesting and easy to follow. Responses to the reviewer\\u2019s remaining concerns are addressed below. With these, we hope the reviewer will find the paper more appropriate for publication and, if so, will raise their score accordingly. We are also always happy to discuss further if the reviewer has additional concerns.\\n\\n\\u201clearning such a representation using a random policy might not be ideal because the random policy can not explore the whole state space efficiently\\u201d\\n\\n-- We agree that this can be a concern. However, a random policy can be sufficient for exploration when the initial state is uniformly sampled from the whole state space (as we did in our experiments). As you suggest, a random policy is not sufficient for exploration when the initial state is not sampled from the whole state space but only sampled within a region that is far from the goal. In this case, exploring the whole state space itself is a hard problem which we are not trying to solve here. In this paper, we aim at demonstrating the usefulness of learned representations in \\u201creward shaping\\u201d with well controlled experiments in RL settings, so we attempted to exclude other factors such as exploration. \\n-- With that being said, we have results showing that our representation learning method works beyond random-walk policies: In appendix D-2 we have experiments (Figure-8) showing that the learned representation with online policies provides a similar advantage in reward shaping as with random-walk policies. Here, the online policy and the representation are learned concurrently starting from scratch and on the same online data. It is thus significant that we retain the same advantages in speed of training. 
\\n\\n\\n\\u201cI am concerned about its sample efficiency and comparing experiments\\u201d\\n\\n-- Even when the pretraining samples are included, our method is much more sample efficient than the baselines. The representation learning phase with a random walk policy is not expensive. For the MuJoCo experiments in Figure 5, we pretrain the representation with 50,000 samples. Then, we train the policy with 250,000 (for pointmass) / 450,000 (for ant) samples. After shifting the mix/fullmix learning curves to the right by 50,000 steps to include the pretraining samples, their learning curves are still clearly above the baseline learning curves.\\n\\n\\n\\u201cthe learnt representation for reward-shaping is fixed to one goal, can one do transfer learning/multi-task learning to gain the benefit of such an expensive step of representation learning with a random policy\\u201d\\n\\n- Our learnt representation is not fixed to one goal and is in fact agnostic to the goal or task reward. Thus, the representations may be used for any goal in subsequent training. The goal is used only when computing the rewards (L2 distances) for training goal-achieving policies.\\n- The representation learning phase is not expensive compared with the policy training phase, as we explained in the previous point.\\n - The representations are learned in a purely unsupervised way without any task information (e.g. goal, reward, a good policy). So it is natural to apply the representations to different tasks without the notion of \\u201ctransfer\\u201d or \\u201cmulti-task\\u201d.\\n\\n\\n\\u201cThe second equation, below the text \\\"we rewrite the inequality as follows\\\" in page 5, is correct?\\u201d\\n\\n-- Yes, it is correct. 
The square is outside the brackets in all of the expressions, so E(X)^2 = E(X)E(X).\\n\\n\\n\\u201cAbout the performance reported in Section 5.1, I wonder if the gap can be closer to zero if more eigenfunctions are used?\\u201d\\n\\n-- We have additional results for larger values of d (50, 100) in Appendix D-1, Figure 6. The gap actually becomes bigger if more eigenfunctions are used: With much larger values of d the problem becomes harder as you need to approximate (the subspace of) more eigenfunctions of the Laplacian.\"}",
"{\"title\": \"Response\", \"comment\": \"We are glad that the reviewer found the paper interesting, well-written, and well-evaluated. We also appreciate the feedback.\\n\\nWith regards to the methods DQN and DDPG, we have updated the paper to include references in the main text and brief descriptions of these algorithms in the experiment details section in Appendix.\\n\\nWe have updated the paper to clarify the reasoning behind the half-half mix for reward shaping. By \\u201cgradient,\\u201d we meant the change in rewards between adjacent states (not the gradient in optimization). When the L2 distance between the representations of the goal state and adjacent states is small the Q-function can fail to provide a significant signal to actually reach the goal state (rather than a state that is just close to the goal). Thus, to better align the shaped reward with the task directive, we use a half-half mix, which clearly draws a boundary between the goal state and its adjacent states (as the sparse reward does) while retaining the structure of the distance-shaped reward.\"}",
"{\"title\": \"well written, interesting approach, well evaluated\", \"review\": \"This works proposes a scalable way of approximating the eigenvectors of the Laplacian in RL by optimizing the graph drawing objective on limited sampled states and pairs of states. The authors empirically show the benefits of their method in two different types of goal achieving task.\", \"pros\": [\"Well written, well structured, an overall enjoyable read.\", \"The related work section appears to be comprehensive and supports the motivations for the presented work.\", \"Clear and rigorous derivations.\", \"The method is evaluated both in terms of how well it is able to approximate the optimal Laplacian-based representations with limited samples compared to baseline models and how well it solves reward shaping in RL.\"], \"cons\": [\"In the experimental section, the methods used to learn the policies, DQN and DDPG, should be briefly explained or at least referenced.\", \"A further discussion on why the authors chose a half-half mix of the L2 distance and sparse reward could be beneficial. The provided explanation (L2 distance doesn't provide enough gradient) is not very convincing nor justified.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"needs improvement\", \"review\": [\"Summary: This paper proposes a method to learn a state representation for RL using the Laplacian. The proposed method aims to generalize previous work, which has only been shown in finite state spaces, to continuous and large state spaces. It goes to approximate the eigenvectors of the Laplacian which is constructed using a uniformly random policy to collect training data. One use-case of the learnt state representation is for reward-shaping that is said to accelerate the training of standard goal-driven RL algorithms.\", \"In overall, the paper is well written and easy to follow. The idea that formulates the problem of approximating the Laplacian engenfunctions as constraint optimization is interesting. I have some following major concerns regarding to the quality and presentation of the paper.\", \"Though the idea of learning a state representation seems interesting and might be of interest within the RL research, the authors have not yet articulated the usefulness of this learnt representation. For larger domains, learning such a representation using a random policy might not be ideal because the random policy can not explore the whole state space efficiently. I wish to see more discussions on this, e.g. transfer learning, multi-task learning etc.\", \"In terms of an application of the learnt representation, reward-shaping looks interesting and promising. However I am concerned about its sample efficiency and comparing experiments. It takes a substantial amount of data generated from a random policy to attain such a reward-shaping function, so the comparisons in Fig.5 are not fair any more in terms of sample efficiency. 
On the other hand, the learnt representation for reward shaping is fixed to one goal; can one do transfer learning/multi-task learning to gain the benefit of such an expensive step of representation learning with a random policy?\", \"Is the second equation, below the text \\\"we rewrite the inequality as follows\\\" on page 5, correct? This derivation looks like E(X^2) = E(X) E(X)?\", \"About the performance reported in Section 5.1, I wonder if the gap can be closer to zero if more eigenfunctions are used?\", \"================\"], \"after_rebuttal\": \"Thanks to the authors for the clarification. I have read the authors' responses to my review. The authors have sufficiently addressed my concerns. I agree with the responses and have decided to change my overall rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The Laplacian in RL: Learning Representations with Efficient Approximations\", \"review\": \"The authors propose a Laplacian in the context of reinforcement learning, together with learning the representations. Overall the authors make a nice contribution. The insight of defining rho to be the stationary distribution of the Markov chain P^pi and connecting this to eq (1) is interesting. Also the definition of the reward function on p.7 in terms of the distance between phi(s_{t+1}) and phi(z_g) looks original. The method is also well illustrated and compared with other methods, showing the efficiency of the proposed method.\", \"on_the_other_hand_i_also_have_further_comments_and_suggestions\": \"- it would be good if the authors could comment on the choice of d. This is in fact a model selection problem. According to which criterion is this selected?\\n\\n- the authors define D(u,v) in eq (4). Why this choice? Is there some intuition or interpretation possible related to this expression?\\n\\n- in (6) beta is called a Lagrange multiplier. Given that a soft constraint (not a hard constraint) is added for the orthonormality constraint it is not a Lagrange multiplier.\\n\\nHow sensitive are the results with respect to the choice of beta in (6) (or epsilon in the eq above)? The orthonormality constraint will only be approximately satisfied. Isn't this a problem?\\n\\nWouldn't it be better in this case to rely on optimization algorithm on Grassmann and Stiefel manifolds?\\n\\n- The authors provide a scalable approach related to section 2 by stochastic optimization. 
Other scalable methods related to kernel spectral clustering (related to subsets/subgraphs and making out-of-sample extensions) were proposed in literature, e.g.\\n\\nMultiway Spectral Clustering with Out-of-Sample Extensions through Weighted Kernel PCA, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2), 335-347, 2010.\\n\\nKernel Spectral Clustering for Big Data Networks, Entropy, Special Issue: Big Data, 15(5), 1567-1586, 2013.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkgEaj05t7 | On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length | [
"Stanisław Jastrzębski",
"Zachary Kenton",
"Nicolas Ballas",
"Asja Fischer",
"Yoshua Bengio",
"Amos Storkey"
] | The training of deep neural networks with Stochastic Gradient Descent (SGD) with a large learning rate or a small batch-size typically ends in flat regions of the weight space, as indicated by small eigenvalues of the Hessian of the training loss. This was found to correlate with a good final generalization performance. In this paper we extend previous work by investigating the curvature of the loss surface along the whole training trajectory, rather than only at the endpoint. We find that initially SGD visits increasingly sharp regions, reaching a maximum sharpness determined by both the learning rate and the batch-size of SGD. At this peak value SGD starts to fail to minimize the loss along directions in the loss surface corresponding to the largest curvature (sharpest directions). To further investigate the effect of these dynamics in the training process, we study a variant of SGD using a reduced learning rate along the sharpest directions which we show can improve training speed while finding both sharper and better generalizing solution, compared to vanilla SGD. Overall, our results show that the SGD dynamics in the subspace of the sharpest directions influence the regions that SGD steers to (where larger learning rate or smaller batch size result in wider regions visited), the overall training speed, and the generalization ability of the final model. | [
"optimization",
"generalization",
"theory of deep learning",
"SGD",
"hessian"
] | https://openreview.net/pdf?id=SkgEaj05t7 | https://openreview.net/forum?id=SkgEaj05t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxqdQ7Je4",
"BkxM5OQS0X",
"HkeS0ToOTm",
"ByxZp6oupm",
"BJe6UaoOam",
"S1gZjQ_5nQ",
"BkenqdEc27",
"HylZPuIoi7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544659825583,
1542957193526,
1542139341224,
1542139320574,
1542139220622,
1541206937149,
1541191827850,
1540216921428
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper792/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper792/Authors"
],
[
"ICLR.cc/2019/Conference/Paper792/Authors"
],
[
"ICLR.cc/2019/Conference/Paper792/Authors"
],
[
"ICLR.cc/2019/Conference/Paper792/Authors"
],
[
"ICLR.cc/2019/Conference/Paper792/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper792/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper792/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers found the paper insightful and the authors explanations well-provided. However the paper would benefit from more systematic empirical evaluation and corresponding theoretical intuition.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good but more study needed\"}",
"{\"title\": \"Revised version: clarifications and additional experiments\", \"comment\": \"We would like to thank again the reviewers for their comments and suggestions for experiments.\", \"summary_of_the_main_changes_to_the_manuscript\": \"* We rephrased parts of the abstract to clarify the motivation and main findings\\n * We clarified parts of the paper based on comments by R1 and R2. Most importantly, we clarified the goal and generality of the NSGD experiments. We also unified the way we refer to the relation between the SGD step and sharpest direction, which \\npreviously was found confusing by R2.\\n * Based on the suggestions by R1 and R2 we run additional experiments using Adam, different initializations and extending results of Sections 3.1 and 4 (NSGD) to sentiment classification task (https://goo.gl/yYM1DG) We included the sentiment classification results in the Appendix, and are open to including other results as well. The results are generally in line with the main text, hopefully highlighting the generality of the main claims. \\n\\nThank you,\\nThe authors\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the positive feedback. We are glad that NSGD experiments were found to be an interesting investigation. Please also find a summary of results of additional experiments we conducted in response to the other reviews here: https://goo.gl/yYM1DG.\", \"edit\": \"We updated now the manuscript and added a summary of the experiments with a more careful analysis of NSGD results on IMDB.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for his valuable comments. Based on yours and other reviewers\\u2019 remarks we run additional experiments using Adam, different initialization schemes and on data from a sentence classification task. We summarized them in https://goo.gl/yYM1DG, and would be happy to add them to the paper. We will address now each point in order.\\n\\n* On generality *\\nOn the whole, our experiments were run on CIFAR-10 and PTB as described in the main text, and CIFAR-100 and Fashion-MNIST as descibed in the Appendix. We also experimented with 4 models (Resnet-32, SimpleCNN, VGG, and LSTM). We therefore believe that our main results describing how the Hessian behaves along the optimization trajectory were supported by a reasonable (compared to similar papers in the domain) set of settings. Please also note that related results were observed in concurrent ICLR submissions [1], [2] and [3]. In particular [2] shows that indeed a measure of curvature (Fisher Information) closely related to the Hessian grows initially very quickly - which confirms some of our observations in 3.1.\\n\\nHaving said that we fully agree that extending the analysis to different initialization and dataset dependence would be desirable. We rerun similar analysis to 3.1 using Adam, different initialization (we compared uniform to normal, with different scaling) and on IMDB (a sentence classification task). These experiment corroborate our main finings.\\n\\n* Extending results to second order methods *\\nWe fully agree that investigating second order methods would be very interesting. Based on your remark as the first step towards this direction we rerun some of the experiments using Adam, see https://goo.gl/yYM1DG. On the whole the main focus of the paper is on SGD, and thus a more extensive study perhaps should left for future work.\\n\\nHessian and regularization. We apologize for the unclear formulation. 
We wanted to say that we used regularization when computing the Hessian (e.g. including L2 terms, or sampling the dropout mask) if this was also done for computing the loss during optimization. In this sense we get a more *realistic* estimate, and this choice has *no bearing on the computation speed*. We will make this more clear in the revised version of the manuscript. \\n\\nWhat does \\u201cSGD matches curvature\\u201d mean? Let us clarify what we mean by the phrase that SGD finds a region where its step matches the curvature. Consider projecting the SGD step onto the directions corresponding to the largest eigenvalues of the Hessian. Our claim is that along these directions the projection is too large to reduce the loss. Visually, the SGD step crosses the minima in the subspace spanned by the sharpest directions. Please also see Fig.1 for an illustration. We agree that the wording is confusing, and we will reformulate this in the revised version. \\n\\n*NSGD as a poor man\\u2019s second order method*\\nWe agree that NSGD is a second order method in the sense that it uses second order information to adapt the step-size. It is different from typical second order methods in that it does not seek to minimize the loss along the sharpest directions. Instead, the NSGD step typically crosses over the minima along the sharpest direction, just like in the case of SGD (in the sense depicted in Fig. 1, and as discussed in the last Appendix). To further clarify: the goal of this section was to investigate the importance of SGD dynamics along the sharpest directions. We did not seek to prove NSGD is a better optimizer than other second order methods, which is why we were inadvertently brief in the discussion about how it differs from other second order methods. We will clarify all of this and in particular note that NSGD is a specific form of a second order method. \\n\\n* Other points *\\nThank you for pointing us to Yao et al. We will add a discussion of Yao et al. 
to \\u2018Related work\\u2019.\\n\\nYou mentioned that Fig. 1 is not useful. In general, we would like to keep an intuitive depiction of the main findings. Please let us know if you have any suggestions on how to improve Fig. 1. \\n---\\n\\nThank you again for your valuable comments, and we will update the manuscript shortly. \\n\\n[1] Gradient Descent Happens in a Tiny Subspace, https://openreview.net/forum?id=ByeTHsAqtX\\n[2] Critical Learning Periods, https://openreview.net/forum?id=BkeStsCcKQ&noteId=BkeStsCcKQ\\n[3] A Walk with SGD: How SGD Explores Regions of Deep Network Loss?, https://openreview.net/forum?id=B1l6e3RcF7&noteId=BylzRFgP2Q\", \"edit\": \"We have now updated the manuscript and added a summary of the experiments with a more careful analysis of the NSGD results on IMDB.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the valuable comments. The biggest concerns raised are the generalizability of the experimental results and the practical applicability of the analysed SGD variant, NSGD, due to the use of second order information (the top eigenvectors of the Hessian).\\n\\n* Proposing a practical optimizer is not the main goal of the paper *\\nFirst we would like to stress that proposing a practical optimizer was not the goal of the paper. Instead, our goal was to study the Hessian of the training loss along the optimization trajectory, and the relation of the SGD step to the sharpest directions. Experiments on NSGD were run to investigate the importance of this relation for optimization and generalization of neural networks. We agree that some of the formulations (like the opening sentence of Sec.4, or part of the abstract) were confusing in this respect, and we will make it more clear in the revised version. \\n\\nBased on the remarks we run additional experiments using Adam, different initialization schemes, and data from a sentence classification task (including experiments using NSGD). We summarized them in https://goo.gl/yYM1DG, and would be happy to add them to the paper based on the reviewers feedback.\\n\\n*Generality of results*\\nAnother key concern raised is about generality of the results. On the whole, our experiments were run on CIFAR-10 and PTB (results shown in the main text), CIFAR-100 and Fashion-MNIST (results shown in the Appendix). We also experimented with 4 models in total (Resnet-32, SimpleCNN, VGG, and LSTM). We however agree that extending the experiments to different datasets, network architectures and training settings is desirable. Based on the remarks we rerun some of the experiments using different initializations, and for a new sentence classification task.\\n\\nNSGD experiments were conducted on Fashion-MNIST, Cifar-10, Cifar-100 using SimpleCNN and ResNet32 models. 
The main purpose of these experiments was to show that behavior along sharpest directions can be important for training speed and generalization. We acknowledged in the text that NSGD results might be dataset dependent because the structure of the Hessian is dataset dependent (as shown for instance by Sagun et al, https://arxiv.org/abs/1706.04454). We will make it clearer in the revised version of the manuscript. We also rerun NSGD experiments on a text classification dataset.\\n\\nFurthermore, related results were observed in concurrent ICLR submissions [1], [2], and [3], which further supports generalizability of the results. [1] shows that indeed gradient step is highly aligned with the Hessian from the beginning (which is one of the observations discussed in 3.2). [2] shows that indeed a measure of curvature (Fisher Information Metric) closely related to the Hessian grows initially very quickly. Finally, [3] shows a related phenomena that SGD starts to oscillate early on in training, especially for a large batch-size. [2] and [3] are consistent with our results in 3.1.\\n\\n* NSGD practicality *\\nFinally, we agree that NSGD might be an impractical optimizer, because of its use of second order information. Note however, NSGDs overhead incurred by computing the top eigenvectors of the Hessian is comparable to that of methods like K-FAC, which are considered practical. We will clarify the writing. We also run experiments like in Sec. 3.1 with Adam as an optimizer as a first step towards understanding how the analysis extends to methods adapting to the curvature. \\n\\n--\", \"all_the_aforementioned_additional_results_are_summarized_in_https\": \"//goo.gl/yYM1DG. Do you have any other experiments in mind that you would like us to run?\\n\\nThank you again for your comments, and we will update the manuscript shortly. 
\\n\\n[1] Gradient Descent Happens in a Tiny Subspace, https://openreview.net/forum?id=ByeTHsAqtX\\n[2] Critical Learning Periods, https://openreview.net/forum?id=BkeStsCcKQ¬eId=BkeStsCcKQ\\n[3] A Walk with SGD: How SGD Explores Regions of Deep Network Loss?, https://openreview.net/forum?id=B1l6e3RcF7¬eId=BylzRFgP2Q\", \"edit\": \"We updated now the manuscript and added a summary of the experiments with a more careful analysis of NSGD results on IMDB.\"}",
"{\"title\": \"Good idea. Not convinced about generalizability of results.\", \"review\": \"Update after author response: I am changing my rating from 4 to 6 in light of the clarification and new experiments.\\n\\n-------\\nIn this paper the authors study the relationship between the SGD step size and the curvature of the loss surface, empirically showing that: 1) SGD is guided towards sharp regions of the loss surface at the start especially with a large learning rate or a small batch size. 2) Loss increases on average when taking a SGD step in the sharpest directions. 3) Modifying the SGD step size in the sharp directions (for example removing its component in the sharpest direction), can lead to substantial changes in both the quality and the local landscape of the minima (for the example mentioned, leading to a better and sharper minima). Motivated by these observations, the authors propose a variant of SGD that leads to better performance on the datasets considered.\\n\\nDeep learning theory is a very important frontier for machine learning and one that\\u2019s needed to make the practice be guided more by the foundational principles than incessant tweaks. The paper makes some very interesting observations and uses those insights to improve the widely used SGD. However, I have a few concerns which leave me unconvinced about the impact of the contributions in the paper. My biggest problem is the use of second order information in the algorithm which makes the optimization process computationally cumbersome, and raises the question as to why might this approach be preferable to any other second order approach (the authors touch on Newton method in the appendix but the discussion far from settles the matter). Similar questions arise in considering the merit of the proposed methods in comparison to a host of other well-studied augmentations to SGD like momentum, Adam or AdaGrad. 
The quality of presentation is also a problem, and both the organization of the main matter as well as of the figures can use some polishing. The latter specifically sometimes lacked legends (Fig. 3 and 4), and some other times had legends covering a quarter of the plot (Fig. 5). Lastly, even though the claims sound theoretical, they are not derived from any set of first principles but come from observations on a few datasets. While this may after all be how SGD behaves in general, currently the paper doesn\\u2019t provide any evidence to believe that.\", \"minor_issues\": \"\\u201cwithe\\u201d (page 2, spelling), \\u201c\\\\alpha = 0.5, 1, 2 corresponding to red, green, and blue\\u201d (page 4, I believe it should be \\u201cblue, green and red\\u201d).\\n\\nIn summary, even though I liked what the paper set out to do, I am not convinced on the generalizability of these results and subsequently the rationale for using the proposed method over other competing options. A revised version of the paper with either validation on more datasets or sound theory generalizing the results to some extent would make for a much nicer contribution.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"see review\", \"review\": \"The paper discusses connections between the properties of DNN loss surfaces and the step length SGD algorithms take, a timely topic. On the whole, reasonably well done, with some interesting observations.\\n\\nIt makes several claims, most notably that there is an initial regime where SGD visits increasingly sharp regions of the loss surface, followed by a regime where the loss surface gets smoother. Useful to know, and characterized moderately well.\\n\\nA weakness is that the generality of that claim is not made clear. Like many papers in the area, it is an observation, the realm of which is not clarified. E.g., what properties of the neural network or data does it depend on. Also not clarified is how this depends on initialization, etc.\\n\\nThe evaluation should be more systematic, as it is hard to tell how general is the claims of the paper as well as how they depend on implementation details.\\n \\nThe discussion of Hessian directions ignores very relevant work by Yao et al (https://arxiv.org/abs/1802.08241 and follow up).\\n\\nThe first figure in Fig 1 is probably misleading, and probably not worth having, the latter two are what is measured and thus more interesting.\\n\\nThe obvious conclusion from the poor conditioning is that methods designed to addressed poor conditioning, i.e., second order methods, should be considered. Those should have a complementary dynamics to what is discussed. This is what is the elephant in the room when you talk about steering towards or away from regions whose curvature matches the SGD step. \\n\\nI don't know what it means to say \\\"Where applicable, the Hessian is estimated with regularization applied\\\" Is this to speed up computation, why doesn't this change the loss surface, etc. 
If you are not measuring Hessian information precisely, then all the claims of the paper fall apart.\\n\\nSeveral times claims like \\\"SGD reaches a region in which the SGD step matches ...\\\" Of course, the energy surface changes with training time, so it is a little unclear what is being said.\\n\\nThe main method Nudged-SGD sounds like a poor-mans second order method. Why not describe it as such (in more than a footnote and appendix), rather than introducing a new acronym. I don't know that I believe the \\\"key design principle\\\" in the appendix for second order methods. Second order methods rotate and stretch to take a locally-correct step length, and this method sounds like it is doing a poor mans version of that. There is a good question as to whether the \\\"thresholding\\\" into large and small that NSGD is doing causes it to do something very different, but that isn't really evaluated.\\n\\nAveraging over two random seeds is not a lot.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Great analyses about the relationship between the convergence/generalization and the update on largest eigenvectors of Hessian of the empirical loss.\", \"review\": \"Updated rating after author response from 8 to 7 because I agree that Figure 1 and some discussions were confusing in the original manuscript.\\n--------------------------------------------------------------------------\\n\\nThis paper investigates the relationship between the eigenvectors of the Hessian. This paper investigates characteristics of Hessian of the empirical losses of DNNs through comprehensive experiments. These experiments showed many important insights, 1) the top-K eigenvalues become bigger in the early stage, and decrease in later stage. 2) Bigger SGD steps and smaller batch-size leads to smaller and earlier peak of eigenvalues. 3) The sharpest direction update does not contribute to the loss value decrease in the normal step size (or bigger). From these analyses, this paper proposes to decrease the SGD step length on top-K eigenvectors for speeding up the convergence. Experimental results showed that the proposed method could converge to local minima in a fewer epoch and obtain better result, which means higher test accuracy.\\n\\nThis paper is well-written and well-organized. Findings about eigenvalues and these relationship between the SGD step length are very impressive. Although the step length adjustment on the top-K eigenvector directions are not realistic solution for improving the current SGD-based optimization on DNNs due to heavy computational cost, I think these findings and insights are very helpful to ICLR and other ML communities.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HklVTi09tm | Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks | [
"Ruoshi Liu",
"Michael M. Norton",
"Seth Fraden",
"Pengyu Hong"
] | Active matter consists of active agents which transform energy extracted from surroundings into momentum, producing a variety of collective phenomena. A model, synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase. Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules creating motile topological defects and turbulent fluid flows. Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified. Measuring defects dynamics can yield fundamental insights into active nematics, a class of materials that include bacterial films and animal cells. Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate. It is expected to significantly increase the amount of video data that can be processed. | [
"active nematics",
"topological defects",
"protein motors",
"convolutional neural networks",
"active agents",
"energy",
"surroundings",
"momentum",
"variety"
] | https://openreview.net/pdf?id=HklVTi09tm | https://openreview.net/forum?id=HklVTi09tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BklkKs7tgV",
"HJgMRaSyTm",
"S1gsYybA3X",
"HygtIiI53Q"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545317238693,
1541524937783,
1541439363363,
1541200720630
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper791/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper791/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper791/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper791/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers raised a number of major concerns including the incremental novelty of the proposed (if any) and insufficient and unconvincing experimental evaluation presented. The authors did not provide any rebuttal. Hence, I cannot suggest this paper for presentation at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"An application of YOLO to detect topological defects in 2D active nematics.\", \"review\": \"Summary:\\nThis paper applies deep learning model YOLO to detect topological defects in 2D active nematics. Experimental results show that YOLO is robust and accurate, which outperforms traditional state-of-the-art defect detection methods significantly.\", \"pros\": [\"Detecting defects in 2D active nematics is an important task to study.\", \"YOLO is effective in object detection and shows good results for defect detection.\", \"The experiment shows that YOLO appears to outperform traditional state-of-the-art defect detection methods.\"], \"cons\": [\"The technical contribution seems not enough. YOLO is state-of-the-art object detection method and has been widely used. However, this paper directly applies YOLO for this task, while few variants have been specifically designed or modified for the defect detection tasks.\", \"The experiments may miss some details. For example, what is the traditional method used for comparison? What is the difference between traditional method and YOLO? The paper should provide some explanations and introductions.\", \"Since the training data set is imbalanced, does the proposed model utilize some strategy to overcome this problem?\", \"The detection rate comparison is not convincing. As shown in the experiments, traditional model and YOLO is operated by different machines, therefore, the detection rate comparison is not convincing.\", \"The paper contains some minors. For example, in table 1 and table 2, +1/2 defects should be -1/2.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A nice application of machine learning without much insight to machine learning practitioners.\", \"review\": \"In this paper the authors apply methods developed in computer vision towards the identification of topological defects in nematic liquid crystals. Typically, defects are identified using a costly algorithm that is based on numerically computing the winding number at different locations in the image to identify defects. The authors demonstrate that a deep learning approach offers improvement to both the identification accuracy and rate at which defects can be identified. Finally, the authors do some work investigating the limitations of the model and show that breakdown occurs near the edge of the field of view of the microscope. They show that this also happens with a conventional approach.\\n\\nOverall, this seemed like a nice application of machine learning to a subject that has received significant attention from soft matter community. The results appear to be carefully presented and the analysis seems solid. However, it does not seem to me that ICLR is a particularly appropriate venue for this work and it is unclear exactly what this paper adds to a discussion on machine learning. While there is nothing wrong with taking an existing architecture (YOLO) and showing that it can successfully be applied to another domain, it does limit the machine learning novelty. It also does not seem as though the authors pushed particularly hard in this direction. I would have been interested, for example, in seeing some analysis of the features learned by the architecture trained to classify defects appropriately.\\n\\nI would encourage the authors to either submit this work to a journal closer to soft matter or to do some work to determine what insights and lessons might help machine learning researchers working on other applied projects. The closest I got from the paper was the discussion of bounding box sizes and subclassification in section 3. 
It would have been nice to see some work discussing the dependence on this choice and what physical insights one might be able to glean from it.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Application of YOLO to images of specific types\", \"review\": \"This review will unfortunately be very short because I am afraid there is not much to say about this well written paper, which seems to have been sent to the wrong conference. The scientific problem is interesting, namely the detection of topological artifacts in images showing biological phenomena (which I don\\u2019t know much about). The relevant literature here is basically literature from this field, which is not machine learning and not even image processing. The contribution of the paper, in terms of machine learning, is to apply a well known neural model (YOLO) to detect bounding boxes of objects in images, which are very specific. The contribution here does not lie in machine learning, I am afraid.\\n\\nThis is thus a purely experimental paper on a single application, namely object detection in specific images. Unfortunately the experiments are not convincing. The results are validated against a \\u201ctraditional method\\u201d, which has never been cited, so we do not know what it is.\\n\\nThe performance gain obtained with YOLO seems to be minor, although the difference in time complexity is quite enormous (to the advantage of YOLO).\\n\\nThe contribution is thus minor and for me does not justify publication at ICLR.\\n\\nThe grant number is mentioned in the acknowledgments, which seems to violate double blind policy.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HJMXTsCqYQ | Constrained Bayesian Optimization for Automatic Chemical Design | [
"Ryan-Rhys Griffiths",
"José Miguel Hernández-Lobato"
] | Automatic Chemical Design provides a framework for generating novel molecules with optimized molecular properties. The current model suffers from the pathology that it tends to produce invalid molecular structures. By reformulating the search procedure as a constrained Bayesian optimization problem, we showcase improvements in both the validity and quality of the generated molecules. We demonstrate that the model consistently produces novel molecules ranking above the 90th percentile of the distribution over training set scores across a range of objective functions. Importantly, our method suffers no degradation in the complexity or the diversity of the generated molecules. | [
"Bayesian Optimization",
"Generative Models"
] | https://openreview.net/pdf?id=HJMXTsCqYQ | https://openreview.net/forum?id=HJMXTsCqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rylyDskZeN",
"S1gUeQYchm",
"rJlRry6x2Q",
"Syl_iuMYom"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544776535091,
1541210861535,
1540570950266,
1540069535967
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper790/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper790/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper790/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper790/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes to use constrained Bayesian optimization to improve the chemical compound generation. Unfortunately, the reviewers raises a range of critical issues which are not responded by authors' rebuttal.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"clear rejection; no rebuttal\"}",
"{\"title\": \"Insufficient presentations and evidences to conclude the improvement\", \"review\": \"Summary:\\nThis paper proposes a novel method for generating novel molecules with some targeted properties. Many studies on how to generate chemically valid molecular graphs have been done, but it is still an open problem due to the essential difficulty of generating discrete structures from any continuous latent space. From this motivation, the 'constrained' Bayesian optimization (BO) is applied and analyzed. Posing 'constraints' on the validity is realized by probability-weighting onto the expected improvement scores in BO. The 'validity' probability is learned beforehand by Bayesian neural nets in a supervised way. As empirical evaluations, two case studies are presented, and quality improvements of generated molecules are observed.\", \"comment\": [\"The presentation would be too plain to find what parts are novel contributions. Every part of presentations seems originated from some past studies at the first glance.\", \"In this paper, how to pose the validity 'constraint' onto Bayesian optimization would be the main concern. Thus if it is acquired through supervised learning of Bayesian neural nets in advance, that part should be explained more in details. How do we collect or setup the training data for that part? Is it valid to apply such trained models to the probability weighting P(C(m)) on EI criteria in the test phase? Any information leakage does not happen?\", \"The implementations of constrained BO is just directly borrowed from Gelbart, 2015 including parallel BO with kriging believer heuristics? The description on the method is totally omitted and would need to be included.\", \"How training of Bayesian neural nets for 'Experiment II' are performed? What training datasets are used? 
Is it the same as those for 'Experiment I' even though the target and problem are very different?\"], \"pros\": [\"a constrained Bayesian optimization with weighing EI by the probabilities from pre-trained Bayesian neural nets applied to the hot topic of valid molecule generations.\", \"Experiments observe the quality improvements\"], \"cons\": [\"unclear and insufficient descriptions of the method and the problem\", \"novel contributions are unclear\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novelty is limited. No comparison with SOTA models\", \"review\": \"This paper proposes to improve the chemical compound generation by the Bayes optimization strategy, not by the new models.\\nThe main proposal is to use the acquisition that switches the function based on the violation of a constraint, estimated via a BNN. \\n\\nI understand that the objective function, J_{comp}^{QED} is newly developed by the authors, but not intensively examined in the experiments. \\nThe EIC, and the switching acquisition function is developed by (Schonlau+ 1998; Gelbard, 2015). \\nSo I judge the technical novelty is somewhat limited. \\n\\nIt is unfortunate that the paper lacks intensive experimental comparisons with \\\"model assumption approaches\\\". \\nMy concern is that the baseline is rely on the SMILE strings. \\nIt is well known that the string-based generators are much weaker than the graph-based generators. \\nIn fact, the baseline model is referred as \\\"CVAE\\\" in (Jing+, 2018) and showed very low scores against other models. \\n\\nThus, we cannot tell that these graph-based, \\\"model-assumption\\\" approaches are truly degraded in terms of the validity and the variety of generated molecules, \\ncompared to those generated by the proposed method. \\nIn that sense, preferable experimental setting is that to \\ntest whether the constrained Bayesian optimization can boost the performance of the graph-based SOTA models. \\n\\n\\n+ Showing that we can improve the validity the modification of the acquisition functions\\n- Technical novelty is limited. \\n- No comparison with SOTA models in \\\"graph-based, model assumption approaches\\\".\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"lack of empirical evidence on being able to do sequential selections\", \"review\": \"The authors proposed a new method improving a previous Bayesian optimization approach for chemical design (Gomez-Bombarelli et al., 2016b) by addressing the problem that data points need to have valid molecular structures. The main contribution is a constrained Bayesian optimization approach that take into account the constraint on the probability of being valid.\\n\\nMy biggest concern of this paper is that it is not really using sequential evaluations to do automatic design of experiments on molecules. The experiments do not seem to fully support the proposed approach in terms of being able to adaptively do sequential evaluations.\", \"detailed_comments\": \"1. The term inside the expectation of the EI criterion in Sec 2.5 should be max(0, f(m)-\\\\eta) rather than max(0, \\\\eta - f(m)).\\n2. The EIC criterion the authors adopted uses Pr(C(m)) if the constraint is violated everywhere with high probability. It seems like Pr(C(m)) does not have the ability to explore regions with high uncertainty. How does this approach compare to Bayesian level set estimation approaches like \\nB. Bryan, R. C. Nichol, C. R. Genovese, J. Schneider, C. J. Miller, and L. Wasserman, \\u201cActive learning for identifying function threshold boundaries,\\u201d in NIPS, 2006 \\nI. Bogunovic, J. Scarlett, A. Krause, and V. Cevher, \\u201cTruncated variance reduction: A unified approach to bayesian optimization and level-set estimation,\\u201d in NIPS, 2016.\\n3. It would be good to explain in more detail how a constraint is labeled to be valid or invalid. \\n4. What is the space of m in Sec 2.3 and the space of m in Sec 2.4? It seems like there is a discrepancy: Sec 2.3 is talking about m as a molecule but Sec 2.4 describes f as a function on the latent variable? It would be good to be clear about it.\\n5. 
In the experiment, I think it is very necessary to show the effectiveness of the constrained BO approach in terms of how the performance varies as a function of the number of evaluations on the constraint and the function. The current empirical results only support the claim that the constrained BO approach is able to output more valid latent variables and the function values from constrained BO is higher than vanilla BO under the same number of training data. It is also strange why there is a set of test data. \\n\\n\\nTypos/format:\\n1. citation format problems across the paper, e.g. \\n\\\"Gomez-Bombarelli et al. (Gomez-Bombarelli et al., 2016b) presented Automatic Chem\\\"\\n\\\"SMILES strings Weininger (1988) are a means of representing molecules as a character sequence.\\\"\\nIt's likely a problem of misuse of \\\\cite, \\\\citep.\\n2. no period at the end of Sec 2.4\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1MQ6jCcK7 | ChoiceNet: Robust Learning by Revealing Output Correlations | [
"Sungjoon Choi",
"Sanghoon Hong",
"Kyungjae Lee",
"Sungbin Lim"
] | In this paper, we focus on the supervised learning problem with corrupt training data. We assume that the training dataset is generated from a mixture of a target distribution and other unknown distributions. We estimate the quality of each data by revealing the correlation between the generated distribution and the target distribution. To this end, we present a novel framework referred to here as ChoiceNet that can robustly infer the target distribution in the presence of inconsistent data. We demonstrate that the proposed framework is applicable to both classification and regression tasks. Particularly, ChoiceNet is evaluated in comprehensive experiments, where we show that it constantly outperforms existing baseline methods in the handling of noisy data in synthetic regression tasks as well as behavior cloning problems. In the classification tasks, we apply the proposed method to the MNIST and CIFAR-10 datasets and it shows superior performances in terms of robustness to different types of noisy labels. | [
"Robust Deep Learning",
"weakly supervised learning"
] | https://openreview.net/pdf?id=S1MQ6jCcK7 | https://openreview.net/forum?id=S1MQ6jCcK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1g-3gNfe4",
"H1lLgc8YCm",
"B1xx0YLtAm",
"BylM8YLYRQ",
"B1eqVtIKRm",
"Syxsn_8KRQ",
"rklBiO8YAQ",
"BkxWU_UtR7",
"B1eAxJHGa7",
"ByeZ3mlo3m",
"SyxB2-Bd3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544859817209,
1543231981903,
1543231944424,
1543231818463,
1543231794178,
1543231666784,
1543231645164,
1543231561321,
1541717749704,
1541239721222,
1541063085009
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper789/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/Authors"
],
[
"ICLR.cc/2019/Conference/Paper789/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper789/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper789/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresses an interesting problem (learning in the presence of noisy labels) and provides extensive experiments. However, while the experiments in some sense cover a good deal of ground, reviewers raised issues with their quality, especially concerning baselines and depth (in terms of realism of the data). The authors provided many additional experiments during the rebuttal, but the reviewers did not find them sufficiently convincing.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"good direction but experiments lacking in some respects\"}",
"{\"title\": \"Modification on related work + more experiments (1/2)\", \"comment\": \"We appreciate the reviewer for the valuable reviews.\\n\\n1. Related work: We admit that the current manuscript lacks comprehensive curation of related work. We rewrote the whole related work section and categorized existing work into four groups and try to compare them in a more principled way. Please refer to the revised manuscript. \\n\\n2. Experiments: Following the review, we conducted three additional experiments: a) more baselines (MentorNet and VAT), b) using both symmetric and asymmetric noisy data, and c) using an NLP dataset.\\n\\na). More baselines to current CIFAR-10 experiments: We implemented MentorNet [1] and VAT [2] to better evaluate the performance of the proposed method on current CIFAR-10 setting. \\n\\ncorruption rate 20% 50% 80%\\n----------------------------------------------\\nMentorNet PD 64.0% 49.0% 21.4%\\nMentorNet DD 62.0% 43.1% 21.8%\\nVAT 82.0% 71.6% 16.9%\\n----------------------------------------------\\nCN 90.3% 84.6% 65.2%\\nCN+Mixup 92.3% 87.9% 75.4% \\n\\nIn all cases, the proposed methods (CN and CN+Mixup) outperforms the baselines. \\n\\n\\nb) Asymmetric noise experiments following Co-teaching [3]. We implement the 9-layer CNN architecture following VAT [2] and Co-teaching [3] to fairly evaluate the performance of CIFAR10 experiments with both symmetric and asymmetric noise settings: Pair-45%, Symmetry-50%, and Symmetry-20%, using the authors\\u2019 implementations available on github. We also set other configurations such as having no data augmentation and activation functions to be the same as [3]. 
\\n\\n(Single-run, last validation accuracy)\\n Pair-45% sym-50% sym-20% \\n------------------------------------------\\nChoiceNet 70.3% 85.2% 91.0%\\n------------------------------------------\\nMentorNet 58.14% 71.10% 80.76%\\nCo-teaching 72.62% 74.02% 82.32%\\nF-correction 6.61% 59.83% 84.55%\\n\\nThe results of MentorNet [1], Co-teaching [3], and F-correction [4] are copied from [3]. While our proposed method outperforms all compared methods on the symmetric noise settings, it shows inferior performance to Co-teaching on the asymmetric (pair) setting. This exposes a weakness of the proposed method: our mixture distribution failed to correctly infer the dominant distribution, which is a limitation of the mixture-based approach. However, we would like to note that Co-teaching [3] is complementary to our method: one can combine the two methods by using two ChoiceNets and updating each network via Co-teaching. \\n\\nc) Natural language processing experiments: We used the Large Movie Review Dataset, consisting of 25,000 movie reviews for training and 25,000 reviews for testing. Each movie review is mapped to a 128-dimensional feature vector using feed-forward Neural-Net Language Models [5], and we tested the robustness of the proposed method, mix-up, VAT, and a naive MLP baseline by randomly flipping the labels. \\n\\nrandom flip rate 0% 10% 20% 30% 40%\\n-------------------------------------------------------------\\nChoiceNet 79.43% 79.50% 78.66% 77.10% 73.98%\\nMix-up 79.77% 78.73% 77.58% 75.85% 69.63%\\nBaseline (MLP) 79.04% 77.88% 75.70% 69.05% 62.83%\\nVAT 76.40% 72.50% 69.20% 65.20% 58.30%\"}",
"{\"title\": \"Modification on related work + more experiments (2/2)\", \"comment\": \"3. Motivation:\\n\\nAs reviewers 1 and 3 pointed out, the manuscript requires more explanation regarding the proposed methods, the Cholesky transform and the MCDN block. Let us briefly explain the motivations (backgrounds) and the practical meanings of the proposed methods. We will add them to the revised version. (We didn\\u2019t modify this part of the manuscript yet).\\n\\n1. To handle noisy data, we estimate the quality of each data point using the notion of correlation between output features. Specifically, we model the data as collected from a mixture of a target distribution p(y|x) and other irrelevant distributions q(y|x). We quantify the irrelevancy (or independence) by the correlation \\u03c1 between p(y|x) and q(y|x), where \\u03c1 \\u2208 [\\u22121, 1]. Intuitively speaking, corrupted data will be modeled as collected from a class of q(y|x) with small \\u03c1, e.g., \\u03c1 = 0.\\n\\n2. We model the target conditional distribution p(y|x) using a parametrized distribution with expected measurement variance \\u03c4^{-1}, i.e., p(y|x; \\u03b8) = N(y; f_\\u03b8(x), \\u03c4^{-1}), where f_\\u03b8(\\u00b7) is a neural network and \\u03b8 is a set of parameters including \\u00b5_W and \\u03a3_W. The Cholesky transform is proposed to construct a \\u03c1-correlated conditional distribution using \\u03b8 and \\u03c1. In other words, the correlation between q(y|x; \\u03b8) (constructed from the Cholesky transform) and p(y|x; \\u03b8) is \\u03c1.\\n\\n3. Now, we can construct a mixture model of the target distribution, p(y|x; \\u03b8), and other distributions, q_\\u03c1(y|x; \\u03b8), parametrized by \\u03b8 and the correlation parameter \\u03c1. However, we still need to assess the quality (correlation) of each data point. 
Since the correlation information is not explicitly given, we model the correlation of each data point as a function of the input x, i.e., \\u03c1_\\u03c6(x), parametrized by \\u03c6, and jointly optimize \\u03c6 and \\u03b8 using a mixture distribution. The mixture of correlated density network (MCDN) block is proposed for this purpose.\\n\\n\\n[1] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n[2] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\\n[3] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, M. Sugiyama, \\u201cCo-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels\\u201d, NIPS, 2018. \\n[4] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n[5] Y. Bengio, R. Ducharme, P. Vincent, C. Jauvin. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137-1155, 2003.\"}",
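For intuition, the ρ-correlated construction described in the response above can be illustrated with the standard 2x2 Cholesky trick: given z ~ N(0, 1), the sample ρ·z + sqrt(1-ρ²)·ε has correlation exactly ρ with z. This is our own illustrative NumPy sketch, not the authors' code; all names here are hypothetical.

```python
import numpy as np

def rho_correlated(z, rho, rng):
    """Given samples z from N(0, 1), return z' with corr(z, z') = rho,
    using the Cholesky factor [[1, 0], [rho, sqrt(1 - rho^2)]] of the
    2x2 correlation matrix [[1, rho], [rho, 1]]."""
    eps = rng.standard_normal(z.shape)  # independent N(0, 1) noise
    return rho * z + np.sqrt(1.0 - rho ** 2) * eps

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
for rho in (0.0, 0.5, 0.9):
    zc = rho_correlated(z, rho, rng)
    # empirical correlation approaches rho as the sample grows
    print(rho, float(np.corrcoef(z, zc)[0, 1]))
```

With ρ = 0 the constructed distribution is independent of the target one, matching the intuition in the thread that fully corrupted data carries no information about p(y|x).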
"{\"title\": \"More experiments + modified related work + limitations (1/2)\", \"comment\": \"We thank the reviewer for the valuable comments, especially the suggestions regarding the related work. We admit that the current explanation of the proposed method is not straightforward and has room for improvement. The following summarizes the motivation of the proposed method; we will revise the manuscript so that readers can understand the concept more easily. (We didn\\u2019t modify this part of the manuscript yet).\\n\\n1. To handle noisy data, we estimate the quality of each data point using the notion of correlation between output features. Specifically, we model the data as collected from a mixture of a target distribution p(y|x) and other irrelevant distributions q(y|x). We quantify the irrelevancy (or independence) by the correlation \\u03c1 between p(y|x) and q(y|x), where \\u03c1 \\u2208 [\\u22121, 1]. Intuitively speaking, corrupted data will be modeled as collected from a class of q(y|x) with small \\u03c1, e.g., \\u03c1 = 0.\\n\\n2. We model the target conditional distribution p(y|x) using a parametrized distribution with expected measurement variance \\u03c4^{-1}, i.e., p(y|x; \\u03b8) = N(y; f_\\u03b8(x), \\u03c4^{-1}), where f_\\u03b8(\\u00b7) is a neural network and \\u03b8 is a set of parameters including \\u00b5_W and \\u03a3_W. The Cholesky transform is proposed to construct a \\u03c1-correlated conditional distribution using \\u03b8 and \\u03c1. In other words, the correlation between q(y|x; \\u03b8) (constructed from the Cholesky transform) and p(y|x; \\u03b8) is \\u03c1.\\n\\n3. Now, we can construct a mixture model of the target distribution, p(y|x; \\u03b8), and other distributions, q_\\u03c1(y|x; \\u03b8), parametrized by \\u03b8 and the correlation parameter \\u03c1. However, we still need to assess the quality (correlation) of each data point. 
Since the correlation information is not explicitly given, we model the correlation of each data point as a function of the input x, i.e., \\u03c1_\\u03c6(x), parametrized by \\u03c6, and jointly optimize \\u03c6 and \\u03b8 using a mixture distribution. The mixture of correlated density network (MCDN) block is proposed for this purpose.\\n\\nFollowing the reviewer\\u2019s comments, we conducted additional regression experiments using the Boston housing price dataset. We checked the robustness of the proposed method and compared it with standard multi-layer perceptrons with four different types of loss functions: the standard L2 loss, the L1 loss, which is known to be robust to outliers, a robust loss function proposed in [1], and a leaky robust loss extending [1]. We implemented the leaky version of [1] because the original loss function with Tukey\\u2019s biweight function discards the instances whose residuals exceed a certain threshold. \\n\\nOutlier rate \\t0% 5% 10% 15% 20% 30% 40% 50%\\n-----------------------------------------------------------------------\\nChoiceNet 3.29 3.71 3.99 4.45 4.77 5.94 6.80 9.00\\nL2 loss 3.22 4.61 5.97 6.65 7.51 9.04 9.88 10.92\\nL1 loss 3.26 4.36 5.72 6.61 7.16 8.65 9.69 10.33\\nRobust loss 4.28 4.63 6.36 6.59 8.08 10.54 10.94 11.96\\nLeaky Robust 3.36 4.51 5.71 6.54 7.08 8.67 9.68 10.46\\n\\nWe also modified the naming convention in the experiment section, e.g., ConvNet+CN+Mixup. We believe this naming convention helps convey the benefit of the proposed method: it can be combined with other methods for achieving robustness, such as mixup [2] or Co-teaching [3], as these methods are compatible with ours. \\nWe rewrote the related work section to better categorize existing and current studies and added [4] to the related work.\"}",
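For readers unfamiliar with the robust losses compared in the response above, here is a minimal sketch of Tukey's biweight loss and a "leaky" variant of the kind described (our own illustrative implementation; the tuning constant c = 4.685 is the conventional choice and the leak slope is an assumed value, neither is taken from the thread):

```python
import numpy as np

def tukey_biweight(r, c=4.685):
    """Tukey's biweight loss: grows roughly quadratically for small
    residuals but saturates at c^2/6 for |r| > c, so large-residual
    outliers stop contributing gradient entirely."""
    r = np.asarray(r, dtype=float)
    out = np.full(r.shape, c * c / 6.0)  # saturated value outside [-c, c]
    inside = np.abs(r) <= c
    u = r[inside] / c
    out[inside] = (c * c / 6.0) * (1.0 - (1.0 - u ** 2) ** 3)
    return out

def leaky_tukey(r, c=4.685, slope=0.01):
    """Leaky variant: a small linear term keeps a nonzero gradient for
    clipped residuals instead of discarding those instances."""
    return tukey_biweight(r, c) + slope * np.abs(np.asarray(r, dtype=float))

residuals = np.array([0.0, 1.0, 4.685, 100.0])
print(tukey_biweight(residuals))  # last two entries saturate at c^2/6
```

The saturation is exactly why the plain robust loss can discard genuinely informative points, which motivates the leaky extension the authors describe.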
"{\"title\": \"More experiments + modified related work + limitations (2/2)\", \"comment\": \"We conducted additional experiments based on the other reviews, where we observe that the proposed method shows superior performance under symmetric noise but is vulnerable to asymmetric noise on CIFAR-10, following the settings in [3]. We implemented the 9-layer CNN architecture following VAT [5] and Co-teaching [3] to fairly evaluate the performance of the CIFAR-10 experiments with both symmetric and asymmetric noise settings (Pair-45%, Symmetry-50%, and Symmetry-20%), using the authors\\u2019 implementations available on GitHub. Pair-45% flips 45% of each label to the next label, e.g., randomly flipping 45% of label 1 to label 2 and of label 2 to label 3. On the other hand, Symmetry-50% randomly reassigns 50% of each label to the other labels uniformly; for example, it flips the labels of a random 50% of the instances whose original label is 1 to a random label sampled from 2-10.\\n\\nWe set other configurations, such as the network topology and activation functions, to be the same as in [3]. \\n\\n(Single-run, last validation accuracy)\\n Pair-45% sym-50% sym-20% \\n------------------------------------------\\nChoiceNet 70.3% 85.2% 91.0%\\n------------------------------------------\\nMentorNet 58.14% 71.10% 80.76%\\nCo-teaching 72.62% 74.02% 82.32%\\nF-correction 6.61% 59.83% 84.55%\\n\\nThe results of MentorNet [6], Co-teaching [3], and F-correction [7] are copied from [3]. While our proposed method outperforms all compared methods on the symmetric noise settings, it shows inferior performance to Co-teaching on the asymmetric (pair) setting. This exposes a weakness of the proposed method: our mixture distribution failed to correctly infer the dominant distribution, which is a limitation of the mixture-based approach. However, we would like to note that Co-teaching [3] is complementary to our method: one can combine the two methods by using two ChoiceNets and updating each network via Co-teaching. 
\\n\\n* We also conducted additional experiments to show the strength of the proposed method. \\n\\na) More baselines for the current CIFAR-10 experiments: We implemented MentorNet [6] and VAT [5] to better evaluate the performance of the proposed method on the current CIFAR-10 setting. \\n\\ncorruption rate 20% 50% 80%\\n----------------------------------------------\\nMentorNet PD 64.0% 49.0% 21.4%\\nMentorNet DD 62.0% 43.1% 21.8%\\nVAT 82.0% 71.6% 16.9%\\n----------------------------------------------\\nCN+Mixup 92.3% 87.9% 75.4% \\n\\nb) Natural language processing experiments: We used the Large Movie Review Dataset, consisting of 25,000 movie reviews for training and 25,000 reviews for testing. Each movie review is mapped to a 128-dimensional feature vector using feed-forward Neural-Net Language Models [8], and we tested the robustness of the proposed method, mix-up, VAT, and a naive MLP baseline by randomly flipping the labels. \\n\\nrandom flip rate 0% 10% 20% 30% 40%\\n-------------------------------------------------------------\\nChoiceNet 79.43% 79.50% 78.66% 77.10% 73.98%\\nMix-up 79.77% 78.73% 77.58% 75.85% 69.63%\\nBaseline (MLP) 79.04% 77.88% 75.70% 69.05% 62.83%\\nVAT 76.40% 72.50% 69.20% 65.20% 58.30%\\n\\nSimilar to the regression experiments, ChoiceNet shows superior performance in the presence of outliers, and we observe that the proposed method can be used for NLP tasks as well. \\n\\n[1] V. Belagiannis, C. Rupprecht, G. Carneiro, N. Navab, \\\"Robust Optimization for Deep Regression\\\", ICCV, 2015\\n[2] H. Zhang, M. Cisse, Y. Dauphin, D. Lopez-Paz, \\u201cmixup: Beyond Empirical Risk Minimization\\u201d, ICLR, 2018.\\n[3] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, M. Sugiyama, \\u201cCo-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels\\u201d, NIPS, 2018. \\n[4] E. A. Platanios, A. Dubey, and T. Mitchell. 
\\\"Estimating accuracy from unlabeled data: A Bayesian approach.\\\" In ICML, 2016.\\n[5] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\\n[6] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n[7] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n[8] Y. Bengio, R. Ducharme, P. Vincent, C. Jauvin. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137-1155, 2003.\"}",
"{\"title\": \"More experiments (robust regression, nlp, more baselines, and both symmetric and asymmetric noisy datasets) 1/2\", \"comment\": \"We thank the reviewer for the helpful comments. In particular, we agree that more in-depth experiments would help demonstrate the strength of the proposed method. In this regard, we conducted four additional experiments: a) robust regression experiments using a real-world dataset, b) experiments on NLP tasks, c) more baselines (MentorNet and VAT) on the current CIFAR-10 experiments, and d) experiments with both symmetric and asymmetric noise following the recent work [1].\\n\\na) Robust regression experiments: Here, we used the Boston housing price dataset, checked the robustness of the proposed method, and compared our method with standard MLPs with four different types of loss functions: the standard L2 loss, the L1 loss, which is known to be robust to outliers, a robust loss function proposed in [2], and a leaky robust loss extending [2]. We implemented the leaky version because Tukey\\u2019s biweight function discards the instances whose residuals exceed a certain threshold. Two-layer MLPs with 128 units and ReLU activations are used in all scenarios. We vary the outlier ratio from 0% to 50%, where the outputs of the outliers are uniformly sampled between the minimum and the maximum values of the training outputs. The results are as follows:\\n\\noutlier rate \\t0% 5% 10% 15% 20% 30% 40% 50%\\n-----------------------------------------------------------------------\\nChoiceNet 3.29 3.71 3.99 4.45 4.77 5.94 6.80 9.00\\nL2 loss 3.22 4.61 5.97 6.65 7.51 9.04 9.88 10.92\\nL1 loss 3.26 4.36 5.72 6.61 7.16 8.65 9.69 10.33\\nRobust loss 4.28 4.63 6.36 6.59 8.08 10.54 10.94 11.96\\nLeaky Robust 3.36 4.51 5.71 6.54 7.08 8.67 9.68 10.46\\n\\nThe proposed method (ChoiceNet) outperforms all compared methods in the presence of outliers and shows comparable performance without outliers. \\n\\n2. 
Natural language processing experiments: We used the Large Movie Review Dataset, consisting of 25,000 movie reviews for training and 25,000 reviews for testing. Each movie review is mapped to a 128-dimensional embedding vector using feed-forward Neural-Net Language Models [3], and we tested the robustness of the proposed method, mix-up [4], VAT [5], and a naive MLP baseline by randomly flipping the labels. In all experiments, we used two-layer MLPs with 128 hidden units and ReLU activations. \\n\\nrandom flip rate 0% 10% 20% 30% 40%\\n-------------------------------------------------------------\\nChoiceNet 79.43% 79.50% 78.66% 77.10% 73.98%\\nMix-up 79.77% 78.73% 77.58% 75.85% 69.63%\\nBaseline (MLP) 79.04% 77.88% 75.70% 69.05% 62.83%\\nVAT 76.40% 72.50% 69.20% 65.20% 58.30%\\n\\nSimilar to the regression experiments, ChoiceNet shows superior performance in the presence of outliers, and we observe that the proposed method can be used for NLP tasks as well. \\n\\n3. More baselines for the current CIFAR-10 experiments: We compared against MentorNet [6] and VAT [5] to better evaluate the performance of the proposed method on the current CIFAR-10 setting. For MentorNet, we compare two variants: MentorNet PD, which uses a pre-defined curriculum to train the StudentNet, and MentorNet DD, which uses a data-driven curriculum to train the StudentNet, where we use ResNet-101 for the StudentNet following the author\\u2019s implementation. We would like to note that the base network of the StudentNet is not the same as the one we used for ChoiceNet; it is even bigger. Due to the limited time for tuning the hyper-parameters, we simply used the existing implementations while changing the train and test datasets. To measure the robustness of the compared methods, we vary the corruption probabilities from 20% to 80%; the results are as follows, where the results of CN and CN+Mixup are copied from the current manuscript. 
We're working on reproducing MentorNet and VAT on the exact same network architecture.\\n\\ncorruption rate 20% 50% 80%\\n----------------------------------------------\\nMentorNet PD 64.0% 49.0% 21.4%\\nMentorNet DD 62.0% 43.1% 21.8%\\nVAT 82.0% 71.6% 16.9%\\n----------------------------------------------\\nCN 90.3% 84.6% 65.2%\\nCN+Mixup 92.3% 87.9% 75.4% \\n\\nIn all cases, the proposed methods (CN and CN+Mixup) outperform the baselines.\"}",
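To make the noise settings discussed in this thread concrete, the symmetric and pair-flip corruption schemes can be sketched as follows. This is our own illustrative reconstruction under hypothetical names, not the Co-teaching authors' script:

```python
import numpy as np

def corrupt_labels(y, num_classes, rate, mode, rng):
    """Symmetric noise: with probability `rate`, replace a label with a
    uniformly random *other* class. Pair noise: with probability `rate`,
    flip class c to the next class, (c + 1) mod num_classes."""
    y = np.asarray(y).copy()
    flip = rng.random(y.shape[0]) < rate  # which examples get corrupted
    if mode == "pair":
        y[flip] = (y[flip] + 1) % num_classes
    else:  # symmetric
        # offset in 1..num_classes-1 guarantees the new label differs
        offset = rng.integers(1, num_classes, size=int(flip.sum()))
        y[flip] = (y[flip] + offset) % num_classes
    return y

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=50_000)
y_sym = corrupt_labels(y, 10, 0.5, "symmetric", rng)   # Symmetry-50%
y_pair = corrupt_labels(y, 10, 0.45, "pair", rng)      # Pair-45%
print("symmetric flip rate:", float((y != y_sym).mean()))
print("pair flip rate:", float((y != y_pair).mean()))
```

Pair flipping is harder for mixture-style methods because each corrupted class maps to a single, structured wrong class rather than spreading uniformly, which is consistent with the weakness reported above.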
"{\"title\": \"More experiments (robust regression, nlp, more baselines, and both symmetric and asymmetric noisy datasets) 2/2\", \"comment\": \"4. Asymmetric noise experiments following Co-teaching [1]: We implemented the 9-layer CNN architecture following VAT [5] and Co-teaching [1] to fairly evaluate the performance of the CIFAR-10 experiments with both symmetric and asymmetric noise settings: Pair-45%, Symmetry-50%, and Symmetry-20%. Pair-45% flips 45% of each label to the next label, e.g., randomly flipping 45% of label 1 to label 2. We used the authors\\u2019 implementations available on GitHub for generating the corrupted datasets. We also set configurations such as the network topology (except for adding an MCDN layer instead of a linear layer), learning rate, optimizer, and max epochs to be the same as in [1].\\n\\n(Single-run, last validation accuracy)\\n Pair-45% sym-50% sym-20% \\n------------------------------------------\\nChoiceNet 70.3% 85.2% 91.0%\\n------------------------------------------\\nMentorNet 58.14% 71.10% 80.76%\\nCo-teaching 72.62% 74.02% 82.32%\\nF-correction 6.61% 59.83% 84.55%\\n\\nThe results of MentorNet [6], Co-teaching [1], and F-correction [7] are copied from [1]. While our proposed method outperforms all compared methods on the symmetric noise settings, it shows inferior performance to Co-teaching on the asymmetric (pair) setting. This exposes a weakness of the proposed method: our mixture distribution failed to correctly infer the dominant distribution, which is a limitation of the mixture-based approach. However, we would like to note that Co-teaching [1] is complementary to our method: one can combine the two methods by using two ChoiceNets and updating each network via Co-teaching. \\n\\n- We did not conduct CIFAR-100 experiments due to the limited time and computation resources available. 
However, we plan to do additional experiments following the settings from Co-teaching [1].\", \"responses_to_nitpicks\": \"-In the related works we are told that a smaller learning rate can improve label corruption robustness. They train their method with a learning rate of 0.001; the baseline gets a learning rate of 0.1. \\n=> The learning rate of 0.001 is only applied for the first epoch, and the base learning rate of 0.1 is applied afterward. This technique is often called 'warming-up'. We will modify the manuscript so that there's no confusion about this [8,9].\\n\\n-The larger-than-usual batch size is 256 for their 22-4 Wide ResNets, and at the same time they do not use dropout (standard for WRNs of this width) and use less weight decay than is common. Is this because of mixup? If so why is the weight decay two orders of magnitude less for your approach compared to the baseline? How were these various atypical parameters chosen?\\n=> The optimal hyper-parameters of the proposed ChoiceNet differ from those of a standard ResNet because the training procedure is quite different. The hyper-parameters of both our method and the baseline methods were selected automatically by a black-box optimization method using a separate validation set. \\n\\n-They also use gradient clipping for their method, which is extremely rare for CIFAR-10 classification. Why is this necessary?\\n=> The gradient clipping is used to stabilize training. The main reason is that the proposed method first \\u2018samples\\u2019 a set of weights of the network and uses the sampled parameters for inference. This can occasionally cause instability in the training phase.\\n\\n-This document could be cleaner by eschewing the Theorem of this paper, which \\\"states that a correlation between two random matrices is invariant to an affine transform.\\\" For this audience, I suspect this theorem is unnecessary. 
Likewise the three lines expended for the maths of a Gaussian probability density function could probably be used for other parts of this paper.\\n=> We agree that the current paper only uses a Gaussian prior distribution over the weight matrices. However, the theorem itself does not assume a Gaussian distribution. In fact, any centered distribution, such as a Gaussian or a Laplacian, can be used to model the weight matrices. \\n\\nOther typos will be corrected in the revised manuscript.\"}",
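The 'warming-up' schedule described in the learning-rate response above (a small rate for the first epoch only, then the base rate) can be sketched as follows; the values 0.001 and 0.1 are the ones quoted in the thread, while the function name and everything else here are our own illustrative choices:

```python
def lr_for_epoch(epoch, base_lr=0.1, warmup_lr=0.001, warmup_epochs=1):
    """Return the learning rate for a 0-indexed epoch: the small warm-up
    rate for the first `warmup_epochs` epochs, the base rate afterward."""
    return warmup_lr if epoch < warmup_epochs else base_lr

schedule = [lr_for_epoch(e) for e in range(4)]
print(schedule)  # [0.001, 0.1, 0.1, 0.1]
```

This resolves the reviewer's apparent contradiction: the 0.001 rate is a one-epoch warm-up, not the rate used for the whole run.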
"{\"title\": \"References\", \"comment\": \"[1] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, Ivor W. Tsang, M. Sugiyama, \\u201cCo-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels\\u201d, NIPS, 2018.\\n[2] V. Belagiannis, C. Rupprecht, G. Carneiro, N. Navab, \\\"Robust Optimization for Deep Regression\\\", ICCV, 2015.\\n[3] Y. Bengio, R. Ducharme, P. Vincent, C. Jauvin. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137-1155, 2003.\\n[4] H. Zhang, M. Cisse, Y. Dauphin, D. Lopez-Paz, \\u201cmixup: Beyond Empirical Risk Minimization\\u201d, ICLR, 2018.\\n[5] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\\n[6] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n[7] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n[8] P. Goyal, P. Doll\\u00e1r, R. Girshick, P. Noordhuis, \\u201cAccurate, Large Minibatch SGD: Training ImageNet in 1 Hour\\u201d, arXiv, 2017.\\n[9] K. He et al., \\\"Deep residual learning for image recognition\\\", CVPR, 2016.\"}",
"{\"title\": \"Interesting Approach with Insufficient Results\", \"review\": \"This paper presents an apparently original method targeted toward training models in the presence of low-quality or corrupted data. To accomplish this they introduce a \\\"mixture of correlated density network\\\" (MCDN), which processes representations from a backbone network, and the MCDN models the corrupted data generating process. Evaluation is on a regression problem with an analytic function, two MuJoCo problems, MNIST, and CIFAR-10.\\n\\nThis paper's primary strength is that the proposed method is a tool quite distinct from recent work, in that it does not use bootstrapping or solely use corruption transition matrices. The paper is typeset well. In addition to this, the experimentation has unusual breadth.\\n\\nHowever, while the synthetic regression task is a nice proof-of-concept, a thorough regression evaluation could perhaps include the Boston Housing Prices dataset or some UCI datasets.\\n\\nThe hamartia of this paper is that it does not provide sufficient depth in its computer vision experiments. For one, experimentation on CIFAR-100 would be appreciated.\\nIn the CIFAR-10 experiments, they consider one label corruption setting and lack experimentation on uniform label corruptions.\\nThe related work section has thorough coverage of label corruption, but these works do not appear in the experiments. They instead compare their label corruption technique to mixup, a general-purpose network regularizer. It is not clear why it is considered the \\\"state-of-the-art technique on noisy labels\\\"; this may be true among network regularization approaches (such as dropout) but not among label correction techniques. 
For this problem I would expect comparison to at least three label correction techniques, but the comparison is to one technique which was not primarily designed for label corruption.\", \"nitpicks\": \"-In the related works we are told that a smaller learning rate can improve label corruption robustness. They train their method with a learning rate of 0.001; the baseline gets a learning rate of 0.1.\\n-The larger-than-usual batch size is 256 for their 22-4 Wide ResNets, and at the same time they do not use dropout (standard for WRNs of this width) and use less weight decay than is common. Is this because of mixup? If so why is the weight decay two orders of magnitude less for your approach compared to the baseline? How were these various atypical parameters chosen?\\n-They also use gradient clipping for their method, which is extremely rare for CIFAR-10 classification. Why is this necessary?\\n-This document could be cleaner by eschewing the Theorem of this paper, which \\\"states that a correlation between two random matrices is invariant to an affine transform.\\\" For this audience, I suspect this theorem is unnecessary. Likewise the three lines expended for the maths of a Gaussian probability density function could probably be used for other parts of this paper.\\n-\\\"a leverage optimization method which optimizes the leverage of each demonstrations is proposed. Unlike to former study,\\\" -> \\\"a leverage optimization method which optimizes the leverage of each demonstration is proposed. Unlike a former study,\\\"\\n-\\\"In the followings,\\\" -> \\\"In the following,\\\"\", \"edit\": \"The updated results need consistent baselines. For example, the method of [7] should be consistently compared against.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting Approach with Nice Results\", \"review\": \"The paper presents a framework, called ChoiceNet, for learning when the\\nsupervision outputs (e.g., labels) are corrupted by noise. The method relies on \\nestimating the correlation between the training data distribution and a \\ntarget distribution, where the training data distribution is assumed to be a mixture \\nof that target distribution and other unknown distributions. The paper also \\npresents some compelling results on synthetic and real datasets, for both \\nregression and classification problems.\\n\\nThe proposed idea builds on top of previously published work on Mixture Density \\nNetworks (MDNs) and Mixup (Zhang et al., 2017). The main difference is that the MDNs \\nare modified to construct the Mixture of Correlated Density Network (MCDN) \\nblock, which forms the main component of ChoiceNets.\\n\\nI like the overall direction and idea of modelling correlation between the \\ntarget distribution and the data distribution to deal with noisy labels. The \\nresults are also compelling and I thus lean towards accepting this paper. My \\ndecision on \\\"marginal accept\\\" is based primarily on my unfamiliarity with this \\nspecific area and that some parts of the paper are not very easy or intuitive \\nto read through.\\n\\n== Related Work ==\\n\\nI like the related work discussion, but would emphasize more the connection to \\nMDNs and to Mixup. 
Only one sentence is mentioned about Mixup but reading \\nthrough the abstract and the introduction that is the first paper that came to \\nmy mind and thus I believe that it may deserve a bit more discussion.\\n\\nAlso, there are a couple more papers that felt relevant to this work but are\", \"not_mentioned\": \"- Estimating Accuracy from Unlabeled Data: A Bayesian Approach, Platanios et al., ICML 2016.\\n I believe this is related in how noisy labels are modeled (i.e., section 3 \\n in the reviewed paper) and in the idea of correlation/consistency as a means \\n to detect errors. There are couple more papers in this line of work that \\n may be relevant.\\n - ADIOS: Architectures Deep In Output Space, Al-Shedivat et al., ICML 2016.\\n I believe this is related in learning some structure in the output space, \\n even though not directly dealing with noisy labels.\\n\\n== Method ==\\n\\nI believe the methods section could have been written in a more \\nclear/easy-to-follow way, but this may also be due to my unfamiliarity with this \\narea. Figure 1 is hard to parse and does not really offer much more than section \\n3.2 currently does. If the figure is improved with some more text/labels on \\nboxes rather than plain equations, it may go a long way in making the methods \\nsection easier to follow.\\n\\nI would also point out MCDN as the key contribution of this paper as ChoiceNet \\nis just any base network with an MCDN block stacked on top of this. Thus, I \\nbelieve this should be emphasized more to make your key contribution clear.\\n\\n== Experiments ==\\n\\nThe experiments are nicely presented and are quite thorough. 
A couple minor\", \"comments_i_have_are\": \"- It would be nice to run regression experiments for bigger real-world \\n datasets, as the ones used seem to be quite small.\\n - I am a bit confused at the fact that in table 3 you compare your method to \\n mixup and in table 4 you also show results when using both your method and \\n mixup combined. Up until that point I thought that mixup was posed as an \\n alternative method, but here it seems it's quite orthogonal and can be used \\n together, which I think makes sense, but would be good to clarify. Also, \\n given that you show combined results in table 4, why not also perform \\n exactly the same analysis for table 3 and also show numbers for CN + Mixup?\\n\\nIt would also be nice to use the same naming scheme for both tables. I would\", \"use\": \"ConvNet, ConvNet + CN, ConvNet + CN + Mixup, and the same with WRN for\\ntable 4. This would make the tables easier to read because currently the first \\nthing that comes to mind is what may be different between the two setups given \\nthat they are presented side-by-side but use different naming conventions.\\n\\nOne question that comes to mind is that you make certain assumptions on the \\nkinds of noise your model can capture, so are there any cases where you have \\ngood intuition as to why your model may fail? It would be good to present a \\nshort discussion on this to help readers understand whether they can benefit by \\nusing your model or not.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
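Since the review above treats mixup both as a baseline and as a component that can be combined with the proposed method, a minimal sketch of mixup (Zhang et al., 2018) may help; this is our own illustrative NumPy version under hypothetical names, not the paper's code:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha, rng):
    """mixup: train on convex combinations of random example pairs.
    A single lam ~ Beta(alpha, alpha) mixes both the inputs and the
    one-hot targets for the whole batch."""
    lam = float(rng.beta(alpha, alpha))
    idx = rng.permutation(x.shape[0])  # random partner for each example
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[idx]
    return x_mix, y_mix

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
y = np.eye(3)[rng.integers(0, 3, size=8)]  # one-hot labels, 3 classes
x_mix, y_mix = mixup_batch(x, y, alpha=0.2, rng=rng)
print(y_mix.sum(axis=1))  # each mixed target still sums to 1
```

Because mixup only perturbs the training batches and leaves the loss and architecture untouched, it is indeed orthogonal to ChoiceNet's MCDN head, which is why the CN+Mixup combination in the tables is well defined.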
"{\"title\": \"ok paper but lacking related work, important baselines and a well-motivated storyline\", \"review\": \"This paper formulates a new deep learning method called ChoiceNet for noisy data. Their main idea is to estimate the densities of data distributions using a set of correlated mean functions. They argue that ChoiceNet can robustly infer the target distribution on corrupted data.\", \"pros\": \"1. The authors find a new angle for learning with noisy labels. For example, the key point of ChoiceNet is the design of the mixture of correlated density network block. \\n\\n2. The authors perform numerical experiments to demonstrate the effectiveness of their framework in both regression tasks and classification tasks, and their experimental results support their claims.\", \"cons\": \"We have three questions in the following.\\n\\n1. Related work: In deep learning with noisy labels, there are three main directions, including the small-loss trick [1-3], estimating the noise transition matrix [4-6], and explicit and implicit regularization [7-9]. I would appreciate it if the authors could survey and compare more baselines in their paper instead of listing some basic ones.\\n\\n2. Experiment: \\n2.1 Baselines: For noisy labels, the authors should add MentorNet [1] as a baseline https://github.com/google/mentornet From my own experience, this baseline is very strong. At the same time, they should compare with VAT [7]. \\n\\n2.2 Datasets: For datasets, I think the authors should first compare their methods on symmetric and asymmetric noisy data [4]. Besides, the current paper only verifies on vision datasets. The authors are encouraged to conduct experiments on at least one NLP dataset.\\n\\n3. Motivation: The authors are encouraged to re-write their paper with a more motivated storyline. The current version is okay but not very exciting for idea selling.\", \"references\": \"[1] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. 
Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[2] M. Ren, W. Zeng, B. Yang, and R. Urtasun. Learning to reweight examples for robust deep learning. In ICML, 2018.\\n\\n[3] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, M. Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NIPS, 2018.\\n\\n[4] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n\\n[5] J. Goldberger and E. Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.\\n\\n[6] S. Sukhbaatar, J. Bruna, M. Paluri, L. Bourdev, and R. Fergus. Training convolutional networks with noisy labels. In ICLR workshop, 2015.\\n\\n[7] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\\n\\n[8] A. Tarvainen and H. Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NIPS, 2017.\\n\\n[9] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
Ske7ToC5Km | Graph2Seq: Scalable Learning Dynamics for Graphs | [
"Shaileshh Bojja Venkatakrishnan",
"Mohammad Alizadeh",
"Pramod Viswanath"
] | Neural networks have been shown to be an effective tool for learning algorithms over graph-structured data. However, graph representation techniques---that convert graphs to real-valued vectors for use with neural networks---are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but these methods have difficulty scaling and generalizing to graphs with different sizes and shapes. We present Graph2Seq, a new technique that represents vertices of graphs as infinite time-series. By not limiting the representation to a fixed dimension, Graph2Seq scales naturally to graphs of arbitrary sizes and shapes. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequence. By analyzing a formal computational model for graph representation, we show that an unbounded sequence is necessary for scalability. Our experimental results with Graph2Seq show strong generalization and new state-of-the-art performance on a variety of graph combinatorial optimization problems.
| [
"graph neural networks",
"scalable representations",
"combinatorial optimization",
"reinforcement learning"
] | https://openreview.net/pdf?id=Ske7ToC5Km | https://openreview.net/forum?id=Ske7ToC5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HygCwktN-E",
"r1lJ2MhzZE",
"rkxMyIoex4",
"SJgD7cA9k4",
"B1x4zMpOy4",
"SJeHmb5uJ4",
"BygyngcdyN",
"B1e-SxqOkN",
"SkxrWlq_kV",
"HJeJ-oBmJV",
"SyeD-YBXyN",
"B1eiod9GJ4",
"HJg70ZAFCm",
"BJe-fWUL0Q",
"HkgHmDu26Q",
"SJlWaUdha7",
"H1lebgVjp7",
"BkxxnJOcTX",
"Skxg-2w5pQ",
"rklM99wcTX",
"BklljYaanQ",
"BygIHT5jn7",
"SyexIerDnQ"
],
"note_type": [
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1546059622363,
1545941670988,
1544758746288,
1544378910778,
1544241676097,
1544229148888,
1544229030589,
1544228920655,
1544228861341,
1543883511076,
1543883006847,
1543837859196,
1543262667082,
1543033096746,
1542387484636,
1542387384902,
1542303735978,
1542254504313,
1542253560079,
1542253194335,
1541425560065,
1541283134264,
1540997191861
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"~Abishek_Sankararaman1"
],
[
"ICLR.cc/2019/Conference/Paper788/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/Authors"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper788/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"response\", \"comment\": \"Thanks for the comment. Yes the proof you have mentioned is valid, and useful in its conciseness.\\n\\nHowever, the proof we have given in the paper shows a construction for a \\u201csimpler\\u201d case where the graph is not disconnected. This is because in many practical applications if the input graph is disconnected one may do some preprocessing and either (i) process the individual connected components separately, or (ii) add extra edges (i.e., edges with special features) and make the graph connected. \\n\\nSince the paper focuses on combinatorial optimization, we have used minimum vertex cover as the function of choice in our proof. However, other functions or graph constructions may be possible even in the case where the graphs are not disconnected.\"}",
"{\"comment\": \"Fix any $k \\\\in \\\\mathbb{N}$ and consider two graphs, one is a cycle consisting of 4k nodes and the other is two disjoint cycles of 2k nodes each. Consider the function f(G) to denote the number of connected components. Clearly, the k hop neighborhood of all nodes in both the graph are identical, while the number of connected components are different.\\n\\nDoes this argument also prove Theorem 2 ? Or did I misunderstand some thing in the definition and the proofs ?\", \"title\": \"A Simpler Proof of Theorem 2 ?\"}",
"{\"metareview\": \"This was an extremely difficult case. There are many positive aspects of Graph2Seq, as detailed by all of the reviewers; however, two of the reviewers take issue with the current theory, specifically the definition of k-local-gather and its relation to existing models. The authors and reviewers have had a detailed discussion on the issue, but we do not seem to have come to a resolution. I will not wade into the specifics of the argument; ultimately, the onus is on the authors to convince the reviewers of the merits/correctness, and in this case two reviewers had the same issue, and their concerns have not been resolved. The best advice I can give is to consider the discussion so far and why this misunderstanding occurred, so that it might lead to the best version of this paper possible.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Good paper, but there are some issues with the theory (either correctness or clarity) that need to be resolved.\"}",
"{\"title\": \"Author response\", \"comment\": \"Dear reviewer,\\n \\nFirst, we would like to thank you again for taking the time to provide feedback, which has helped improve the paper. But with all due respect, we do not see inconsistencies in our definition or arguments. What we are saying is that a GCN with k convolutional layers uses a neighborhood of distance k around each vertex to compute the embedding for that vertex, and is therefore a k-local-gather algorithm by our definition. Please note that this is also in agreement with the Kipf and Welling paper (https://arxiv.org/pdf/1609.02907.pdf). On page 3, after defining graph convolution (Eq. (6)), the paper says: \\n \\n\\\"Successive application of filters of this form then effectively convolve the k-th order neighborhood of a node, where k is the number of successive filtering operations or convolutional layers in the neural network model.\\\" \\n \\nPerhaps the confusion is that 4-local-gather in Proposition 2 refers to specific GCN realizations with up to 4 layers, such as Structure2Vec. We can clarify this in the paper by stating Proposition 2 in more general terms: A GCN with t layers is a t-local-gather algorithm. \\n\\nPlease note that a GCN that has a fixed number of layers cannot propagate information across the entire graph (for large input graphs). The main point we are making is that the number of convolutions (layers) should vary dynamically based on the graph, and we provide a theoretical justification with Theorems 1 and 2.\\n\\nWe reiterate that we have put every effort into responding to reviewer comments thoroughly and respectfully. We hope the reviewer will extend us the same courtesy and consider our response.\"}",
"{\"title\": \"I\\u2019m done\", \"comment\": \"Dear authors,\\n\\nThis is the last comment I\\u2019ll make. I do not appreciate that you make claims about there being no inconsistencies, write several paragraphs of arguments, none of which is resolving the issue. \\n\\n(1) \\u201ca k-step message passing algorithm such as Kipf and Welling is a valid k-local-gather algorithm where the k 1-hop local update operations *together* constitutes the local phase procedure of a k-local-gather algorithm.\\u201d\\n\\n(2) \\u201cThe 'k' in a k-local-gather algorithm stands for the size of the neighborhood considered, not the number of local steps. Our definition is consistent in saying GCN algorithms are valid k-local-gather algorithms.\\u201d\\n\\nIn (1) you write that k is the number of message passing steps. \\n\\nIn (2) you write that k is the \\u201csize\\u201d of the neighborhood considered. (I assume size = distance.)\\n\\nThe GCNs of Kipf and Welling do not only consider the 1-hop neighborhood, if the GCNs have more than one layer. Also, you state that GCNs are 4-local-gather. But that would mean there are only 4 local updates (according to (1)) which is not true. \\n\\nYour every effort to clarify misunderstandings currently makes things worse. \\n\\nThe only way to make (1) and (2) consistent, is when you consider k to be the distance in the unrolled computation graph. \\n\\nBut then GCNs (K&W) are not 4-local-gather.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for your feedback.\\n\\nPlease note that S2V-dynamic is not an existing model. We are not aware of any prior work that has proposed varying the number of local steps in GCNs dynamically. \\n\\nAlso, the point of our work is not the specific architectural differences between S2V-dynamic and G2S-RNN. In fact, our method can be used on top of *any* GCN method to improve generalization. \\n\\nRather, our point is that varying the number of local steps used in GCN algorithms results in better generalization. Therefore, the fact that S2V-dynamic has better performance than standard S2V (with a fixed number of iterations) affirms our main result. \\n\\nFurthermore, looking at the sequence of local step outputs results in even better generalization. This is why using a GRU in G2S-RNN improves performance even over S2V-dynamic.\"}",
"{\"title\": \"Author response\", \"comment\": \"GCN algorithms including Kipf and Welling, Structure2Vec etc., all perform a fixed number of local steps (e.g., 4), regardless of the size of the input graph. We are not aware of any work where these algorithms dynamically adjust the number of local steps based on the input graph instance.\\n\\nThe point of our work is that dynamically adjusting the number of local steps, results in algorithms that generalize and scale better. In addition, we argue that considering outputs of all intermediate local steps (i.e., the sequence of local step outputs) in the aggregation step, results in even better generalization.\"}",
"{\"title\": \"Author response 2\", \"comment\": \"In summary, our definition of a k-local-gather algorithm consists of one local phase and one aggregation phase. Message passing algorithms such as Kipf and Welling consist of k steps of 1-hop local update operations. However, a k-step message passing algorithm such as Kipf and Welling is a valid k-local-gather algorithm where the k 1-hop local update operations *together* constitutes the local phase procedure of a k-local-gather algorithm.\\n\\nThe 'k' in a k-local-gather algorithm stands for the size of the neighborhood considered, not the number of local steps. Our definition is consistent in saying GCN algorithms are valid k-local-gather algorithms. \\n\\nWe have made every effort to clarify all the misunderstandings, in this thread and in the paper. We hope the reviewers will take that into account.\"}",
"{\"title\": \"Author response 1\", \"comment\": \"Thank you for highlighting this cause for confusion.\\n\\nWe define a k-local-gather algorithm as a two-phase algorithm comprising of a local phase, and a gather phase. In the local phase each vertex computes a representation based on its k-hop neighborhood subgraph. In the gather phase, the vertex representations are aggregated to compute the final output of the algorithm. \\n\\nGCN algorithms such as Structure2Vec perform k steps of local neighborhood aggregations, followed by an aggregation step. Note that the k steps of local neighborhood aggregations effectively compute a vector-representation for the vertices that is a function of their k-hop neighborhood subgraphs. Therefore, GCN algorithms are valid k-local-gather algorithms according to our definition above. \\n\\n*There are no inconsistencies in our definition.*\", \"we_elaborate_this_point_further_below\": \"Our definition of a k-local-gather algorithm does not place a restriction on the specific procedure that is used to compute the vertex representations in the local phase, or the output in the gather phase. The procedure used for computing the vertex representations in the local phase can be arbitrary, as long as it only depends on the k-hop neighborhood subgraph of vertices. Similarly, the procedure used for computing the output in the gather phase can be arbitrary, as long as it only depends on the set of vertex representations computed in the local phase.\\n\\nFor example, a procedure in which the representation of a vertex is computed as the number of vertices in its k-hop neighborhood subgraph is a valid local phase procedure in a k-local-gather algorithm. Another example is a procedure where the representation of a vertex is 1 if there is a cycle in its k-hop neighborhood subgraph, and 0 if its k-hop neighborhood subgraph does not have cycle (i.e., is a tree). 
\\n\\nThe k local steps of a GCN algorithm is also an example of a valid local phase procedure in a k-local-gather algorithm. To see this, we first explain a procedure that is easily seen to be a valid local phase procedure, and then point out that the procedure is equivalent to k step GCN algorithms. \\n\\nConsider a graph G whose vertices we want to represent. For a vertex v, we will first build a \\\"computation tree\\\" that depends on the k-hop neighborhood subgraph of v. Using this computation tree for v, we will obtain a representation for v. \\n\\nThe computation tree will have v as the root. The root node has all the 1-hop neighbors of v (in G) as its children in the tree. Now, every node u at the first level of the tree has all the 1-hop neighbors of u as its children. We continue building the tree this way---at every level, adding the children in the 1-hop neighborhood of nodes of that level to the next level---till a depth of k. Notice that this computation tree depends only on the k-hop neighborhood subgraph of v. \\n\\nTo compute the representation for v, we initialize the leaf nodes of v's computation tree with all-zero vectors. We will perform computation starting from the leaves of the tree and propagate vectors upwards towards the root of the tree. Each node, when it receives vectors from all of its children, computes a function over these vectors, and then forwards it to its parent in the computation tree. Finally, when the root receives vectors from all of its children, it computes a function over these vectors, and we obtain the representation for v. \\n\\nTherefore, a procedure that computes vertex representation this way (using a computation tree) is a valid local phase procedure in a k-local-gather algorithm. \\n\\nWith a little bit of thought, it can be seen that this procedure corresponds precisely to how vertex representations are computed in a k-step message passing algorithm such as in Kipf and Welling. 
In a message passing algorithm, each vertex aggregates the current vector-state of its neighbors and updates its own vector-state based on that. In a k-step message passing algorithm, the update occurs k times. However, we will get the same resulting vertex representations if we \\\"unfold\\\" the computation tree around each vertex and propagate the vectors from the leaves to the root of this tree. The levels of the trees correspond to the message passing round number (with leaf nodes denoting round 0, the initial message passing state). As the computation progresses from the leaves to the root in the computation tree, the intermediate vector-outputs computed by nodes at level i correspond precisely to the vector-state of the vertices at the i-th round of message passing in G. Thus, a k-step message passing algorithm is a valid local phase procedure in a k-local-gather algorithm.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for the response and the updated paper.\\n\\nRegarding point 1, the explanation in the appendix is now clear; thank you. \\nRegarding point 2, I posted a comment in reply to the discussion with Reviewer 2 below.\\nRegarding point 3, it's great that you managed to run all these additional experiments so quickly. As you mention in the paper, S2V-Dynamic seems to perform similarly to Graph2Seq.\\n\\nAt the moment, I am most comfortable keeping my score unchanged. The theory and experiments seem to suggest that you have an interesting model but it is not clear that it is better than similar existing models (e.g. S2V-Dynamic) nor theoretically more powerful (ref. discussion about the k-local-gather notion below).\"}",
"{\"title\": \"Agreed\", \"comment\": \"Thanks to the authors and Reviewer 2 for discussing this point in detail.\\n\\nI strongly agree with Reviewer 2 on this issue, and that is why I asked about the 4-local-gather in my original review.\\n\\nI too cannot see how the Graph2Seq model is infty-local-gather but the Kipf-GCN or Structure2Vec are not, given that both are iterative algorithms that can aggregate information from all nodes if run for diameter-iterations.\"}",
"{\"title\": \"Discussion\", \"comment\": \"Dear authors,\\n\\nThanks for the further responses. I agree with all of your statements (which do not contradict what I said in my response) until this part:\\n\\n\\\"A k-local-gather algorithm, according to our definition, consists of two steps: a local step which computes an embedding for each vertex based on its k-hop neighborhood, and a gather step which aggregates embeddings from all vertices. In particular, the local step occurs only once, and is not computed k times. Therefore, for unlabeled feature-less graphs, such as the one we have considered in the proof, information from outside the k-hop neighborhood of a vertex cannot reach the vertex in just one step of local computation. Consequently, a k-hop local-gather algorithm cannot distinguish vertices having identical k-hop neighborhood even if the l-hop neighborhoods (for l > k) are different.\\\"\\n\\nBut this definition of k-local-gather is problematic. For instance, the Kipf and Welling GCN is applied in a way that the local (aggregation) steps do *not* occur only once. That's why this method is able to propagate information throughout the graph. Again, GCNs do not just apply the local step once. Therefore, your statement that GCNs are 4-local-gather (with the definition you mention here) is incorrect. \\n\\nYou cannot have it both ways. \\n\\nEither you define the \\\"k\\\" in local-gather has the distance to the currently considered node used to aggregate neighborhood information *in every learning step*. Then my argument above that 1-WL is 2-local-gather (if no node degrees are available on the nodes and otherwise 1-local-gather) holds. And my argument still holds that your proof is not correct since you prove for k=2. \\n\\nOr you define the \\\"k\\\" in local-gather as you wrote in the response above. You only apply the k-hop aggregation once. But then statements such as those made in Proposition 2 cannot be correct. 
Then GCNs are not 4-local-gather since you are not applying the neighborhood aggregation step only once.\\n\\nDue to these inconsistencies which are caused by inconsistent/ambiguous definitions in the best case and by incorrect arguments in the worst case, I will keep the original score. The paper has improved especially since you added additional timing results but the issues above are severe enough that I don't have enough confidence in the correctness of the statements made in the paper.\"}",
"{\"title\": \"Revised paper\", \"comment\": \"We have uploaded a revision including the run times for max cut and maximum independent set problems, in Appendix D.3.\"}",
"{\"title\": \"Revised paper\", \"comment\": \"We have uploaded a revision of the paper with the following changes:\\n\\n1. We have run experiments where the depth of Structure2Vec is varied as in Algorithm 2, and have included its results in Figures 2, 3 and 4. This algorithm is called S2V-dynamic in the Figures and can be found as the black curve. \\n\\n2. We have collected run times of all the schemes for computing minimum vertex cover on Erdos-Renyi graphs of various sizes. Appendix D.3 presents these results. We are currently collecting the run times for max cut, maximum independent set and will include those results also shortly. \\n\\n3. We have expanded the proof of Theorem 2 in Appendix B.5 to clarify the extension of proof to the general k case. We have added a subfigure in Figure 5 showing how the trees can be constructed for k=3. A new Figure 6 has also been added showing the optimum vertex covers for the trees of Figure 5. Lastly, we have included a remark after the proof of Theorem 2 explaining the effect of node labels on local-gather algorithms. \\n\\n4. We have modified the introduction to better motivate why the scalability of Graph2Seq is useful. Specifically, we have included device placement in TensorFlow, query optimization and job scheduling as examples of practical applications where learning scalable graph algorithms, such as Graph2Seq, could be beneficial. \\n\\n5. We have explained what infinity-local-gather means in Section 3.2. \\n\\n6. We have briefly explained how node labels can influence the number of local steps after Theorem 2 in Section 3.2 (detailed explanation provided in Appendix B.5). \\n\\n7. We have provided an explanation on how edge features can be included in Graph2Seq in Appendix B.1. \\n\\n8. We have mentioned the time-complexity of G2S-RNN in Section 5 (under Testing subsection) and have derived it in Appendix C.3. 
\\n\\nWe once again thank the reviewers for their valuable feedback which has helped greatly improve the paper.\"}",
"{\"title\": \"Author response 2\", \"comment\": \"Extending proof to larger k:\\nThe proof of Theorem 2 can easily be extended to larger k by considering the same two trees in Figure 5, but with longer chains around each degree-3 node. Currently, around each degree-3 node there are chains of nodes, with three nodes in each chain. For a general k, we would increase the chains of three nodes to chains of l nodes, where l is any odd number greater than k. This has already been mentioned in the proof. The proof itself is identical. We will elaborate it further in the revision. \\n\\nWe realize these can be confusing points and thank you for bringing this to attention. We will include explanations in the paper to clarify node labeling and the other points. We hope this answer clarifies any misunderstandings. \\n\\n2. Run times: For large graphs (size > 400) we indeed observed that Gurobi is much slower and took many hours or days. We will include measurement data on how long it takes for Gurobi and other baselines to compute a solution in our revision.\"}",
"{\"title\": \"Author response 1\", \"comment\": \"Thank you for the comments.\\n\\n1. Clarification on node label:\\nWe first note that node labels greatly affect how many local aggregation steps are required to compute a function on a graph. For example, if nodes are labeled by their degrees then, as you have correctly pointed out, a 1-local algorithm is enough to distinguish vertices in the example of line graphs such as 1-2-3-4-5. In general, having node degree as a label can allow vertex embeddings in a k-local algorithm to depend on the (k+1)-hop neighborhood around the vertices. In the extreme case, node labels can encode the entire structure of the graph, e.g., if the adjacency matrix is used as a label. In this case, even a 0-local algorithm will be able to exactly compute any function over the graph because the entire graph can be inferred just by looking at a node's label. The graphs we consider in the proof of Theorem 2 are unlabeled graphs. In unlabeled graphs, attributes such as node degree are not inherently part of a node and need to be explicitly computed.\", \"wl_being_1_local_vs_2_local\": \"\", \"we_note_that_theorem_2_makes_an_existential_statement\": \"for a fixed k, we are saying there *exists* some graph G and some function f(.) such that f(G) cannot be computed exactly by any k-local-gather algorithm. Therefore, to prove the theorem, all we need is one example graph G and one example function f(.) where f(G) cannot be computed exactly by any k-local-gather algorithm. We have freedom to select whatever G and f(.) we want, as long as we can prove f(G) cannot be computed exactly by any k-local-gather algorithm.\\n\\nThe graph G that we choose in our proof is an *unlabeled* tree shown in Figure 5, without any node or edge attributes or features. If we run a 1-hop WL on such a graph, the first step will be to assign the respective degrees of nodes as the starting color. 
However, the graph we have chosen is such that there are no features intrinsically present within each node. In particular, the node degree information is absent in the nodes, and hence need to be explicitly computed. \\n\\nIf we consider the 0-hop neighborhood of each node---basically just the nodes themselves---then we cannot compute the node degree since a node in isolation does not reveal how many nodes it is connected to. Therefore, we must look at the 1-hop neighborhood---the node together with all the nodes it is connected to---in order to compute the degree. Once the initial color (node degree) has been computed, we can then proceed to do one step of neighborhood color aggregation as per the WL algorithm and compute the updated node colors. Thus overall, we have performed two steps of 1-hop aggregation operations---one for computing the node degrees, and one for aggregating neighboring colors---which makes the 1-hop WL algorithm a 2-local-gather algorithm in our chosen graph. \\n\\nFor some other choice of the graph, the 1-hop WL algorithm will be a 1-local-gather algorithm. For example, we can choose the graph to be such that each node intrinsically includes its degree as a label. In such a graph, the initial node degree color can be computed by looking at the 0-hop neighborhood of nodes, since the node itself (in isolation) includes this information; we do not need to look at the 1-hop neighborhood.\", \"clarification_of_definition_of_local_gather\": \"A k-local-gather algorithm, according to our definition, consists of two steps: a local step which computes an embedding for each vertex based on its k-hop neighborhood, and a gather step which aggregates embeddings from all vertices. In particular, the local step occurs only once, and is not computed k times. 
Therefore, for unlabeled feature-less graphs, such as the one we have considered in the proof, information from outside the k-hop neighborhood of a vertex cannot reach the vertex in just one step of local computation. Consequently, a k-hop local-gather algorithm cannot distinguish vertices having identical k-hop neighborhood even if the l-hop neighborhoods (for l > k) are different. \\n\\nAlgorithms such as the k-WL or k-step GCNN perform 1-hop local neighborhood aggregation repeatedly for k (or k+1) steps. However, these algorithms are still valid k-local-gather (or (k+1)-local-gather) algorithms. This is because: (i) the k (or k+1) steps of 1-hops aggregation causes each vertex embedding to depend on its k (or k+1) hop neighborhood, and (ii) we can think of the k (or k+1) steps of 1-hop aggregation in these algorithms together as forming the local step of a k-local-gather (or (k+1)-local-gather) algorithm. This is why a 1-hop WL algorithm which performs two steps of 1-hop aggregation operations is a valid 2-local-gather algorithm.\"}",
"{\"title\": \"Misunderstandings\", \"comment\": \"Dear authors,\\n\\nLet me quickly clarify some misunderstandings.\", \"theorem_2\": [\"The statement \\\"the degree label already includes one-hop information\\\" is not correct. One-hop means considering information that is up to one hop away. The degree of the node itself is \\\"0\\\"-hops away.\", \"In your proof, you chose k=2 and then show two graphs of which you claim that there are nodes that have the same 2-hop neighborhood and, therefore, cannot be distinguished by a 2-local-gather approach. What I have tried to point out is that even if two nodes have the same neighborhood (for k = 1), they can be distinguished with 1-local-gather. The argument can be extended for larger k as well. Just increase the size of the chain: A-B-C-D-E-F-G. Now, C and D have the same 2-hop neighborhood and *still* can be distinguished with 1-WL which is a 1-local-gather approach (even if it is a 2-local-gather approach, as you claim, which is not correct because 1-WL only looks at the 1-hop neighborhood). What you are missing is that an algorithm that \\\"only\\\" looks at the k-hop neighborhood can still distinguish two nodes with identical k-hop neighborhood (but different l-neighborhood, l>k) due to the iterative (i.e., recursive) application of the algorithm. The information is propagated throughout the graph. That's what the 1-WL algorithm is doing.\", \"You proof for k=2 (which is not correct as pointed out above) and write: \\\"the example easily generalizes for larger k\\\". You would have to at least make an argument how you want to generalize.\", \"Overall, there seem to be several gaps in your proof. 
Again, the statement \\\"a k-local-gather approach cannot distinguish nodes in the graph that have the same k-hop neighborhood\\\" is incorrect.\"], \"baselines\": \"Since your motivation seems to be solving combinatorial optimization problems with a graph NN and since according to your response you have used Gurobi, why not report the running times? The main problem one is faced when working with combinatorial optimization problems is the time it takes to solve them. Even if you don't stand a chance to be faster than Gurobi on smaller problems, you should be able to generate larger problems and, eventually, there should be a break-even point where your method is faster than Gurobi. It is these types of investigations that I am completely missing in the paper. There is not a single mention of the time it took to run the baselines, Gurobi, or your proposed method. (Please correct me if I'm wrong.)\\nSure, it is interesting that you can approximate the solutions more tightly but at what cost? If your method takes always longer than an optimal solver, what's the point?\", \"concerning_experiments_on_other_benchmark_datasets\": \"This is not strictly necessary. I agree. But it would make the paper much stronger.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for the helpful comments.\\n\\n1. Motivation: Our primary motivation is learning algorithms for combinatorial optimization on graphs. In many practical applications, it is desirable to learn an algorithm on smaller graphs that can generalize to larger graphs. For example, Mirrhosseini et al. [1] consider the problem of deciding how to optimally assign TensorFlow ops to GPUs to minimize runtime. Since directly training placement policies on large TensorFlow graphs can be extremely slow, it would be beneficial if a model can be trained on small TensorFlow graphs in a way that generalizes to large TensorFlow graphs. Another example is query optimization in databases, where the optimal order of join operations in the query plan tree has to be determined [2]. Since evaluating complex queries with large query plans can be expensive, it is again helpful if the learning algorithm can be trained on simple queries in a way that generalizes to complex queries. We will modify the introduction to emphasize these use cases. \\n\\n2. Theorem 2: The chain graph 1-2-3-4-5 results in the partitions {1, 5}, {2, 4} and {3} after the 1-hop WL algorithm if the initial node labels are chosen as their respective degrees. Since the degree label already includes one-hop information, this means, overall it is a 2-local-gather algorithm and not a 1-local-gather algorithm. If the initial node labels are chosen identically for all nodes, then the partitions are {1, 5}, {2, 3, 4}. We would appreciate clarification on what parts of the proof of Theorem 2 are unclear or informal. \\n\\n3. Baselines: As our focus is on combinatorial optimization problems, comparing on benchmark node or graph classification datasets is outside the scope of this paper and is an important future research direction. 
We have compared Graph2Seq to existing deterministic solvers (Gurobi), heuristics (list), approximation algorithms (matching, greedy) and a range of graph neural networks. Note that the performance plots for the Gurobi solver is implicit in the plots (e.g., Figure 2 and 3) since the approximation ratio for all other schemes have been computed relative to the Gurobi solver. The plots corresponding to the list heuristic (brown), matching algorithm (green) and greedy heuristic (yellow) are explicitly shown in Figures 2 and 3. \\n\\n[1] Device placement optimization with reinforcement learning, Mirhoseini et al, 2017\\n[2] Learning to optimize join queries with deep reinforcement learning, Krishnan et al, 2018\"}",
"{\"title\": \"Author response\", \"comment\": \"Thank you for the helpful comments.\\n\\n1. Time complexity: Thank you for pointing out the complexity. It is as follows:\\n\\n(a) The time-complexity for one forward pass of G2S-RNN (e.g., to select one vertex in minimum vertex cover) is O((E + V)T_max). This is because during each step of Graph2Seq, O(E) operations are required to update the state at each vertex based on neighbors' states, and O(V) operations are required by the GRU to aggregate the states of all vertices (Equation 19 in appendix). Since these operations have to be repeated at each step, and there are at most T_max steps, the time-complexity is O((E + V)T_max).\\n\\n(b) For a fixed number of steps T, the time-complexity to compute a complete solution (e.g., to select multiple vertices such that they form a valid vertex cover) is O((E + V)T_max * V). This is because selecting one vertex has complexity O((E + V)T_max), and we may have to select O(V) vertices to obtain a valid solution to the input graph.\\n\\n(c) The overall time-complexity is O((E + V)T_max * V * T_max). This is because the final solution is computed by first computing valid solutions for each T=1,2,..,T_max, and then picking the best valid solution from among them. Computing a valid solution for a fixed T takes O((E + V)T_max * V) as mentioned above, and we have to repeat the process T_max times. \\n\\nNote that aggregating states from all the vertices in the GRU is a hyperparameter choice. If only local neighborhood states are used in the GRU, the time-complexity in step (a) above becomes O(ET_max + V). \\n\\nWe will clarify the complexity in Section 5. \\n\\n2. Local Gather definition: Yes, the definition of local-gather consists of one local step followed by one gather step. An algorithm is k-local-gather if in the local step, each vertex computes an embedding based on the k-hop neighborhood graph around the vertex. 
Structure2Vec is 4-local-gather because the four embedding iterations cause each node's embedding to depend on its 4-hop neighborhood. Graph2Seq is infinity-local-gather since the infinite number of embedding iterations cause each node's embedding to depend on the entire graph---not just on vertices a constant number of hops away from the node. Infinity is used to emphasize that the local graph neighborhoods on which the node embeddings depend are not constrained in size. For a specific graph G with diameter dia(G), it is also true that Graph2Seq is dia(G)-local-gather. We will include a remark to explain that infinity-local-gather means that a node's embedding can depend on the entire graph, regardless of the graph size.\\n\\n3. Comparison to Structure2Vec: We will include an experiment that applies algorithm 2 to Structure2Vec in the paper. Note, however, that applying this procedure to Structure2Vec implicitly uses the sequence of Structure2Vec embeddings as the embedding for each vertex. Therefore, this method is a different instantiation of our idea of using sequences for node embeddings. In particular, like Graph2Seq, this method will also consider neighborhoods of different sizes around each vertex for different graphs. The only difference is that Graph2Seq additionally uses an LSTM to process the sequence. Therefore, we indeed expect that the combination of Algorithm 2 with Structure2Vec will also perform well.\"}",
"{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the helpful comments.\", \"including_edge_features\": \"There are a few different ways to include edge features. One way would be to include a second term $\\\\sum_{e\\\\in\\\\eta(v)}y_e(t)$, where $\\\\eta(v)$ are edges incident to node v and y_e are edge features of edge e, inside the ReLU function of Equation 1. Another way is to transform the graph with edge features into a new (larger) graph where there are no edge features. This is done by converting the original graph into a new bipartite graph where one partite corresponds to vertices of the original graph, and the other partite corresponds to edges of the original graph. Each edge-node in the bipartite graph is connected to the two vertex-nodes that constitute its end points in the original graph. The edge-nodes have the edge features of the original graph, while the vertex-nodes have the vertex features. We will explain this in the revision.\"}",
"{\"title\": \"A good paper for the conference\", \"review\": \"Graph representation techniques are important as various applications require learning over graph-structured data. The authors proposed a novel method to embedding a graph as a vector. Compared to Graph Convolutions Neural Networks (GCNN), the proposed are able to handle directed graphs while GCNN can not. Overall the paper is good, the derivation and theory are solid. The authors managed to prove the proposed representation is somehow lossless, which is very nice. The experiment is also convincing.\\n\\nMy only concern is as follows. The authors claim that Eq. (1) is able to handle features on vertices or edges. However, in the current formulation, the evolution only depends on vertex features, thus how can it handle edge features?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Need some clarification\", \"review\": [\"This paper proposes a new representation learning model for graph optimization, Graph2Seq. The novelty of Graph2Seq lies in utilizing intermediate vector representation of vertices in the final representation. Theoretically, the authors show that an infinite sequence of such intermediate representations is much more powerful than existing models, which do not maintain intermediate representations. Experimentally, Graph2Seq results in greedy heuristics that generalize very well from small training graphs (e.g. 15 nodes) to large testing graphs (e.g. 3200 nodes).\", \"Overall, the current version of the paper raises a number of crucial questions that I would like the authors to address before I make my decision.\", \"First, some strengths of the paper:\", \"Theory: although I have not reviewed the proofs in details, the theorems are very interesting. If correct, the theorems provide a strong basis for Graph2Seq. In contrast, this aspect is missing from other work on ML for optimization.\", \"Experiments: the experiments are generally thorough and well-presented. The performance of Graph2Seq is remarkable, especially in terms of generalization to significantly larger graphs.\", \"Writing: the paper is very well-written and complex ideas are neatly articulated. I also liked the Appendix trying to interpret the trained model. Good job!\", \"That being said, I have some serious concerns. Please clarify if I misunderstood anything and update the paper otherwise.\", \"Graph2Seq at test time: in section Testing, you explain how multiple solutions are output by G2S-RNN at intermediate \\\"states\\\" of the model, and the best w.r.t. the objective value is returned. If I understand all this correctly, you take the output of the T-th LSTM unit, run it through the Q-network, then select the next node (e.g. in a vertex cover solution). 
Then, the complexity should be O((E+V)*T_max*V), since the Graph2Seq operations are linear in the size of the graph O(E+V), a single G2S-RNN(i) takes O(V) times if you want to construct a cover of size O(V), and you repeat that process exactly T_max times, for each i between 1 and T_max. What's wrong in my understanding of G2S-RNN here? Or is your complexity incorrect?\", \"Local-Gather definition: in your definition of the Local-Gather model, do you assume that computations are performed for a single iteration, i.e. a single local step followed by a gather step? If so, then how is Graph2Seq infinity-local-gather? What does that even mean? I understand how some of the other GCNN-based models like Khalil et al.'s is 4-local-gather (assuming 4 embedding iterations of structure2vec), but how is Graph2Seq infinity-local-gather?\", \"Comparison to Structure2Vec: for fair comparison, why not apply Algorithm 2 to that method? Just run more embedding iterations up to T_max, and use the best among the solutions constructed between 1 and T_max.\"], \"minor\": [\"Section 4: Vinyals et al. (2015) does not do any RL.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea with weaknesses in the formal and empirical parts\", \"review\": \"The authors propose a method for learning vector representations for graphs. The problem is relevant to the ICLR community.\\n\\nThe paper has, however, three major problems:\\n\\nThe motivation of the paper is somewhat lacking. I agree that learning representations for graphs is a very important research theme. However, the authors miss to motivate their specific approach. They mention the importance of learning on smaller graphs and applying the learned models to larger graphs (i.e., extrapolating better). I would encourage the authors to elaborate on some use cases where this is important. I cannot think of any at the moment. I assume the authors had use cases in combinatorial optimization in mind? Perhaps it might make sense to motivate the use of GNNs to solve vertex cover etc. \\n\\nI\\u2019m not sure about the correctness of some of the theorems. For instance, Theorem 2 states \\n\\u201cFor any fixed k > 0, there exists a function f(\\u00b7) and an input graph instance G such that no k-LOCAL-GATHER algorithm can compute f(G) exactly.\\u201d I\\u2019m not claiming that this is a false statement. What I am suspecting at the moment is that the proof might not necessarily be correct. For instance, it is known that what you call 1-LOCAL-GATHER can compute the 1-Weisfeiler-Leman partition of the nodes (sometimes also referred to as the 1-WL node coloring). Now consider the chain graph 1 - 2 - 3 - 4 - 5. Here, the partition that puts together 1-WL indistinguishable nodes are {1, 5}, {2, 4} and {3}. Hence, the 1-WL coloring is able to distinguish say nodes 2 and 3 even their 1-neighborhood looks exactly the same. A similar argument might apply to your example pairs of graphs but I haven\\u2019t checked it yet in detail. What is for sure though: what you provide in the appendix is not a proper formal proof of Theorem 2. This has to be fixed. 
\\n\\nThe experiments are insufficient. The authors should compare to existing methods on common benchmark problems such as node or graph classification datasets. Comparing to baselines on a new set of task is not enough. Why not compare your method also on existing datasets?\\nIf you motivate your method as one that performs well on combinatorial problems (e.g., vertex cover) you should compare to existing deterministic solvers. I assume that these are often much faster at least on smaller graphs.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rklQas09tm | Learning Corresponded Rationales for Text Matching | [
"Mo Yu",
"Shiyu Chang",
"Tommi S Jaakkola"
] | The ability to predict matches between two sources of text has a number of applications including natural language inference (NLI) and question answering (QA). While flexible neural models have become effective tools in solving these tasks, they are rarely transparent in terms of the mechanism that mediates the prediction. In this paper, we propose a self-explaining architecture where the model is forced to highlight, in a dependent manner, how spans of one side of the input match corresponding segments of the other side in order to arrive at the overall decision. The text spans are regularized to be coherent and concise, and their correspondence is captured explicitly. The text spans -- rationales -- are learned entirely as latent mechanisms, guided only by the distal supervision from the end-to-end task. We evaluate our model on both NLI and QA using three publicly available datasets. Experimental results demonstrate quantitatively and qualitatively that our method delivers interpretable justification of the prediction without sacrificing state-of-the-art performance. Our code and data split will be publicly available. | [
"interpretability",
"rationalization",
"text matching",
"dependent selection"
] | https://openreview.net/pdf?id=rklQas09tm | https://openreview.net/forum?id=rklQas09tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1gbe4TxgV",
"HygaVw0TA7",
"S1xX-vAaRm",
"SkxQPUA6RQ",
"SJeyrnU0h7",
"H1eiY7Ys2X",
"Sye_Ff482m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544766440710,
1543526196986,
1543526139233,
1543525978844,
1541463094701,
1541276546832,
1540928128085
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper787/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper787/Authors"
],
[
"ICLR.cc/2019/Conference/Paper787/Authors"
],
[
"ICLR.cc/2019/Conference/Paper787/Authors"
],
[
"ICLR.cc/2019/Conference/Paper787/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper787/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper787/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper attempts at modeling text matching and also generating rationales. The motivation of the paper is good.\\n\\nHowever there is some shortcomings of the paper, e.g. there is very little comparison with prior work, no human evaluation at scale and also it seems that several prior models that use attention mechanism would generate similar rationales. No characterization of the last aspect has been made here. Hence, addressing these issues could make the paper better for future venues.\\n\\nThere is relative consensus between the reviewers that the paper could improve if the reviewers' concerns are addressed when it is submitted to future venues.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta Review\"}",
"{\"title\": \"Thank you very much for your valuable comments\", \"comment\": \"A word-by-word soft attention is somewhat different, offering only an approximate version of the rationale we are after. We outline three reasons for this below.\\n\\nFirst, a soft attention does not provide any certificate of exclusion. By this we mean that any word receiving a small attention weight (as long as it is not zero) could be significantly amplified in later processing. We therefore cannot conclude that only words with large attention weights substantially influence the prediction. \\n\\nThe second difference arises from how soft attention is typically computed. The attention is often based on context vectors associated with each word in q and p (e.g. hidden states of an LSTM). It is therefore unclear whether a particular attention score is driven by the surrounding context or the word in question. Put another way, if two specific text spans have high matching (attention) scores, this may no longer hold if we re-encoded those spans without their surrounding contexts. \\n\\nThe third reason has to do with training. A thresholded word-by-word attention matrix, without re-training with the threshold, is neither sufficient (first reason) nor corresponded (second reason). Directly using the thresholded attention matrix as an explainer can therefore be expected to lead to a significant performance drop. Indeed, we specifically demonstrate this by evaluating a soft-attention model (MatchLSTM) and its thresholded version on SearchQA. The performance of these two models on the development set is 54.8 and 50.2, respectively. It is not clear how to best re-train a thresholded attention. In fact, it would seems somewhat challenging without resorting to exactly the type of routines we proposed in the paper.\"}",
"{\"title\": \"Thank you very much for your valuable comments\", \"comment\": \"Thank you very much for your valuable comments. We believe there is some misunderstanding in terms of the difference between our rationalization model and attention-based model. Please refer to the comments to Reviewer 1 for details. Your other concerns are addressed below.\\n\\n1. Adversarial evaluation on SciTail\\n\\nIntuitively, if the added noise p\\u2019 is totally irrelevant to the context of p, the predictive model should not rely on any information from p'. However, it is very hard to achieve by many conventional models, especially for these based on word-by-word attention. For example, consider the Match-LSTM that computes the contextual similarity between each word from both sides of the text. And then, it constructs aggregated representations for the final prediction using soft-attentions normalized from the similarities. Even the attention scores on p' are small, the aggregated representations cannot eliminate effects from these texts. And the worse thing is that often the attention scores on p' are not very small, especially when the same word/entity appears in both q and p' (even the contexts are totally different). This is one potential reason why many QA models are vulnerable to adversarial examples by appending noise, which is shown in the paper of \\\"Adversarial Examples for Evaluating Reading Comprehension Systems\\\" by Robin Jia and Percy Liang. Of course, not selecting text from p' does not mean the selected rationales are correct. However, it at least demonstrates the generated rationales from our model are fairly robust to adversarial noise.\\n\\n\\n2. Tagging v.s. span prediction\\n\\nOur rationale generation is formulated as a tagging problem on q and span prediction problem (use <start> and <end> tokens) on q. 
The reason why we consider a tagging problem on generating rationales of q is that it can easily achieve the need of the variable number of rationales (via petition). Then, given a rationale from q, we consider the corresponded rationale generation as span prediction is because that the generated rationale is guaranteed to be consecutive, which is much easy to optimize (via policy gradient) compare to sequence tagging with a continuity regularizer. \\n\\n\\n3. Overlapping y\\n\\nThere is no explicit regulation to enforce each rationale $y^k$ to be mutually exclusive text span. The selection of y mainly depends on the selected x. Different x might correspond to the same piece of text. On the other hand, the sparsity loss is summed over all different $y^k$, which is implicitly to avoid overlapping (e.g. prevent the trivial case that all y^k selects the same text that is the entire p). \\n\\n4. Dataset and OpenIE as the rationale\\n\\nAs discussed in Section 4.1, we choose AskUbuntu because it is the only text matching dataset used in previous literature on (independent) rationale extraction. We choose the other two mainly because they could provide the \\\"Factwise\\\" settings and help better evaluate the \\\"sub-task of generating corresponded rationales from p\\\". \\n\\nWe agree that it is difficult to tell whether the OpenIE tuples and SearchQA queries can be used as rationales from the linguistic view. However, first, empirically they provide us reasonable results when the \\\"No Rationalization w/ re-encoding\\\" was applied. Second and more importantly, for the \\\"Factwise\\\" experiments, i.e., the rationale is already provided on one side, we mainly hope to have a setting on only the sub-task of \\\"generating corresponded rationales from p\\\". In this way, we have a closer analysis of the performance and difficulties of this sub-task. This sub-task itself is not studied in previous work. 
We believe it is important to provide such results along with the end-to-end results.\"}",
"{\"title\": \"Thank you very much for your valuable comments\", \"comment\": \"Thank you very much for your valuable comments.\\n\\nIn terms of the concern about the extra sentence can change the label of NLI, we agree that there could be the case. But the chance is small since in our experiment we always select the extra premise, which pairs with a different hypothesis. \\n\\nWe also agree that the setting like adversarial SQuAD could be potentially a better choice for the evaluation. However, this paper focuses on general text matching problem. Although SQuAD relies a lot on text matching, it also requires additional engineering for QA. For the future version, we will consider adding additional experiments on adversarial QA or other hypothesis verification task (like FEVER, where a hypothesis holds as long as one passage can be found supporting the hypothesis. In this way, extra sentences won't change the label of the verification task).\\n\\nMoreover, the \\u201cindependent\\u201d baseline extracts rationales on passages without access to fine-grained question rationale pieces. The extraction model re-implements prior work of (Lei et al., 2016). Intuitively, this method only learns to extract generally useful textual patterns for the task no matter what the questions/hypotheses are. The extraction could be still useful for narrow domains like AskUbuntu, in which these patterns are limited. While in the open-domain setting and with the decrease of rationale sizes, its performance drops dramatically. Thus extracting corresponded pairing rationales is crucial.\"}",
"{\"title\": \"Interesting model with somewhat promising experiments, could benefit from some more comparisons\", \"review\": \"This paper is about learning paired rationales that include the corresponding relevant spans of the (question, passage) or (premise, hypothesis). Experimental results show the same or better accuracies using just the fraction of the input selected as when the whole input is used.\\n\\nWhile there has been prior work on learning rationales, this is the first I have seen that included this fine-grained pairing. The paper also learns these rationales without explicitly labeled rationales but rather with only the distant supervision of the overall question answering or natural language inference task.\\n\\nThis paper could be made stronger by including an experimental evaluation of accuracy in an adversarial setting. The model developed here might be well-suited for adversarial SquAD examples in which an extra sentence has been added. It would be interesting to see these results. This paper does include a somewhat similar adversarial evaluation (Section 4.3) but adds extra information to NLI examples. Since for NLI, unlike QA, the extra sentence can change the correct label (can flip from entailment to contradiction), accuracy was not able to be evaluated.\\n\\nExperimentally, it would be good to compare against some prior work that doesn't include the pairing. Perhaps an interpretability model based on the passage only without fine-grained pairing with the question? My apologies if this corresponds to \\\"Independent\\\", I was somewhat confused by descriptions of the baseline.\\n\\nThe descriptions of the baselines was the least clear part of this paper. 
It would be helpful to improve the clarity of Section 4.1 (perhaps adding a figure).\", \"optional_suggestion\": \"consider breaking up the experiment section into two subsections: one for the cases in which the question rationales are provided (results in Table 1), and one for the cases in which the question-side rationales are learned as well (Table 2). By putting all descriptions together, the paper explains two different settings and then needs to discuss which baselines are applicable to each setting and dataset and why.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper tackles the problem of generating rationales for text matching problems (i.e., two pieces of text are given). The approach is in a similar spirit as (Lei et al, 2016) while the latter mainly focuses on one piece of text for text classification problems and this work focuses on generating pairs of rationales. The approach has been evaluated on NLI and QA datasets, which demonstrates that the generated rationales are sensible and comes at a cost of accuracy.\", \"the_approach_employs_a_generation_encoding_generation_schema\": \"it firsts generates the rationale from one side as a sequence tagging problem, re-encodes the rationales and predicts the rationale on the other side as a span prediction problem. Leveraging a match-LSTM framework and generated rationales for prediction, the model can be trained using a policy gradient method.\\n\\nOverall, I think this problem is novel and interesting. However, I am not fully convinced whether the proposed solution (and its implementation) is the right way to do so. Also, the paper writing needs to much improved.\\n\\nFirst of all, there is certainly a drop in the end task performance while it is unclear whether the derived rationales are really that useful (if the goal is interpretability) in the current evaluation. I am not convinced by the noisy SciTrail evaluation for rationales -- the noisy part p\\u2019 can be totally irrelevant and assume that the rationale generation component learns some sort of alignment between two parts, so it is not surprising that the model will not select words from p\\u2019 and it doesn\\u2019t really show that the rationales are useful. 
I think it is necessary to conduct some human evaluation for generate rationales and also provide some simple baselines for comparison (for example, just converting the soft-attention in math-LSTM to some hard selections) and see if this interpretability (at a cost of task performance) is really worthy or not.\\n\\nSecondly, I am not sure that whether the current way of generating the rationale pairs really makes sense or not.\\nIt casts the rationale generation on one side as a tagging problem while the rationale generation on the other side as a span prediction problem. Why is that? Do you make any assumption that the two pieces of texts are not symmetric (e.g., one side is much longer than the other side like most of the current QA setup)?\\n\\nThere is a regularization term for both x and y but it seems that there isn\\u2019t any constraint that the generated rationales on the y side are not overlapping. Is it a problem or not? I don\\u2019t know how this is dealt with in the implementation.\\n\\nUnderstanding sec 3 takes some efforts and I think the presentation could be much improved. For example, q * {x^k} is not defined -- I assume it means extracting the subset of q based on the 1\\u2019s in {x^k}. The equations in Sec 3.2 can be made clearer.\\n\\nFinally, it is also unclear that how the 3 datasets were chosen. There are so many NLI and QA datasets (some of them are more popular and more competitive) at this point. Is there a reason that these datasets were chosen? There is a setup called \\u2018no rationalization w/ re-encoding\\u2019 which means that the rationale is already provided on one side, but is unclear that whether the OpenIE tuple and the searchQA queries can be used as rationales directly.\", \"minor_points\": [\"Distal supervision -> distant supervision\", \"The first paragraph of Introduction, \\u201cabsent attention or rationale mechanisms\\u201d, what does it mean by \\u2018absent attention\\u2019? 
Isn\\u2019t it the case that all the models used attention mechanisms?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interpretability is important; but without human data we cannot evaluate it\", \"review\": \"This paper proposes an approach to introduce interpretability in NLP tasks involving text matching. However, the evaluation is not evaluated using human input, thus it is not clear whether the model is indeed meeting this important goal. Furthermore, there is no direct comparison against related work on the same topic, so it is not possible to assess the contributions over the state of the art on the topic. In more detail:\\n\\n- There are versions of attention mechanisms that are spare and differentiable. See here:\", \"from_softmax_to_sparsemax\": \"A Sparse Model of Attention and Multi-Label Classification\\nAndr\\u00e9 F. T. Martins, Ram\\u00f3n Fernandez Astudillo \\n\\n- Why is \\\"rationalizing textual matching\\\" different than other approaches to explaining the predictions of a model? As far as I can tell, thresholding on existing attention would give the same output. I am not arguing that there is nothing different, but there should be a direct comparison, especially since eventually the method proposed is thresholded as well by limiting the number of highlights.\\n\\n- A key assumption in the paper is that the method identifies rationales that humans would find useful as explanations. However there is no evaluation of this assumption. For a recent example of how such human evaluation could be done see:\\nD. Nguyen. Comparing automatic and human evaluation of local explanations for text classification. NAACL 2018\", \"http\": [\"//www.dongnguyen.nl/publications/nguyen-naacl2018.pdf\", \"I don't agree that explanations are sufficient if removing them doesn't degrade performance. While these two are related concepts, the quality of the explanation to a human is different to a system. In fact, more text can degrade performance when it is unrelated. 
See the experiments of this paper:\", \"Adversarial Examples for Evaluating Reading Comprehension Systems.\", \"Robin Jia and Percy Liang. EMNLP 2017: http://stanford.edu/~robinjia/pdf/emnlp2017-adversarial.pdf\", \"Reducing the selection of rationales to sequence tagging eventually done as classification is suboptimal compared to work on submodular optimization (cited in the introduction) if being concise is important. A comparison is needed.\", \"There is an argument that the training objective makes generated rationales corresponded and sufficient. This requires some evidence to support it.\", \"What is the \\\"certificate of exclusion of unselected parts\\\" that the proposed method has?\", \"An important argument is that the performance does not degrade. However there is no comparison against state of the art models to verfiy it.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HJeQToAqKQ | TherML: The Thermodynamics of Machine Learning | [
"Alexander A. Alemi",
"Ian Fischer"
] | In this work we offer an information-theoretic framework for representation learning that connects with a wide class of existing objectives in machine learning. We develop a formal correspondence between this work and thermodynamics and discuss its implications. | [
"representation learning",
"information theory",
"information bottleneck",
"thermodynamics",
"predictive information"
] | https://openreview.net/pdf?id=HJeQToAqKQ | https://openreview.net/forum?id=HJeQToAqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJlZC24We4",
"HJlM26N9AX",
"rklxcTVcAQ",
"HkxVBTEqAX",
"H1lj4p7GaQ",
"BklzybW0hQ",
"ByeW2hxR3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544797385514,
1543290282305,
1543290247821,
1543290171705,
1541713202598,
1541439705760,
1541438633499
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper786/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper786/Authors"
],
[
"ICLR.cc/2019/Conference/Paper786/Authors"
],
[
"ICLR.cc/2019/Conference/Paper786/Authors"
],
[
"ICLR.cc/2019/Conference/Paper786/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper786/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper786/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Connecting different fields and bringing new insights to machine learning are always appreciated. But since it is challenging to do it needs to be done well. This paper falls short here.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Ambitious aim but not well-enough done\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the review.\\n\\nWe agree Section 4 is rather terse. Given space constraints we weren't able to describe things in much detail and currently leave too much unsaid. We thought the analogy was interesting enough to discuss, even if not in detail.\\n\\nDo you think the paper would be improved if Section 4 was eliminated entirely? Is the rederivation of existing objectives laid out in the initial sections novel enough to stand on its own?\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the review and careful read of the paper, and constructive criticism.\\n\\nWe agree that there is more work to be done in further exploring the analogy to thermodynamics, but at least at present thought the existence of the analogy was interesting enough to warrant the current draft. We hope to develop tighter analogies and analytical examples for simple systems, but realistically we are already very pressed for space.\\n\\nWe agree that more should be said about the choice of world Q and its implications. Part of the difficulty is the world Q shown in the body of the main work isn't necessarily the best, but it is the one that most directly connects with a wide range of existing objectives in the literature. We thought it is interesting to note that existing objectives can be motivated as minimizing an information projection to the world Q shown. We should better emphasize that if the reader finds world Q suspect, but trusts our general program, this could reasonably fuel a suspicion of the utility of the existing objectives.\\n\\nWe are still investigating the utility of the modified objective in Appendix A. (We note that world Q in Appendix A is Markov equivalent to the one in the main body, the Z<-X arrow simply changed direction.) We suspect it might actually prove a more useful objective in practice than the existing formulation. We suspect the existing objective has the form it does not for any deep reason but because people naturally think in terms of decoders as a natural element of learning a useful representation. That in the infinite family limit there is an equivalence in the two forms of objective for a parametric representation with and without a decoder we find interesting. But at present we can only point out this equivalence as we haven't finished the experimental investigation yet.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the review. We agree the paper is dense at places, but it is already pressing up against the page limit, we were unsure of how to balance the scope of things we wish to talk about with the limited space.\\n\\nIn the original objective (eqn. 7) (that was minimized) I(Z;X\\u0398) - I(Z;X) appeared, this is identically I(Z;\\u0398|X) by the chain rule (eqn 9). Does this clarify why I(Z;\\u0398|X) is minimized? We will try to make this more clear in the text.\\n\\nGiven space constraints would additional examples in appendices help?\\n\\nWe agree Section 4 is terse at present. Again, space constraints means it had to be pretty condensed.\"}",
"{\"title\": \"Interesting perspective connecting many machine learning objectives\", \"review\": \"This paper introduces an information-theoretic framework that connects a wide range of machine learning objectives, and develops its formal analogy to thermodynamics.\\nThe whole formulation attempts to align graphical models of two worlds P & Q and is expressed as computing the minimum possible relative information (using multi-informations) between the two worlds. Interestingly, this computation consists of four terms of mutual information, each of which is variationally bounded by a meaningful functional: entropy, rate, classification error, distortion. Finding points on the optimal feasible surface leads to an objective function with the four functionals, and this objective is shown to cover many problems in the literature. The differentials of the objective bring this framework to establish formal analogies between ML and thermodynamics: the first law (the conservation of information), the Maxwell relations, and the second law (relative entropy decrease). \\n\\nThe main contribution of this paper would be to provide a novel and interesting interpretation of previous ML techniques using an objective function in an information theoretic viewpoint. Drawing the objective from the tale of two worlds and connecting them with existing techniques is impressive, and the analogies to thermodynamics are reasonable. I appreciate this new perspective of this paper and think this direction is worth exploring for sure. The terms and relations derived in the course of this work might be useful for understanding or analyzing ML models. \\n\\nOn the other hand, this paper is not easy to follow. It\\u2019s written quite densely with technical details omitted, and in some parts lacking proper explanations, contexts, and implications. 
\\nE.g., \\n- In section 2, why the world Q is what we want?\\n- Among the mutual information terms, it\\u2019s not clear why I(Z_i; X_i, Theta) need to be minimized. After the chain rule, while the part of I(Z_i;Theta | X_i) needs to be minimized, isn\\u2019t that I(Z_i; X_i) needs to be maximized? \\n- The functionals and their roles (Section 2.1) need to be more clarified.\\n- In the first paragraph of Section 3, why is that \\u201cany failure of the distributional families \\u2026. feature surface\\u201d?\\nFor a broader audience, I recommend the authors to clarify with more explanations, possibly, with motivating examples.\\n- Formal analogies to thermodynamics (Section 4) are interesting, but remains analogies only without any concrete case of usefulness. The implications of the first and second laws are not explained in detail, and thus I don\\u2019t see their significance. In this sense, section 4 appears incomplete. I hope they are clarified.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Formal analogy needs better motivation, clarity, and perhaps worked examples\", \"review\": \"This paper attempts to establish a notion of thermodynamics for machine learning. Let me give an attempt at summary. First, an objective function is established based on demanding that the multi-information of two graphical models be small. The first graphical model is supposed to represent the actual dependence of variables and parameters used to learn a latent description of the training data, and the model demands that the latents entirely explain the correlation of the data, with the parameters marginalized out. Then, a variational approximation is made to four subsets of terms in this objective function, defining four \\\"thermodynamic\\\" functionals. Minimizing the sum of these functionals puts a variational upper bound on the objective. Next, the sum is related to an unconstrained Lagrange multiplier problem making use of the facts (1) that such an objective will likely have many different realizations of the thermodynamic functionals for specific value of the bound and (2) that on the optimal surface the value of one of the functional can be parametrized in terms of the three others. If we pick the entropy functional to be parameterized in terms of the others, we find ourself precisely in the where the solution to the optimization is a Boltzmann distribution; the coefficients of the Lagrange multipliers will then take on thermodynamic interpretations in of temperature, generalized chemical potentials, etc. At this point, the machinery of thermodynamics can be brought to bear, including a first law, Maxwell relations (equality of mixed partial derivatives), etc.\\n\\nI think the line of thinking in this paper is very much worth pursuing, but I think this paper requires significant improvement and modifications before it can be published. Part of the problem is that the paper is both very formal and not very clear. 
It's hard to understand why the authors are establishing this analogy, where they are going with it, what's its use will be, etc. Thermodynamics was developed to explain the results of experiments and is often explained by working out examples analytically on model systems. This paper doesn't really have either such a motivation or such examples, and I think as a result I think it suffers.\\n\\nI also think the \\\"Tale of Two Worlds\\\" laid out in Section 2 requires more explanation. In particular, I think more can be said about why Q is the the \\\"world we want\\\" and why minimizing the difference between these worlds is the right way to create an objective. (I have no real problem with the objective once it is derived.) Since this paper is really about establishing this formal relationship, and the starting point is supposed to be the motivating factor, I think this needs to be made much clearer.\\n\\nThe I(Z_i, X_i, Theta) - I(X_i, Z_i) terms could have been combined into a conditional mutual information. (I see this is discussed in Appendix A.) This leads to a different set of variational bounds and a different thermodynamics. Why do we prefer one way over the other? At the level of the thermodynamics, what would be the relationship between these different ways of thinking? Since it's hard to see why I want to bother with doing this thermodynamics (a problem which could be assuaged with worked examples or more direct and clear experiments), it's hard to know how to think about this sort of freedom in the analogy. (I also don't understand why the world Q graphical model is different in Appendix A when we combined terms this way, since the world Q lead to the objective, which is independent of how we variationally bound it.) 
I think ultimately the problem can be traced to the individual terms in the objective (7) not being positive definitive, giving us the freedom to make different bounds by arranging the pieces to get different combinations of positive definite terms. How am I supposed to think about this freedom?\\n\\nIn conclusion, I would really like to see analogies like this worked out and be used to better understand machine learning methods. But for this program to be successful, I think a very compelling case needs to be made for it. Therefore, I think that this paper needs to be significantly rewritten before it can be published.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A formal framework for representation learning.\", \"review\": [\"This paper builds on the (Alemi et al 2018) ICML paper and presents a formal framework for representation learning. The authors use a graphical model for their representation learning task and use basic information theoretic inequalities to upper-bound their measure of performance which is a KL divergence. The authors then define the optimal frontier which corresponds to the lowest possible upper-bound and write it as an optimization problem. Written with Lagrange multipliers, they obtain several known cost functions for different particular choices of these parameters.\", \"Then the authors make a parallel with thermodynamics and this part is rather unclear to me. As it is written, this section is not very convincing:\", \"section 4.1 after equation (27) which function is 'smooth and convex'? please explain why.\", \"section 4.1 '...the actual content of the law is fairly vacuous...'\", \"section 4.2 the explanation of equation (30) is completely unclear to me. Please explain better than 'As different as these scenarios appear (why?)...'\", \"section 4.2 'Just as in thermodynamics, these susceptibilities may offer useful ways to characterize...'\", \"section 4.2 'We expect...'\", \"section 4.3 ends with some unexplained equations.\", \"As illustrated by the examples above, the reader is left contemplating this formal analogy with thermodynamics and no hint is provided on how to proceed from here.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rklXaoAcFX | Geomstats: a Python Package for Riemannian Geometry in Machine Learning | [
"Nina Miolane",
"Johan Mathe",
"Claire Donnat",
"Mikael Jorda",
"Xavier Pennec"
] | We introduce geomstats, a Python package for Riemannian modelization and optimization over manifolds such as hyperspheres, hyperbolic spaces, SPD matrices or Lie groups of transformations. Our contribution is threefold. First, geomstats allows the flexible modeling of many a machine learning problem through an efficient and extensively unit-tested implementations of these manifolds, as well as the set of useful Riemannian metrics, exponential and logarithm maps that we provide. Moreover, the wide choice of loss functions and our implementation of the corresponding gradients allow fast and easy optimization over manifolds. Finally, geomstats is the only package to provide a unified framework for Riemannian geometry, as the operations implemented in geomstats are available with different computing backends (numpy,tensorflow and keras), as well as with a GPU-enabled mode–-thus considerably facilitating the application of Riemannian geometry in machine learning. In this paper, we present geomstats through a review of the utility and advantages of manifolds in machine learning, using the concrete examples that they span to show the efficiency and practicality of their implementation using our package | [
"Riemannian geometry",
"Python package",
"machine learning",
"deep learning"
] | https://openreview.net/pdf?id=rklXaoAcFX | https://openreview.net/forum?id=rklXaoAcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1l3uB2A14",
"HJlGVIQ9CX",
"HJxEOxqZpX",
"rkgs_koa2m",
"HyeF-4_9hm",
"SkefgeUtnX",
"ryeC8TwO3Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544631668125,
1543284266012,
1541673068082,
1541414770767,
1541207040827,
1541132265977,
1541074261747
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper785/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper785/Authors"
],
[
"ICLR.cc/2019/Conference/Paper785/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper785/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper785/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper785/AnonReviewer3"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Learning on Riemannian manifolds can be easily done with this Python package. Considering the recent work on these in latent-variable models, the package can be quite a useful approach.\\n\\nBut its novelty is disputed. In particular Pymanopt is a package that does mostly the same, even though that may be computationally more expensive. The merits of Geomstats vs. Pymanopt is not clarified. But be that as it may, there is interest amongst the reviewers for the software package.\\n\\nIn the end, too, it's not uniformly agreed upon that a software-describing paper fits ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"perhaps for another venue?\"}",
"{\"title\": \"Geomstats: beyond optimization, a flexible package for modelization and machine learning on Riemannian manifolds\", \"comment\": \"We thank the reviewers for their constructive feedback which has shown us the need to clearly highlight the impact and scope of our contribution. We answer their concerns regarding the novelty, practicality and use of the package. In summary, we feel that the reviewers have focused too much on the \\u201cRiemannian optimization\\u201d side of Geomstats, thus partly neglecting our package\\u2019s novelty which is its completeness and flexibility in terms of Riemannian geometry.\\n \\n# Novelty with respect to other softwares\\n\\nR1 and R2 expressed concerns on the novelty of Geomstats in light of other existing softwares to perform optimization on manifolds. We contend that Geomstats was built with a completely different purpose than these softwares: its objective is to foster research and use of \\u201cRiemannian\\u201d data models. This translates into a set of unique specificities:\\n\\n1. Geometry : the package\\u2019s object-oriented design of Riemannian geometry is in itself a (novel) mathematical contribution. Having common categories of objects and operations for differential geometric structures such as affine connection spaces, Riemannian spaces, Lie groups, etc. is not easy to realize and requires an important effort to verify if theorems and notions continue to hold for each structure (done in our unit tests). This is not at all present in any other packages.\", \"our_package_is_also_the_first_to_provide_a_flexible_choice_of_riemannian_metrics_and_corresponding_methods\": \"metrics and left- or right- invariant metrics on Lie groups are not implemented in any of the other packages. R1 argues that these metrics could be easily added to Pymanopt and we disagree. The metrics would come within Pymanopt only for the purpose of optimization, which considerably restricts their use. 
Moreover, they wouldn\\u2019t come with exponential and logarithm maps, inner products at each tangent space, tangent pca, etc., because Pymanopt focuses on efficient optimization rather than completeness.\\n\\nOur package thus positions itself as a package of Riemannian modelization, with significantly more geometric features than Pymanopt which reflect this specific purpose.\\n\\n2. Flexibility: Geomstats is uniquely flexible and modular enough to allow researchers to contribute their code. For instance, a researcher recently contributed an algorithm for optimal quantization on Riemannian manifolds, making use of our exponential maps and enriching the package with new methods. This could not have been implemented in the context of Pymanopt. This flexibility is also reflected in the multiple use of Riemannian geometry that Geomstats permits. It is in particular the only package that draws a heavy accent on statistical analysis of manifolds (new notion of mean, variance, etc.), which come readily implemented in Geomstats--making it a unique gateway to principled inference on manifolds.\\n \\nIt seems that the reviewers would have preferred one specific objective for Geomstats (e.g. optimization, or one specific example of its use), but this would have been against its purpose which is meant to be broad, as a fertile ground for geometry in machine learning.\\n \\n3. Practicality \\n \\nGeomstats is in python and fully integrated with numpy, while Manopt and Roptlib are being mainly devised for Matlab or Julia, thus severely hindering the adoption of Riemannian geometry by the machine learning community.\\n \\nGeomstats is yet the only Riemannian software directly integrated with Tensorflow and Keras, allowing GPU computations and thus computational efficiency and the use of Riemannian geometry in large scale problems, as shown in Sec 5.3 and [1]. 
Using Geomstats in this large scale deep learning application improves the performance in all image metrics ([1] Table 2), while training time is comparable (communication from [1]). \\n\\n# Software document\\n \\nR3 considers our submission as more of a software document than a scientific paper. We introduce a software meant for machine learning research, being conform to ICLR\\u2019s call for papers on \\u201csoftware platforms\\u201d.\\n \\nWe thank the reviewers again for the attention given to our contribution and hope that we have lifted their objections to our manuscript. We believe this package is of general interest with a potentially broad impact for the ML community and should get visibility at ICLR conference to foster collaborative work around geometry.\\n\\n[1] Hou, B., Miolane, N., Khanal, B., Lee, M., Alansary, A., McDonagh, S., Hajnal, J., Ruecket, D., Glocker, B., Kainz, B.: Deep pose estimation for image-based registration. MICCAI 2018.\"}",
"{\"title\": \"Nice package but with limited novelty and largely undemonstrated advantages to existing frameworks\", \"review\": \"The paper introduces the software package geomstats which provides simple use of Riemannian manifolds and metrics within machine learning models. Like theanogeometry, geomstats provides a backend for fast computation. Instead of theano, they interface tensorflow and numpy.\\n\\nThe core problem the author\\u2019s have to argue against is the existence of various other packages like pymanopt (which are mentioned in the paper) providing similar functionality.\\n\\nThe main advantage to pymanopt is stated to be lower computational cost. Unfortunately, this is not evaluated empirically. Pymanopt similarly provides the option to provide the cost function with tensorflow and uses numpy/scipy internally, therefore also making use of vectorization. A favorable empirical comparison would have been a compelling case for geomstats. While geomstats provides some more metrics than pymanopt, it lacks in other areas in comparison. Such metrics could be added relatively easily to pymanopt (or some of the other competing libraries).\\n\\nTruly novel is the update for Keras which allows Riemannian gradient descent on parameters living on manifolds. Unfortunately, this is not directly shown and discussed further in the paper, but the reader is referred to the code base. While plenty of examples were provided in the supplementary material, I\\u2019d have preferred to see specific example(s) being shown and discussed in the paper. In the end, the main paper alone only gives an overview of what exists, but gives me no idea on how the package is used.\\n\\nIn parts, the paper reads more like an argument in favor of Riemannian modelization and optimization, instead of advocating for the specific package. 
While it is very important to demonstrate potential applications for the framework - an area in which this paper excelled - other, more important parts (mentioned above), were omitted because of it. Similarly, a lot of time is needlessly spent on defining well-known manifolds.\\n\\nOn the formal side, the formatting of the citations within the text don\\u2019t adhere to the official style guide which prescribes the use of authors\\u2019 last names and year.\\n\\nOverall, the software package seems to provide nice functionality with integration into a currently popular machine learning framework, but it\\u2019s novelty compared to existing software packages is limited. The novel parts (performance improvements, integration with keras) are not sufficiently demonstrated in the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novelty not clear\", \"review\": \"Summary:\\nThe paper is well written and easy to follow. The paper proposes a Python package for optimization and applications on Riemannian manifolds.\", \"comments\": \"C1.\\nThe main concern of the package is on novelty. That there exist other packages, e.g., Pymanopt [1], Manopt [2], ROPTLIB [3], which do a similar job as Geomstats. It is hard to understand from the paper on what is the key contribution of the present package. The paper does try to highlight the differences between the Goemstats package and others by emphasizing the lack of \\u201cchoice of other metrics and high computational costs of\\u201d others. This is, however, not shown in the paper as to how Geomstats is better in terms of computational complexity. On the choice of the metrics too, it should be noted that all the new geometries of the manifolds (for different metrics) can be easily added to the toolboxes [1,2,3]. Those toolboxes are modular and come with a lot of solvers and have already been used in many large-scale applications.\\n\\nHaving said that, one key strength of the Geomstats package is that it can be used in a deep learning framework in a relatively straightforward manner. This is mentioned in the paper but is not properly emphasized. However, here also, the paper does not say what algorithms can be used as part of Geomstats (except the stochastic gradient descent optimizer). \\n\\nOverall, I got the impression that the proposed package positions itself as a package of Riemannian manifolds (and all of the differential geometric notions). If that is so, then this is problematic. Pymanopt and others already do that fairly well, and consequently, the justification for a new package on that basis is difficult. \\n\\n[1] https://pymanopt.github.io/\\n[2] https://www.manopt.org/\\n[3] https://www.math.fsu.edu/~whuang2/Indices/index_ROPTLIB.html\\n\\n\\nC2.\\nThe citations are not properly rendered. 
It is hard to distinguish between the equations in the paper and the references in the paper.\\n\\nC3.\\nInstead of multiple use cases in the paper, could the paper focus one particular use case to show the important functionalities of Geomstats? Currently, it seems that the paper is more about surveying where all Riemannian geometry is useful.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"More of a software document than a scientific paper!\", \"review\": \"This paper introduces Geomstats, a geometric toolbox for machine learning on Riemannian manifolds. In comparison to previous packages such as manopt and pymanopt, the paper claims that the proposed software provide more efficient implementations, and is integrated with deep learning backends. Several potential applications settings for the software are explored, introducing some performance gains on specific problems when one resorts to the geometry of the space. An example setting for deep learning on SE(3) is also presented.\", \"strengths\": \"The use of such a toolbox could be a significant step to leveraging geometric models in deep learning.\", \"weakness\": \"1. The paper is written as more of a software document than a scientific paper. Several well-known manifolds are presented in the setting of Geomstats, however what is lacking is a treatment of some of the internals of how backpropagation can be implemented effectively for such a toolbox. For example, many of the Riemannian geometric algorithms need to resort to numerical algorithms such as SVD and such; how effective or feasible is automatic differentiation in dealing with such cases? \\n\\n2. The paper discusses computational advantages in the introduction -- however such advantages are not quantitatively analyzed across different platforms or prior softwares. \\n\\nOverall, it is not clear why this work needs to be treated as a scientific paper? It appears to be more of a tutorial on the use of the proposed software.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"geomstats has a potential to generate large impact to the community\", \"review\": \"This white paper presents the geomstats package. The package provides tools for Riemannian modelization and\\noptimization over manifolds. Especially, the package supports several important manifolds: hyperspheres, hyperbolic spaces, spaces of SPD matrices or Lie groups of transformations.\", \"pros\": \"1. the paper shows ever use cases of machine learning with manifolds. These use cases are concrete and representative. \\n2. the code in the package is extensively tested.\", \"cons\": \"There is no discussion about the scalability of the package.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"comment\": \"This is not an official review, just a few comments from an interested reader.\\n\\nThe contribution in the present paper is a piece of software, i.e. no new methodology is presented. This seem to be in line with the ICLR CfP, though personally I would prefer fairly few \\\"software only\\\" papers at conferences I attend.\\n\\nI wanted to ask the authors, how tensorflow and keras are modified? The paper states \\\"We provide modified versions of keras and tensorflow...\\\" Does this mean that you have forked these libraries and modified them? Does that mean that using \\\"geomstats\\\" imply not using my own tensorflow/keras installation?\", \"title\": \"How is tensorflow and keras modified?\"}"
]
} |
|
Hk4fpoA5Km | Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning | [
"Ilya Kostrikov",
"Kumar Krishna Agrawal",
"Debidatta Dwibedi",
"Sergey Levine",
"Jonathan Tompson"
] | We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments. | [
"deep learning",
"reinforcement learning",
"imitation learning",
"adversarial learning"
] | https://openreview.net/pdf?id=Hk4fpoA5Km | https://openreview.net/forum?id=Hk4fpoA5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1luFSOHI4",
"HyloBdL18N",
"S1xE3KP-fE",
"HkxrtRlLlV",
"S1gnoccmC7",
"HkeHJf0GRQ",
"SJxux2BMCm",
"H1gEMY4eAQ",
"BkgkROElAm",
"SJeQPruhpX",
"B1gGZfAq6Q",
"Byebk4m5Tm",
"rkeAEjMcT7",
"rJgV29zq67",
"SygDehUmpQ",
"rJePR4gGT7",
"BJxqUpTA3Q",
"rJgH5Yg0h7",
"HyebhMXa2X",
"ByeHNeho37",
"S1gpuXlih7",
"S1gzKHFunm",
"S1gOUczL3m",
"H1xlBqzU2X",
"BygoS4eHnQ",
"BkxfyEhmhm",
"B1l_dvkgn7",
"Byl5uav0oX"
],
"note_type": [
"official_comment",
"comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1551365503903,
1550964803383,
1546906028305,
1545109117220,
1542855331587,
1542803932805,
1542769647804,
1542633740274,
1542633671263,
1542387034658,
1542279674234,
1542235097071,
1542232886085,
1542232748035,
1541790702809,
1541698767339,
1541492049781,
1541437837068,
1541382825420,
1541287980911,
1541239669362,
1541080441616,
1540921936304,
1540921912497,
1540846658811,
1540764634236,
1540515696503,
1540418929718
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"~Samin_Yeasar_Arnob1"
],
[
"~Sheldon_Benard2"
],
[
"ICLR.cc/2019/Conference/Paper784/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper784/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper784/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Our implementation\", \"comment\": \"Thanks for your feedback regarding our paper! We extremely appreciate your efforts.\", \"we_have_open_sourced_the_original_implementation_of_our_paper\": \"\", \"https\": \"//github.com/google-research/google-research/tree/master/dac\"}",
"{\"comment\": \"Hi, even though we couldn't replicate the results in time for the reproducibility challenge, after consulting with the authors we were able to get similar results (we did our experiments on Hopper-v2, Ant-v2, Half-cheetah-v2, Walker2d-v2) with occasional performance drops (except for Half-cheetah-v2). According to the authors, they used 10 random seeds and averaged over the results, but due to limited computational resources we're not able to confirm that. We do confirm that it replicates the performance and sample efficiency from figure 4. I really appreciate the authors' effort in taking the time to evaluate our work and getting back to us.\", \"title\": \"Update on Reproducibility\"}",
"{\"comment\": \"As part of the ICLR 2019 Reproducibility Challenge, we (Sheldon Benard, Vincent Luczkow, & Samin Yeasar Arnob) attempted to replicate the results of Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Inverse Reinforcement Learning. Discriminator-Actor-Critic (DAC) is an adversarial imitation learning algorithm. It uses an off-policy reinforcement learning algorithm to improve upon the sampling efficiency of existing methods, and it extends the learning environment with absorbing states and uses a new reward function in order to achieve unbiased rewards. We were able to achieve comparable rewards and sample efficiency on two of the four environments. For the environments in which we were unable to reproduce the original results, we will continue to perform experiments and converse with the authors. All of our code is available at:\", \"https\": \"//github.com/vluzko/dac-iclr-reproducibility\", \"title\": \"ICLR 2019 Reproducibility Challenge Description\"}",
"{\"metareview\": \"This work highlights the problem of biased rewards present in common adversarial imitation learning implementations, and proposes adding absorbing states to fix the issue. This is combined with an off-policy training algorithm, yielding significantly improved sample efficiency, whose benefits are convincingly shown empirically. The paper is well written and clearly presents the contributions. Questions were satisfactorily answered during discussion, and resulted in an improved submission, a paper that all reviewers now agree is worth presenting at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Well written paper highlighting and fixing a common problem in Adversarial Imitation Learning algorithms\"}",
"{\"title\": \"Response to the update from AnonReviewer2\", \"comment\": \"We again would like to emphasize that we appreciate your patience and valuable feedback that helps us to improve our submission.\\n\\nWe have updated the paper to try to address your suggestions. In particular:\\n\\n1) As per your suggestion, we extended the last paragraph of Section 3.1 in order to clarify our discussion on episode termination because of time limits. We believe that it adds clarity to the paper because it discusses termination states in more detail. It also explains the difference between absorbing states and rollout breaks. For a detailed discussion on implementation-specific biases in algorithms (GAIL/AIRL) please refer to Section 4.1. \\n\\n2) In Section 4.1, we enumerate papers affected by this problem, with specific instances. For each paper that we cite in this section, we consider the official implementations provided by the authors. In the same section, we further elaborate on how exactly these algorithms are affected by the issue.\\n\\n3) In Section 4.2, we assume an infinite horizon for R_T since the series converges (assuming reward bounded by r_max, the series is bounded by gamma/(1-gamma) r_max and thus can be computed either analytically or will converge in the limit using TD updates; Section 3.1 now also includes a clarification of this point). We also extended Section 4.2 to clarify how absorbing states can be used by the AIL algorithms and how the corresponding transitions affect estimations of returns. Please see the second paragraph of Section 4.2.\\n\\n4) Regarding revisiting the illustrative example, we agree that the same reasoning might apply to Inverse RL algorithms in general and we appreciate your suggestion regarding the analysis of this simple example for MaxEnt-IRL.
We unfortunately will not be able to add such an experiment before the end of the revision period, but we have added some discussion in Section 4 that the basic principle applies also to other IRL methods. This can be considered an interesting direction of future work. We will attempt to add a better illustrative example in the final version (we just have not had time to do so), and will make sure to update the reviewers about it.\\n\\n5) We fixed the wrongly referenced section. Thanks for catching this. \\n\\nWe hope that this revision of our submission will address your concerns.\"}",
"{\"title\": \"Revised version is much better\", \"comment\": \"Thanks for the quick revision. The submission is much better now.\\nI updated my review to take the revised version into account, however, I did not feel comfortable adapting my rating quite yet (please refer to the review for an explanation).\\nI encourage you to further revise the submission.\"}",
"{\"title\": \"Additional response to AnonReviewer2\", \"comment\": \"Thank you for your detailed and encouraging response.\\n\\nWe have updated the paper to try to address your suggestions. We hope that this revised version more appropriately positions the contribution and draws a clear distinction between MDP formulation and algorithm, as per your suggestion. In particular:\\n\\n1. We now make it clear that the correct handling of absorbing states is something that should be applied to any inverse reinforcement learning or reward learning algorithm, whether adversarial or otherwise, and is independent of the DAC algorithm in that sense.\\n\\n2. We have added the suggested citation and other papers that discuss time limits (Pardo et al: https://arxiv.org/abs/1712.00378, Tucker et al: https://arxiv.org/abs/1802.10031) in the related work section.\\n\\n3. In Section 3, we've added a discussion of time limits in MDPs, as well as a discussion of how temporal difference methods can handle infinite-horizon tasks with finite-horizon rollouts (which is what DAC does also). Please see the last paragraph of the section.\\n\\n4. As per your suggestion, we have removed the illustrative example in Section 4.1.1. While we do believe that an example would help illustrate the issue to the reader, we understand your reservations against the illustrative example. We would like to attempt to add a better illustrative example in the final version (we just have not had time to do so), but we will be sure to make an additional post about it if we do, to confirm that it is satisfactory.\\n\\n5. For the sake of clarity, we removed the last paragraph from section 4.2 that discusses our choice of the implementation of bootstrapping for the terminal states.\\n\\nWe appreciate your patience, and would appreciate it if you took another look at the paper and let us know if this has addressed your concerns.\"}",
"{\"title\": \"Revised version is still very misleading 2/2\", \"comment\": \"\\\"We propose a new algorithm, which we call Discriminator-Actor-Critic (DAC) (see Figure 1), that is\\ncompatible with both the popular GAIL and AIRL frameworks, incorporates explicit terminal state\\nhandling, an off-policy discriminator and an off-policy actor-critic reinforcement learning algorithm\\\"\\n- Don't say that DAC incorporates terminal state handling. Rather write something like\\n\\n\\\"We propose a new algorithm, which we call Discriminator-Actor-Critic (DAC) (see Figure 1), that extends GAIL and AIRL by replacing the policy update by the more sample-efficient TD3 algorithm. Furthermore, our implementation of DAC includes a proper handling of terminal states that can be straightforwardly transferred to other inverse reinforcement learning algorithms. We show in ablative experiments that our off-policy inverse reinforcement learning approach requires approximately an order of magnitude fewer policy roll-outs than the state of the art, and that proper handling of terminal states is crucial for matching expert demonstrations in the presence of absorbing states.\\\"\", \"end_of_introduction\": \"\\\"\\u2022 Identify, and propose solutions for the problem of bias in discriminator-based reward estimation in imitation learning.\\\"\\n- As far as I can tell, there is no bias in discriminator-based reward estimation. I don't think that the proposed solution has to do with discriminators at all, but would affect any IRL algorithm and all those IL algorithms that use RL in the loop.
Change this point to something like \\\"Identify early termination of policy roll-outs in commonly used reinforcement learning toolboxes as cause of reward bias in the context of inverse reinforcement learning and propose a solution that allows to correctly match expert demonstrations in the presence of absorbing states.\\\"\\n\\nRelated work should discuss prior work related to incorrectly handling absorbing states (in RL or IRL). However, I don't think that there is much published literature about fixing implementation hacks. \\nI'm aware of a paper at last ICML [1] (that was previously rejected for ICLR due to lack of novelty), that discussed problems relating to time-limits in infinite horizon formulations which might be worth mentioning. \\n\\nSection 3 needs to explain how exactly the baselines implementation breaks IRL algorithms for absorbing states. The last paragraph of section 3.1. is not at all sufficient to communicate the root of the problem to the reader. Explain why the break in the roll-out violates the MDP formulation (which is assumed by the discussed algorithm) and that the learned reward function is thus not applied to the MDP. Add a new section (after 3.1 or 3.2) that is at least as detailed as my last comment.\\n\\nSection 4.1. also needs to be rewritten completely. There is no bias for the different reward formulations. Rather, applying IRL/IL algorithms without sufficient care to rl toolboxes that use hacky implementations can lead to different problems for different reward formulations.\\n\\nAlso section 4.1.1. still discusses the problem as if there was an inherent bias depending on reward formulation. Furthermore, I already pointed out several problems and errors related to the illustrative example (e.g. analysing an intermediate state of the algorithm, rather than a fixed-point). 
Maybe you could prove for your code example that AIRL does not converge and show a plot that compares the average trajectory length for the buggy implementation with my naive fix.\\n\\nSection 4.2. seems like the main technical contribution. The last paragraph still looks fishy to me and the reported problem of using the analytically derived return seems to result from an assumed infinite horizon formulation. I think that the MDP formulation used for handling absorbing states seems to assume (potentially very large) finite horizons and hence, R_T should at least theoretically depend on the current time step. Given that both equations are analytically equivalent, one equation can not be more stable than the other. When, however, the explicit summation is performed until a given horizon is reached, whereas the closed form solution assumes an infinite horizon, the returned values differ and the closed form solution is simply not sound.\\n\\n[1] Time Limits in Reinforcement Learning, Fabio Pardo, Arash Tavakoli, Vitaly Levdik, Petar Kormushev,\\nProceedings of the 35th International Conference on Machine Learning, PMLR 80:4045-4054, 2018.\"}",
"{\"title\": \"Revised version is still very misleading 1/2\", \"comment\": \"I agree that communicating an idea that is relevant and important for a large subset of the community can justify publishing a research paper--even if the technical contribution is marginal. However, the submission communicates an idea that in my opinion is just wrong. Namely, the submission communicates the idea that existing methods for IRL can not handle absorbing states and that learning reward functions that are always positive/negative can lead to an implicit bias. This is plain wrong and not helping the research community at all. Communicating this idea is not important but can be very harmful, especially when it is published at a conference like ICLR. I don't want to review papers next year that propose fixed offsets in order to enable their reward functions to produce both signs (and the like). I know that we need to sell our stuff and I'm fine with calling a TD3 replacement an all new algorithm. But, discussing a fix for a hack in a toolbox as an algorithmic enhancement just can not work out. The initial submission did not even give a hint that the bias is only caused by hacky implementations of the MDP, but pretended that it results from shortcomings of the algorithms. I agree that the revised version is much better by admitting that it only applies to specific implementations. However, in order to clearly communicate the actual idea it is not sufficient to add one small paragraph, because the original, harmful narrative pervades the whole paper. I propose a number of modifications (split over two comments due to character limits) to the paper that I think are necessary to communicate to the reader how the improved performance was reached. The contribution of the revised version could be just enough to push it over the acceptance threshold.\", \"introduction\": \"\\\"[...]
2) bias in the reward function formulation and improper handling of environment terminal states introduces implicit rewards priors that can either improve\\nor degrade policy performance.\\\"\\n- This still makes the impression that both AIL methods are biased and can not handle absorbing states correctly.\\n \\n\\\"In this work we will also illustrate how the specific form of AIL reward function used has a large\\nimpact on agent performance for episodic environments. For instance, as we will show, a strictly\\npositive reward function prevents the agent from solving tasks in a minimal number of steps and a\\nstrictly negative reward function is not able to emulate a survival bonus. Therefore, one must have\\nsome knowledge of the true environment reward and incorporate such priors to choose a suitable\\nreward function for successful application of GAIL and AIRL. We will discuss these issues in formal\\ndetail, and present a simple - yet effective - solution that drastically improves policy performance\\nfor episodic environments; we explicitly handle absorbing state transitions by learning the reward\\nassociated with these states\\\"\\n- This paragraph needs to be completely rewritten. The form of the reward function (whether it is strictly positive or negative) does in theory not matter at all. It is completely fine to learn a reward function that only produces positive/negative values. Don't make the impression, that IRL researchers should start looking for ways to learn reward functions that can produce both signs. Furthermore, from a theoretical perspective GAIL and AIRL already explicitly learn rewards associated with absorbing states. This paragraph should clearly state that commonly used implementations of the roll-outs are not in line with the MDP-formulation which may be fine for RL but can lead to problems with IRL approaches. 
You may already want to point to the \\\"break\\\"-statement and state that it prevents the learned reward function from being applied to absorbing states. Although it is interesting to show how strictly positive/negative reward functions are affected by such implementations and it is nice to discuss these effects in the paper (maybe not in the introduction) and confirm them in the experiment, don't discuss the sign of the reward as the central problem. Also make sure to state that you propose a different way of implementing the MDPs that allows early termination while fixing this problem. It is in my opinion crucial to discuss the problem and the solution in the context of implementing policy roll-outs / absorbing states. Make sure to show that this is a relevant problem that affects multiple toolboxes and that algorithms were incorrectly evaluated due to this issue - put in some references to support your claim that numerous works treat absorbing states incorrectly.\"}",
"{\"title\": \"Additional response to AnonReviewer2\", \"comment\": \"Thank you for your detailed response. We generally agree with the technical side of your description: MDPs with absorbing states require the absorbing states to be handled properly for IRL. This is in essence the point of this portion of our paper. We also agree that addressing this is not so much a new algorithm as it is a fix to the MDP. We have edited the paper to reflect this and clarify this point, please see the difference between the last revision and original submission (the abstract, sections 3.1 and 4). The fact that we test our solution by extending two different prior methods (GAIL and AIRL) reflects the generality of the solution.\\n\\nHowever, we respectfully disagree that this solution is obvious or trivial. Environments with absorbing states in the MuJoCo locomotion benchmark tasks have been used as benchmarks for imitation learning and IRL in one form or another for over two years. In this time, no one has corrected this issue, or even noted that this issue exists, and numerous works incorrectly treat absorbing states, resulting in results that are not an accurate reflection of the performance of these algorithms, as detailed in Section 5.2 and Figures 5,6 and 7 of our paper. This issue is severe, it is making it difficult to evaluate IRL and imitation algorithms, and as far as we can tell, most of the community is unaware of it. We believe that our paper will raise awareness of this issue and facilitate the development and evaluation of better IRL algorithms in the future. With your help, we have clarified this point further in our current paper. The purpose of a research paper is to communicate an idea that is relevant and important to a large subset of the community, and we believe that our paper does this.\"}",
"{\"title\": \"Why I think that the random resets are incorrectly implemented\", \"comment\": \"Let me elaborate on why I think that the failure of the existing methods to match the expert is caused solely by an incorrect implementation of the MDP and is not a shortcoming of the actual algorithms.\\n\\nRoll-outs in an MDP either have a fixed length (finite horizon) or an infinite length (infinite horizon). Variable length trajectories can be simulated by introducing absorbing states in a finite horizon formulation, as mentioned in section 3.1 of the submission. The infinite horizon case can be approximated using a large horizon and a time-dependent reward function for the discounting. However, in either case the absorbing states need to be treated in the same way as any other state in the MDP. Importantly, these states do not end the episode prematurely but just prevent the agent from entering any non-absorbing state and yield the same value for each policy. In reinforcement learning, we can typically stop the episode and return the Q-Value (which happens to equal the immediate reward, if the constant reward of absorbing states is assumed to be zero) which allows for more efficient implementations. However, it is important to note that the reward function is then only evaluated on the non-absorbing states and the rewards for absorbing states are implicitly assumed to be zero. Hence, when implementing policy roll-outs with a \\\"break\\\" one needs to be aware that the specified reward function does not correspond to the actual reward function of the MDP but affects only a subset of the possible state-action pairs (as those states of the MDP that we call \\\"absorbing\\\" will not be affected).
This is well known in reinforcement learning, and even exploited by specifying constant offsets in the reward function for survival bonus / time penalty which would be useless if the specified reward function would be the actual reward function of the MDP.\\n\\nUsing such implementation of an environment which is targeted at reinforcement learning and using it for inverse reinforcement learning \\nis incorrect, because IRL algorithms are typically derived for learning the reward function for the whole MDP and not for a subset of the MDP. How can we expect an algorithm to learn the correct constant offset of a reward function (which does affect the optimal policy in the given implementation) using a formulation that implies that an offset does not affect the optimal behaviour?\", \"to_summarize\": \"The failure of GAIL and AIRL of matching expert demonstrations for some RL toolkits with absorbing states is caused by implementation hacks that are fine for RL problems and specific reward functions, but not for IRL. Indeed, the convergence problem of the code example can be solved simply by implementing the MDP in the way it is defined in section 3.1.--using a (discounted) fixed horizon and absorbing states. My code can be found at https://colab.research.google.com/drive/11w0McKxg7AA6ueTQNbfTtYAyVrSKgU2z\\nThe only changes were\\n- adding the missing state (s_g) and transition (sg->sg) to the MDP\\n- removing the break in the roll-out\\n- using a full expert trajectory as demonstration (including absorbing transitions) \\n- solving some numerical issues when both expert and agent have probability 0 for choosing a given action.\\nThe algorithm converges reliably to a policy that produces average trajectory lengths of 2.0\\n\\nOf course, the solution of the paper is a bit more elegant since it avoids to simulate the whole trajectory, but the effect should be the same. 
It is important to raise awareness of such pitfalls, but I do not think that it is enough to write an ICLR paper about--especially if it is discussed as an algorithmic improvement, when the algorithms are just fine. \\n\\nAlso in conjunction with the other minor contributions (using all trajectories for training the discriminator--without any theoretical justification, and using a more sample efficient policy update), I don't think that the contributions of the submission are sufficient.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the detailed and constructive feedback. We address the above mentioned points and add some additional experiments, as detailed below.\\n\\nc) \\u201cBy assigning more cumulative reward for s2_a1->s1 than for s2_a2->g, the policy would (after a few more updates) choose the latter action much less frequently than with probability 0.5 and the corresponding reward would grow towards infinity until at some point Q(s2,a2) > Q(s2,a1)--when the policy would match the expert exactly.\\u201d\\n\\u201cThe paper further argues that a strictly positive reward function always rewards a policy for avoiding absorbing states, which I think is not true in general. A strictly positive reward function can still produce arbitrary large reward for any action that reaches an absorbing state.\\u201d\\n\\n> This is a good point, and we will discuss this situation in more detail in the final paper. However, we do not believe that this directly applies to adversarial learning algorithms, such as the ones studied in our paper. We provide discussion as well as a numerical example below, which will be included in the paper. \\n\\nThe aforementioned situation can only happen in the limit, but the next discriminator update will return the policy to the previous state, in which it is more advantageous to take a loop, according to the GAIL reward definition. Therefore, the original formulation of the algorithm does not converge in this case. In contrast, learning rewards for the absorbing states will resolve this issue. 
\\n\\nMoreover, the example provided by the reviewer assumes that we can fix the reward function at some point of training and then train the policy to optimality according to this reward function; while devising a scheme to early terminate learning of the reward function is possible, it is not specified by the dynamic reward learning mechanisms of the GAIL algorithm, which alternates updates between the policy and the discriminator. Please see a simple script that illustrates the example (anonymous link):\", \"https\": \"//github.com/sfujim/TD3/blob/master/main.py#L123\\n\\na) We note that this does make a substantial difference in terms of sample efficiency over prior work on adversarial IL, as shown in Figure 4 -- we believe that such substantial improvements in efficiency are of interest to the ICLR community, though it is not the sole contribution of our paper.\\n\\nb) We did use normalized importance weights, but unfortunately did not find that the resulting method performed well, while simply omitting importance weights achieved good performance. We think that the naive way of estimating importance weights increases the variance of updates. We will analyze this further in the final version, but for now we would emphasize that this is not the primary contribution of the work, but only a technical detail that we discussed for completeness.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the positive and constructive feedback.\\n\\nWe have extended the section 5.1 of the manuscript as suggested by the reviewer.\\n\\nBelow are detailed answers for the reviewer\\u2019s concerns: \\n\\n1) To simplify the exposition we omitted the entropy penalty as it does not contribute meaningfully to the algorithm performance in our experimentation. Similar findings were observed in the GAIL paper, where the authors disregarded the entropy coefficient for every tested environment, except for the Reacher environment.\\n\\n2) We added the performance of a random policy to the graph to be consistent with the original GAIL paper. We believe that it improves readability of the plot by providing necessary scaling.\\n\\n3) We already started working on additional experimentation as requested. We will update the manuscript as soon as we gather these results.\\n\\n4) We observed the same effect of having absorbing states in the Kuka arm tasks (Fig. 6), as in the MuJoCo environments. Also, we evaluated absorbing states within the AIRL framework for Walker-2D and Hopper environments (Fig. 7). We demonstrate that proper handling of absorbing states is critical for effectively imitating the expert policy. \\n\\nIn addition, we updated the paper to accommodate the minor suggestions proposed by the reviewer.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for the feedback and appreciate the strong recommendation.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments.\\n\\n1. Yes, that\\u2019s correct (using TD3 algorithm). For the target part it\\u2019s s\\u2019 and action is produced by the action target network: ||logD(s_a,\\u30fb)-log(1-D(s_a,\\u30fb)) + \\u03b3Q_theta_target(s\\u2019, A_target(s\\u2019),\\u30fb)\\u3000-Q_theta(s, a,\\u30fb) ||**2.\\n2. We used zero actions for the absorbing states.\\n3. No, we investigated it only with off-policy case. For the off-policy version of your second question, see Figures 6 and 7. However, the part related to absorbing states is independent of off-policy training.\"}",
"{\"comment\": \"I enjoyed reading your submission, and I am now trying to add absorbing state to AIRL.\\nI have 3 questions. First and second questions are about how to learn Q_theta(s_a,\\u30fb) and third is about ablation study.\\n\\nThree questions are below.\\n\\n1. I think that the target of Q_theta(s_a, \\u30fb) is logD(s_a,\\u30fb)-log(1-D(s_a,\\u30fb)) + \\u03b3Q_theta(s_a,\\u30fb). Is this right?\\n\\n2. What did you use as action at absorbing states for calculating D(s_a,\\u30fb) or Q_theta(s_a,\\u30fb)? You use random value?\\n\\n3. Did you investigate the effect of only absorbing states on on-policy GAIL or AIRL ? Did GAIL+absorbing states or AIRL + absorbing states work better than GAIL or AIRL?\\n\\nThank you!!\", \"title\": \"Question about Details of Algorithm and ablation study\"}",
"{\"comment\": \"It doesn't seem that the reviewer has put any effort into appreciating or criticising the paper and has merely summarised the paper in a few lines.\\nPlease provide a proper analysis for your acceptance decision and rating.\", \"title\": \"Question on expertise of the reviewer in this domain\"}",
"{\"title\": \"A Review on Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning\", \"review\": \"The authors find 2 issues with Adversarial Imitation Learning-style algorithms: I) implicit bias in the reward functions and II) despite abilities of coping with little data, high interaction with the environment is required. The authors suggest \\\"Discriminator-Actor-Critic\\\" - an off-policy Reinforcement Learning algorithm reducing sample complexity by a factor of up to 10 and being unbiased, hence very flexible.\\n\\nSeveral standard tasks, a robotic, and a VR task are used to show-case the effectiveness by a working implementation in TensorFlow Eager.\\n\\nThe paper is well written, and there is practically no criticism.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments!\\n\\nSince our algorithm uses TD3 (https://arxiv.org/pdf/1802.09477.pdf), we highly recommend using the original implementation of the algorithm (https://github.com/sfujim/TD3). Our reimplementation of TD3 reproduces the results reported in the original paper. Reproducing results with SAC might be harder since SAC requires tuning a temperature hyperparameter, which might require additional effort in combination with reward learning.\\n\\n1) We used a batch size equal to 100. We kept all transitions in the replay buffer.\\n2) That\\u2019s correct. For HalfCheetah, after performing 1K updates of the discriminator we performed 1K updates of TD3. During the early stages of development we tried the aforementioned suggestion of simultaneously updating the discriminator and policy, and it produced worse results.\\n3) Yes, we will include it in the appendix.\\n4) We used the gradient penalty described in https://arxiv.org/abs/1704.00028 and implemented in TensorFlow https://www.tensorflow.org/api_docs/python/tf/contrib/gan/losses/wargs/wasserstein_gradient_penalty with a coefficient equal to 10.\\n\\nAn additional note regarding reproducing results. Please take into account that, depending on when you subsample trajectories to match the original GAIL setup, you need to use importance weights. Specifically, if you first subsample expert trajectories taking every Nth transition, and then add absorbing states to the subsampled trajectories, you will need to use an importance weight of 1/N for the expert absorbing states while training the discriminator. We will explicitly mention this detail in the next version of the submission.\\n\\nWe would like to emphasize that upon publication of the paper we are going to open source our implementation.\\n\\nFeel free to request any additional information. We will be glad to provide everything needed to help you reproduce our results.\"}",
"{\"comment\": [\"I enjoyed reading your submission and was trying to reproduce some of your results. I am using Soft Actor-Critic with somewhat different model sizes than yours and had some practical questions that could help me be more effective. I was wondering:\", \"What is the batch size you use for the updates? How large is your replay buffer size (do you store all previous trajectories)?\", \"In your algorithm box, in the update section, it says \\\"for i = 1, ..., |\\\\tau|\\\". Does this mean that, for example, for the HalfCheetah environment you do 1000 updates every time you generate a trajectory?\", \"Would you be able to also include the numeric reward value your experts achieve on the tasks?\", \"Could you elaborate on the specific form of gradient penalty you use and the coefficient of the gradient penalty term?\", \"And, one separate question: Have you also tried simultaneously updating the discriminator and policy instead of the alternating scheme shown in the algorithm box?\", \"Thank you!\"], \"title\": \"Practical Details for Reproducing Results\"}",
"{\"title\": \"Sound and effective approach with little novelty, insufficient analysis of reward bias\", \"review\": \"The paper suggests using TD3 to compute an off-policy update instead of the TRPO/PPO updates in GAIL/AIRL in order to increase sample efficiency.\\nThe paper further discusses the problem of implicit step penalties and survival bias caused by absorbing states, when using the upper-bounded/lower-bounded reward functions log(D) and -(1-log(D)) respectively. To tackle these problems, the paper proposes to explicitly add a unique absorbing state at the end of each trajectory, such that its rewards can be learned as well.\", \"pro\": \"The paper is well written and clearly presented. \\n\\nUsing a more sample efficient RL method for the policy update is sensible and turned out effective in the experiments.\\n\\nProperly handling simulator resets in MDPs is a well known problem in reinforcement learning that I think is insufficiently discussed in the context of IRL.\", \"cons\": \"The contributions seem rather small.\\na) Replacing the policy update is trivial, since the RL methods are used as black-box modules for the discussed AIL methods. \\n\\nb) Using importance weighting to reuse old trajectories for the discriminator update hardly counts as a contribution either--especially when the importance weights are simply omitted in practice. I also think that the reported problems due to the high variance have not been sufficiently investigated. There should be a better solution than just pretending that the replay buffer corresponds to roll-outs of the current policy. Would it maybe help to use self-normalized importance weights? The paper does also not analyze how such an assumption/approximation affects the theoretical guarantees.\\n\\nc) The problem with absorbing states is in my opinion the most interesting contribution of the paper. However, the discussion is rather shallow and I do not think that the illustrative example is very convincing. 
Section 4.1.1 argues that for the given policy roll-out, the discriminator reward puts more reward on the policy trajectory than the expert trajectory. However, it is neither surprising nor problematic that the discriminator reward does not produce the desired behavior during learning. By assigning more cumulative reward for s2_a1->s1 than for s2_a2->g, the policy would (after a few more updates) choose the latter action much less frequently than with probability 0.5 and the corresponding reward would grow towards infinity until at some point Q(s2,a2) > Q(s2,a1)--when the policy would match the expert exactly. The illustrative example also uses more policy-labeled transitions than expert-labeled ones for learning the classifier, which may also be problematic. The paper further argues that a strictly positive reward function always rewards a policy for avoiding absorbing states, which I think is not true in general. A strictly positive reward function can still produce arbitrarily large reward for any action that reaches an absorbing state. Hence, the immediate reward for choosing such an action can be made larger than the discounted future reward when not ending the episode (for any gamma < 1). Even for state-only reward functions the problem does not persist when resetting the environment after reaching the absorbing state, such that the training trajectories contain states that are only reached if the simulator gets reset. Hence, I am not convinced that adding a special absorbing state to the trajectory is necessary if the simulation reset is correctly implemented. This may be different for resets due to time limits that cannot be predicted by the last state-action tuple. However, issues relating to time limits are not addressed in the paper. 
I also think that it is strange that the direct way of computing the return for the terminal state is much less stable than recursively computing it, and I think that the paper should include a convincing explanation.\\n\\n---------------\\nUpdate 21.11.2018\\n\\nI think my initial assessment was too positive. During the rebuttal, I noticed that the discussion of reward bias was not only shallow but also wrong in some aspects and very misleading, because problems arising from hacky implementations of some RL toolboxes were discussed as theoretical shortcomings of AIL algorithms. Hence, I think the initial submission should be clearly rejected. However, the authors submitted a revised version that presents the root of the observed problem much more accurately. I think that the revised version is substantially better than the original submission. However, I think that my initial rating is still valid (better: became valid), because the main issues that I raised for the initial submission still apply to the current revision, namely:\\n- The technical contributions are minor.\\n- The theoretical discussion (in particular regarding absorbing states) is quite shallow.\", \"the_merits_of_the_paper_are\": [\"Good results due to off-policy learning\", \"Raising awareness and providing a fix for a common pitfall\", \"I think that the problems arising from incorrectly treated absorbing states need to be discussed more profoundly.\"], \"some_suggestions\": \"Section 3.1\\n\\\"As we discuss in detail in Section 4.2 [...]\\\"\\nI think this should refer to section 4.1. Also, the discussion in section 4.1 should be a bit more detailed. How do common implementations implicitly assign zero rewards? Which implementations are affected? Which papers published inferior results due to this bug? 
I think it is also important to note that absorbing states are hidden from the algorithm and that the reward function is thus only applied to non-absorbing states.\\n\\n\\\"We will demonstrate empirically in Section 4.1 [...]\\\"\\nThe demonstration is currently missing. I think it would be nice to illustrate the problem with a simple example. The original example might actually work, as shown by the code example of the rebuttal; however, the explanation was not convincing. Maybe it would be easier to argue with a simpler algorithm (e.g. MaxEnt-IRL, potentially projecting the rewards to positive values)?\\n\\nSection 3.1 seems to focus too much on resets that are caused by time limits. Such resets are inherently different from terminal states such as falling down in locomotion tasks, because they cannot be modelled with the given MDP formulation unless time is considered part of the state. Indeed, I think that for infinite horizon MDPs without time-awareness, time limits cannot be modelled using absorbing states (I think the RL book fails to mention that time needs to be part of the state such that the policy remains Markovian, which is a bit misleading). Instead, those resets are often handled by returning an estimate of the future return (bootstrapping). This treatment of time limits is already part of the TD3 implementation and, as far as I understood, not the focus of the paper. Instead, section 3.1 should focus on resets caused by task failure/completion, which can actually be modelled with absorbing states, because the agent will always transition to the absorbing state when a terminal state is reached, which is in line with Markovian dynamics.\\n\\nSection 4.2 should also add a few more details. Did I understand correctly that, when computing the return R_T, the sum is indeed finite and stopped after a fixed horizon? If yes, this should be reflected in the equation, and the horizon should be mentioned in the paper. 
The paper should also better explain how the proposed fix enables the algorithm to learn the reward of the absorbing state. For example, section 4.2 does not even mention that the state s_a was added as part of the solution. \\n\\n\\n-------------\\nUpdate 22.11.2018\\nBy highlighting the difference between termination due to time limits and termination due to task completion, and by better describing how the proposed fix addresses the problem of reward bias that is present in common AIL implementations, the newest revision further improves the submission. \\nI think that the submission can get accepted and I adapted my rating accordingly.\", \"minor\": \"The conclusion should also somehow squeeze in that the reward biases are caused by the implementations.\\nTypo in 4.2: \\\"Thus, when sample[sic] from the replay buffer AIL algorithms will be able to see absorbing states there[sic]\\nwere previous hidden, [...]\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper on the challenges of GAIL\", \"review\": \"This paper investigates two issues regarding Adversarial Imitation Learning. They identify a bias in commonly used reward functions and provide a solution to this. Furthermore, they suggest improving sample efficiency by introducing an off-policy algorithm dubbed \\\"Discriminator-Actor-Critic\\\". The key point here is that they propose a replay buffer to sample transitions from.\\n\\nIt is well written and easy to follow. The authors are able to position their work well within the existing literature and point out the differences.\", \"pros\": [\"Well written\", \"Motivation is clear\", \"Example on biased reward functions\", \"Experiments are carefully designed and thorough\"], \"cons\": [\"The analysis of the results in section 5.1 is a bit short\"], \"questions\": [\"You provide a pseudo code of your method in the appendix where you give the loss function. I assume this corresponds to Eq. 2. Did you omit the entropy penalty or did you not use that term during learning?\", \"What's the point of plotting the reward of a random policy? It seems you're using it as a lower bound, making it zero. I think it would benefit the plots if you just mention it instead of plotting the line and having an extra legend\", \"In Fig. 4 you show results for DAC, TRPO, and PPO for the HalfCheetah environment in 25M steps. Could you also provide this for the remaining environments?\", \"Is it possible to show results of the effect of absorbing states on the Mujoco environments?\"], \"minor_suggestions\": \"In Eq. (1) it is not clear what is meant by pi_E. From context we can assume that E stands for the expert policy. Maybe add that. Figures 1 and 2 are not referenced in the text and their respective captions are very short. Please reference them accordingly and maybe add a bit of information. 
In section 4.1.1 you reference figure 4.1 but I think you're talking about figure 3.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"RE: Another paper with off policy imitation learning\", \"comment\": \"Thank you for sharing the link. The arxiv paper linked is concurrent work. As such, our off-policy algorithm was novel at the time of release and remains a primary contribution of this work. We will add this paper to the related work section as concurrent work in the next update.\\n\\nThe requested ablation study is already presented in Fig. 6 and Fig. 7, where we compare adversarial imitation learning approaches with and without the absorbing states. Due to the bias present in the original reward, the baseline without absorbing state information fails to learn a good policy. We derive why this happens in Section 4.1. \\n\\nAlso, we would like to emphasize that our paper is not limited to off-policy training but also addresses other issues of adversarial imitation learning algorithms. We first identify the problem of biased rewards, which we then experimentally validate across GAIL and AIRL (note that the other paper is centered around GAIL, and not adversarial imitation learning in general). Following that, we introduce absorbing states as a fix for this issue, while empirically validating that our proposed solution solves tasks which are unsolvable by AIRL.\"}",
"{\"title\": \"RE: Response\", \"comment\": \"Thank you again for your comments.\\n\\n1. We did not have sufficient time to collate these results before the deadline, but we will add them to the appendix for a future revision.\\n\\n2. In Fig. 7, we run the absorbing state versus non-absorbing state experiments on the more standard Hopper and Walker2D environments. We understand those experiments are with the AIRL algorithm and it would be more comprehensive if we ran the same experiment with the GAIL algorithm and the environments from Fig. 4. However, we were constrained by the page limits and chose to show how our fix to the reward bias not only works across different adversarial algorithms (GAIL in Fig. 6 and AIRL in Fig. 7) but also works on demonstrations collected from humans on a Kuka arm. We will add the figures for the experiments you mentioned in the comment to the next version of the paper.\"}",
"{\"comment\": \"I appreciate the authors' response to the comments, and it did address some of my concerns. However, I still have some questions:\\n\\n1. Could the authors provide the comparisons among DAC, GAIL w/ PPO, and GAIL w/ TRPO for 25M steps for all the 5 tasks (in Fig.4)?\\n\\n2. Why do the authors only evaluate such no-absorbing experiments on KUKA tasks? Could the authors provide the results of this baseline on the 5 tasks used in Fig.4?\", \"title\": \"Response\"}",
"{\"comment\": \"There is another paper which has also combined off-policy training with imitation learning.\\nThe only significant contribution of this paper then seems to be unbiased rewards. \\nI think the authors should provide a more rigorous analysis of what exact effects the absorbing state introduces.\", \"https\": \"//arxiv.org/pdf/1809.02064.pdf\", \"title\": \"Another paper with off policy imitation learning\"}",
"{\"title\": \"Original GAIL results and ablation experiments\", \"comment\": \"Thank you for your comments.\\n\\nAt the moment, we plot results only for 1 million steps. In the original implementation of GAIL, the authors use 25M steps to report the results. With 25M steps we are able to replicate results reported in the original GAIL paper. We do have one example of how the methods compare to each other when trained for 25M steps in our submission. This can be seen in the top left sub-plot in Figure 4. We will add the plots with 25M steps in the next update of the paper.\\n\\nWe perform ablation experiments and visualize the results in Figure 6. The \\u2018no absorbing\\u2019 baseline corresponds to off-policy GAIL while the red line corresponds to DAC. Thanks for pointing this out. We will add a clarification in the text to make the comparison clearer.\"}",
"{\"comment\": \"I found there is a significant gap between the performances of GAIL reported by the authors and those stated in the original GAIL paper (https://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning-supplemental.zip). Since the authors emphasized that they use the original implementation (https://www.github.com/openai/imitation), such empirical results could be doubtful. Can the authors comment on that?\\n\\nAnother comment is about the sufficiency of the experiments. Since DAC is a combination of an improved adversarial reward learning mechanism and off-policy training, ablation evaluations are needed to clarify which part actually accounts for the improvement in performance or training efficiency. Moreover, I think GAIL with off-policy training should also be a baseline to further validate whether the unbiased reward learning introduced by the authors could eliminate the sub-optimality.\", \"title\": \"Some comments on the persuasiveness and sufficiency of the experiments\"}",
]
} |
|
Syfz6sC9tQ | Generative Feature Matching Networks | [
"Cicero Nogueira dos Santos",
"Inkit Padhi",
"Pierre Dognin",
"Youssef Mroueh"
] | We propose a non-adversarial feature matching-based approach to train generative models. Our approach, Generative Feature Matching Networks (GFMN), leverages pretrained neural networks such as autoencoders and ConvNet classifiers to perform feature extraction. We perform an extensive number of experiments with different challenging datasets, including ImageNet. Our experimental results demonstrate that, due to the expressiveness of the features from pretrained ImageNet classifiers, even by just matching first order statistics, our approach can achieve state-of-the-art results for challenging benchmarks such as CIFAR10 and STL10. | [
"Generative Deep Neural Networks",
"Feature Matching",
"Maximum Mean Discrepancy",
"Generative Adversarial Networks"
] | https://openreview.net/pdf?id=Syfz6sC9tQ | https://openreview.net/forum?id=Syfz6sC9tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgVyDYNlE",
"B1ghAMebxV",
"S1eBhca0JV",
"BkefMVTAkN",
"HyeRBO3RJE",
"ByeVlu2AyV",
"r1liB12A1V",
"B1lFdFqHkN",
"S1grfUFBJN",
"SyefTUDBkN",
"H1eUWZyryN",
"HklWGPEQJE",
"S1eBdGFMkN",
"HketY5fDRQ",
"SJg3r9MwC7",
"Syx0-9GwAQ",
"SJg0BnM7R7",
"rkeSiI7z07",
"SyeV-VyzAX",
"ByxhmrsOaX",
"HygWkrouTQ",
"HkeAiVjdpX",
"H1lk0MjOaX",
"Hyx75Mo_pX",
"Hkln8-idpX",
"S1gDNyd0hm",
"HylSrUITnm",
"rkx9_6p42Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545012956394,
1544778451592,
1544637101234,
1544635401712,
1544632389520,
1544632300022,
1544630083329,
1544034673138,
1544029708519,
1544021689796,
1543987453847,
1543878409008,
1543832173304,
1543084673384,
1543084611922,
1543084549694,
1542822982221,
1542760092825,
1542743035684,
1542137124084,
1542137049160,
1542136998227,
1542136519092,
1542136459279,
1542136147795,
1541467950600,
1541396029387,
1540836722300
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper783/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"~Suman_Ravuri1"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/Authors"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper783/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a method of training implicit generative models based on moment matching in the feature spaces of pre-trained feature extractors, derived from autoencoders or classifiers. The authors also propose a trick for tracking the moving averages by appealing to the Adam optimizer and deriving updates based on the implied loss function of a moving average update.\\n\\nIt was generally agreed that the paper was well written and easy to follow, that empirical results were good, but that the novelty is relatively low. Generative models have been built out of pre-trained classifiers before (e.g. generative plug & play networks), and feature matching losses for generator networks have been proposed before (e.g. Salimans et al, 2016). The contribution here is mainly the extensive empirical analysis plus the AMA trick.\\n\\nAfter receiving exclusively confidence score 3 reviews, I sought the opinion of a 4th reviewer, an expert on GANs and GAN-like generative models. Their remaining sticking points, after a rapid rebuttal, are with possible degeneracies in the loss function and class-level information leakage from pre-trained classifiers, meaning these results are not properly \\\"unconditional\\\". The authors rebutted this by suggesting that unlike Salimans et al (2016), there is no signal backpropagated from the label layer, but I find this particularly unconvincing: the objective in that work maximizes a \\\"none-of-the-above\\\" class (and thus minimizes *all* classes). The gradient backpropagated to the generator is uninformative about which particular class a sample should imitate, but the features learned by the discriminator needing to discriminate between classes shape those gradients in a particular way all the same, and the result is samples that look like distinct CIFAR classes. 
In the same way, the gradients used to train GFMN are \\\"shaped\\\" by particular class-discriminative features when trained against a classifier feature extractor.\\n\\nFrom my own perspective, while there is no theory presented to support why this method is a good idea (why matching arbitrary features unconnected with the generative objective should lead to good results), the idea of optimizing a moment matching objective in classifier feature space is rather obvious, and it is unsurprising that with enough \\\"elbow grease\\\" it can be made to work. The Adam moving average trick is interesting but a deeper analysis and ablation of why this works would have helped convince the reader that it is principled. \\n\\nThis paper was very much on the borderline. Aside from quibbles over the fairness of comparisons above, I was forced to ask myself whether I could imagine that this would be a widely read, influential, and frequently cited piece of work. I believe that the carefully done empirical investigation has its merits, but that the core ideas are rather obvious and the added novelty of a poorly understood stabilized moving average is not enough to warrant acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"A well-executed piece of empirical work with unfortunately little to offer in the way of novel ideas.\"}",
"{\"title\": \"Change in review\", \"comment\": \"After carefully reading the authors' response, I change my score from 5 to 6. I think this paper has value for generative models without adversarial training, but from a contribution perspective, I don't agree with the authors' comment \\\"Please note that all MMD methods are similar in the sense that they all perform moment matching. Using the same line of argumentation, you would also argue that all papers presenting new GANs are very similar to each other because all of them use the same adversarial strategy.\\\" Using pretrained neural networks as a kernel function as the key contribution is not strong enough to convince me from a novelty perspective. It may be new in a specific small field, but in general, this methodology is not novel.\\n\\nI think the two tricks (pretrained model + AMA) have their value, but I am not sure whether they will bring inspiration to the whole generative model community.\"}",
"{\"title\": \"Reply to \\\"No change in assessment\\\"\", \"comment\": \"Dear Reviewer,\\n\\nCould you please elaborate on the basis for rating our paper a 6? Your review only mentions positive things about the paper and points out one typo (which was corrected), but does not provide any reasons or justifications for giving a low score of 6 (weak accept). What do you think would have been acceptable for you to reach a rating of 7 or higher?\\n\\nOur paper presents solid work backed by strong experimental results, many ablation studies and discussions. From your review, it is hard to know what would be missing in our paper that would change your point of view.\\n\\nBest, \\nthe authors.\"}",
"{\"title\": \"No change in assessment\", \"comment\": \"Thanks for updating the paper. Considering this version, my review still stays the same as a Weak Accept.\"}",
"{\"title\": \"No feedback after paper revision.\", \"comment\": \"Dear reviewer,\\n\\nWe have incorporated your typo corrections in our revision of the paper, along with some additional discussions, ablation experiments and more details about Adam Moving average based on other reviewers' feedback. Could you please let us know if this helped you in better assessing our paper and in influencing your rating of the paper?\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"No feedback after paper revision.\", \"comment\": \"Dear Reviewer, \\n\\nWe have incorporated your feedback in our revision of the paper and added detailed clarifications for all your questions/comments in our reply. Could you please let us know if this helped you in better assessing our paper and in influencing your rating of the paper?\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Reply to \\\"No change in the review\\\"\", \"comment\": \"Dear reviewer,\\n\\nCould you please elaborate on the basis for rating our paper a 6? Your review only mentions positive things about the paper and points out some typos (which were corrected), but does not provide any reasons or justifications for giving a low score of 6 (weak accept). What do you think would have been acceptable for you to reach a rating of 7 or higher?\\n\\nOur paper presents solid work backed by strong experimental results, many ablation studies and discussions. From your review, it is hard to know what would be missing in our paper that would change your point of view.\\n\\nBest, \\nthe authors.\"}",
"{\"title\": \"Thanks for the reply.\", \"comment\": \"Thanks for your prompt answer and for acknowledging that. If there is any additional clarification that we can give in order to further improve your rating of the paper, please let us know.\"}",
"{\"title\": \"-\", \"comment\": \"> (2) Our paper already contains ablation experiments where we show the performance of generators that were trained with a different number of layers employed to feature matching.\\n\\nIndeed -- sorry, I'd forgotten about these results.\"}",
"{\"title\": \"Further clarifications to AnonReviewer4\", \"comment\": \"We would like to thank you for considering our response and updating your review. We really appreciate your feedback.\", \"regarding_your_remaining_concerns\": \"(2) Our paper already contains ablation experiments where we show the performance of generators that were trained with a different number of layers employed to feature matching. Please check our experimental results in Table 2 (pg. 7), Figs. 2.d, 2.e, 2.f (pg. 6) and Appendix A.7 (Fig. 8, pg. 17). These ablation experiments are clear evidence that training with just part of the features is sub-optimal; the best performance is achieved when all the features are used. Training with the features of one single layer (independent of the layer) leads to very poor results. Although we present results for VGG19/Resnet18 classifiers only, the same behavior is true for the case of the encoder feature extractor.\\n\\n(6) We agree on the difficulty of a comparison between a GAN with a learned discriminator and ImageNet-based feature extractors. However, please note that we did additional experiments where we evaluated whether WGAN-GP could benefit from initializing the discriminator with DCNN classifiers pretrained on ImageNet. This was the only strategy that we could come up with to somehow feed GANs with the same ImageNet feature extractors that GFMN uses. In Appendix A.8 (pgs. 17/18) we present experimental results and discussions regarding this experiment. Although we tried different hyperparameter combinations, we were not able to successfully train WGAN-GP with VGG19 or Resnet18 discriminators. It seems that the discriminator, being pretrained on ImageNet, can quickly learn to distinguish between real and fake images. 
This limits the reliability of the gradient information from the discriminator, which in turn renders the training of a proper generator extremely challenging.\\n\\nAgain, we believe that our use of ImageNet classifiers is fair game and our paper should not be penalized because of this. GFMN's good performance for CIFAR10 and CelebA is a good demonstration that it excels even with cross-domain feature extractors.\"}",
"{\"title\": \"-\", \"comment\": \"Thank you for the detailed responses and additional experiments. I have updated my review above.\"}",
"{\"title\": \"Reply to \\\"Interesting Paper\\\"\", \"comment\": \"Hi Suman Ravuri,\\u00a0\\n\\u00a0\\nThank you for the positive comments and suggestions.\\u00a0 We are glad that you think the paper is \\u201cthought-provoking\\u201d; this was indeed one of the intentions of the work.\\n\\u00a0\\nRegarding your suggestion of \\u201creplacing AMA with a simple moving average of means/variances and dividing each feature by the MA of the standard deviation\\u201d, we just tried that, but the results are quite similar to those of the simple MA.\\n\\nNote that in the Adam moving average the variance terms are not computed on the features but on the gradient of the least-squares loss between v_j and the difference of means Delta_j = mean_real - mean_fake (Eq. 4). This gradient is \\\"v_j - Delta_j\\\".\\u00a0 The improvements are mainly due to the stability that Adam offers to the optimization in the non-stationary setting (Delta_j changes as we update the generator). Also note that our goal is not to get a better estimate of each feature mean (real and fake) alone; rather, we would like to get a better estimate of the mean differences (mean_real - mean_fake). This is arguably better, since the fake mean is non-stationary, and errors in the estimation of the single means may add up in their difference.\\u00a0\\n\\nThank you for the clarification regarding the training of your models. We will adjust our text as soon as we have the opportunity to upload a new version of the paper.\\u00a0 We trained our models using K80/40 GPUs, which contain less memory than a V/P100. That\\u2019s why we thought your model (which uses significantly more moments) would need more resources than what we used.\\n\\u00a0\\nThank you for pointing out the typo; it will be corrected in the new version of the text.\"}",
"{\"comment\": \"This was a thought-provoking paper, and it was pretty surprising that the authors could train a generator using relatively few features (< 1M). One important part of the algorithm seems to be the ADAM moving average, which the authors claim is more stable than the simple moving average (MA). Perhaps more important than the stability is that ADAM moving averages are divided by the square root of the second moment. I think that the scaling is probably what makes AMA work much better than MA. It would be interesting to see if replacing AMA with a simple moving average of means/variances and dividing each feature by the MA of the standard deviation could achieve the same effect.\\n\\nAlso, to clear up a minor misunderstanding -- that \\\"MoLM-1536 can be used in large-scale environments only, while GFMN can be used in single GPU environments.\\\" --, all of our MoLM experiments used a single GPU (though that GPU was a Tesla V/P100).\\n\\nAlso, minor typo: pg. 7 \\\"Yhere is a large boost\\\" -> \\\"There is a large boost\\\"\", \"title\": \"Interesting Paper\"}",
"{\"title\": \"Reply to AnonReviewer4 (Part 1 / 3):\", \"comment\": \"We would like to thank the reviewer for the questions and comments. \\nIn order to better address your concerns, we have uploaded a new version of the paper that contains three new appendices: A.11, A.12 and A.13. Please see below our detailed reply to your questions/comments. We believe that the newly added appendices and the clarifications given in our reply will address all the concerns and misconceptions of the reviewer. \\n\\n1) About the loss function and novelty:\", \"authors\": \"Please note that our loss does not perform matching using a single feature map; it uses many layers to do feature matching, and this is crucial to prevent such degeneracy. As we are effectively matching on f_1, ..., f_m, where the f_i are features extracted from different layers, we are matching on different scales of the generated image. Hence the claim of the reviewer that nothing prevents degeneracy in the objective is not true, and this is not an artifact of the features; it is due to the multi-scale nature of the features! This multi-feature matching on multiple scales regularizes the learning of the generator. We quote here [1]: \\\"representations obtained across the layers of a CNN increasingly capture the statistical properties of natural images, producing impressive texture synthesis results\\\".\\n\\nNote that this multi-feature matching has also been exploited in end-to-end style transfer and super-resolution [1], but it is novel in the generative modeling context. In [1] for instance, the authors show that by matching CNN layers of a pretrained network one can do super-resolution; we push this observation further by showing that those multi-scale features are also sufficient statistics for generation. 
Moreover, our empirical results on generation from these deep priors of pretrained features complement the theoretical results in [2], which gives theoretical guarantees for signal recovery from features of deep networks in inverse problems. \\n\\n[1] Bruna et al. Super-Resolution with Deep Convolutional Sufficient Statistics. ICLR 2016, https://arxiv.org/pdf/1511.05666.pdf\\n[2] Hand et al. Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk Minimization. https://arxiv.org/pdf/1705.07576.pdf\"}",
"{\"title\": \"Reply to AnonReviewer4 (Part 2 / 3):\", \"comment\": \"3) Is Adam moving average biased?\", \"authors\": \"We have added Appendix A.11 \\u201cSampling from the Pretrained Decoder vs. Sampling from GFMN models\\u201d where we present a comparison between images generated by sampling from the decoder (with no additional training from GFMN) and images generated by GFMN. We can see in Fig 12 that, as expected, when sampling directly from decoders we obtain completely noisy images for the three datasets we experimented with: CelebA, CIFAR10 and STL10. Moreover, as you mentioned, our results without G initialization, 7.67 / 23.5 (IS/FID), are quite close to the results that use G initialization, 7.99 / 23.1 (IS/FID), which demonstrates that G initialization is responsible for a very small fraction of the result. Therefore, we can definitively rule out the reviewer's concern that \\u201cmost of the result relies on *autoencoder* training rather than feature matching\\u201d. Generator initialization is an interesting positive feature of GFMN, but it is far from being the essential component of the method.\\n \\nRegarding the learning rate, please note that in the paper we state: \\u201cWhen using features from ImageNet classifiers, we set the learning rate to 1*10^-4\\u201d. This larger learning rate is used in both cases: G initialized and G not initialized. Therefore, we use the learning rate 1*10^-4 for training the models that produce our best results.\\n \\nThe difference of 0.05 in Inception Score for the result pointed out by the reviewer, 7.67 (ours) vs 7.72 (Warde-Farley & Bengio, 2017), is not statistically significant and should not be held against our paper.\\nOur results are at the same level as state-of-the-art GANs and other MMD methods. 
In fact, our quantitative results (including our models without G initialization) are far better than those of recent MMD GAN methods (as you can see in Table 3).\", \"the_reasoning_of_the_reviewer_is_flawed_regarding_no_need_of_a_large_minibatch_size\": \"\\\"... but this is not true: one could trivially compute the full exact dataset mean of these fixed features by accumulating a sum over the dataset (e.g., one image a time, with minibatch size 1) and then dividing the result by the number of images in the dataset\\\". This reasoning ignores a fundamental problem: during training, one also needs to compute the mean feature of the generator (fake) data, compute the loss, and backpropagate the errors to the generator. In order to perform the backpropagation, we need to keep in memory all the generated images and all the information about their forward step (for both G and the feature extractor), hence of course there is a memory problem.\\n\\nIn summary, as we show in Appendix A.12, Figure 13, one can indeed use an estimate of the full mean of the real dataset and apply the Adam moving average to \\\"mean_fake - gm\\\" (gm = global average of the real data), and GFMN still succeeds, which confirms that we are effectively optimizing the feature matching cost function; this rules out the reviewer's concerns. \\n\\n\\n5) On sampling from decoder vs. sampling from GFMN; G initialization and learning rate:\"}",
"{\"title\": \"Reply to AnonReviewer4 (Part 3 / 3):\", \"comment\": \"6) On ImageNet vs. CIFAR10; conditional generation; IS/FID metrics and experimental results:\", \"authors\": \"The reviewer's concerns are based on misconceptions. Below we list some facts that will help to rule out the reviewer's concerns:\\n\\n(a) the label overlap between ImageNet and CIFAR10 is not direct at all. ImageNet uses fine-grained labels, while CIFAR10 uses 10 labels only. For instance, while ImageNet has dozens of classes for dogs (one for each breed), CIFAR10 groups all dog images in a single class (see [3], Appendix F). Labeling the data in different ways produces quite different classification problems. Moreover, the images in CIFAR10 are not a subset of ImageNet since they have different resolutions. It is completely safe to say that CIFAR10 is an out-of-domain dataset with regard to ImageNet;\\n\\n(b) the reviewer is ignoring the fact that we also have good results for an extreme case of an out-of-domain dataset: CelebA. There is no overlap between ImageNet and CelebA, and the images from these two datasets are quite different. Nevertheless, the VGG19 classifier produces much better results than the autoencoder pretrained on CelebA. In the new Appendix A.13, Fig 14, we show additional results that support this fact. These results are clear evidence that the VGG19 classifier is just a much better feature extractor than autoencoders, which is also the real reason for our boost in performance for CIFAR10 (and not \\\"label overlap\\\"). There is a long literature (partly cited in our paper) showing the effectiveness of the VGG19 ImageNet classifier as a feature extractor;\\n\\n(c) ImageNet classifiers, and the VGG classifier in particular, are the default choice for feature extraction in computer vision tasks. 
We are sure that all the reviewers would have complained about our work if we had not used ImageNet-based feature extractors;\\n\\n(d) our use of VGG19/Resnet18 ImageNet classifiers has no impact on the metrics used, for different reasons: (d.1) IS and FID are computed using the default tensorflow Inception model trained with images of size 299x299, while the classifier (the feature extractor) that we use in our experiments was trained using images of size 32x32, as informed in Appendix A.3. Our ImageNet VGG19 classifier has a top-1 accuracy of 29.14% while the Inception net has a top-1 accuracy of about 79%. In summary, our classifier is completely different in many crucial aspects when compared to the Inception classifier used to compute IS and FID; (d.2) we use the classifier as a feature extractor only; no log-likelihoods from the classifier are used in our objective function;\\n\\n(e) we do not perform conditional generation. GAN-based methods that perform conditional generation use direct feedback from the labels in the form of log-likelihoods from the discriminator (using the k+1 trick from Salimans et. al 2016) or from an auxiliary classifier. On the contrary, our generator is trained with a loss function that performs feature matching only; there is no feedback in the form of a log-likelihood from the labeled data. Our generator is agnostic to the labels (no one-hot vectors are concatenated to the noise) and it is only distilling the knowledge of the multi-layer feature space, without explicitly taking advantage of any labeling that went into the training of this feature space;\\n\\n(f) ProGAN (Karras et al., 2017) uses a generator architecture that contains residual connections and is deeper and more complex than the DCGAN-like architecture that we use in our CIFAR10 experiments. Their better performance is due mainly to the generator's architecture and the tricks used to train such an architecture. 
It is unfair to compare our results with those of a method that uses a bigger and more complex generator. We were careful and fair in including in Table 3 the results for very recent and relevant work that uses generator architectures similar to ours. By the way, the trick of progressively growing the generator can also be applied in GFMN and would likely increase our performance. But this experiment is completely out of the scope of the current paper;\\n\\n[3] Oliver et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms. ArXiv 2018. https://arxiv.org/pdf/1804.09170.pdf\"}",
"{\"title\": \"-\", \"review\": \"This paper proposes to learn implicit generative models by a feature matching objective which forces the generator to produce samples that match the means of the data distribution in some fixed feature space, focusing on image generation and feature spaces given by pre-trained image classifiers.\\n\\nOn the positive side, the paper is well-written and easy to follow, the experiments are clearly described, and the evaluation shows the method can achieve good results on a few datasets. The method is nice in that, unlike GANs and the stability issues that come with them, it minimizes a single loss and requires only a single module, the generator.\\n\\nOn the other hand, the general applicability of the method is unclear, the novelty is somewhat limited, and the evaluation is missing a few important baselines. In detail:\\n\\n1) The proposed objective was used as a GAN auxiliary objective in [Salimans et al., 2016] and further explored in [Warde-Farley & Bengio, 2017]. The novel bit here is that the proposed objective doesn\\u2019t include the standard GAN term (so no need for an adversarially-optimized discriminator), and the feature extractor is a fixed pre-trained classifier or encoder from an auto-encoder (rather than a discriminator). \\n\\n2) The method only forces the generator\\u2019s sample distribution to match the first moment (the mean) of the data distribution. While the paper shows that this can result in a generator that produces reasonably good samples in practice, it seems like this may have happened due to a \\u201clucky\\u201d artifact of the chosen pre-trained feature extractors. For example, a degenerate generator that produces a single image whose features exactly match the mean would be a global optimum under this objective, equally good as a generator that exactly matches the data distribution. 
Perhaps no such image exists for the chosen pre-trained classifiers, but it\\u2019s nonetheless concerning that the objective does nothing to prevent this type of behavior in the general case. (This is similar to the mode collapse problem that often occurs with GAN training in practice, but at least a GAN generator is required to exactly match the full data distribution to achieve the global optimum of that objective.)\\n\\n3) It\\u2019s unclear why the proposed ADAM-based Moving Average (AMA) updates are appropriate for estimating the mean features of the data distribution. Namely, unlike EMA updates, it\\u2019s not clear that this is an unbiased estimator (I suspect it\\u2019s not); i.e. that the expectation of the resulting estimates is actually the true mean of the dataset features. It\\u2019s therefore not clear whether the stated objective is actually what\\u2019s being optimized when these AMA updates are used.\\n\\n4) Related to (3), an important baseline which is not discussed is the true fixed mean of the dataset distribution. In Sec. 2.4 (on AMA) it\\u2019s claimed that \\u201cone would need large mini-batches for generating a good estimate of the mean features...this can easily result in memory issues\\u201d, but this is not true: one could trivially compute the full exact dataset mean of these fixed features by accumulating a sum over the dataset (e.g., one image a time, with minibatch size 1) and then dividing the result by the number of images in the dataset. Without this baseline, I can\\u2019t rule out that the method only works due to its reliance on the stochasticity of the dataset mean estimates to avoid the behavior described in (2), or even the fact that the estimates are biased due to the use of ADAM as described in (3).\\n\\n5) The best results in Table 3 rely on initializing G with the weights of a decoder pretrained for autoencoding. 
However, the performance of the decoder itself with no additional training from the GFMN objective is not reported, so it\\u2019s possible that most of the result relies on *autoencoder* training rather than feature matching to get a good generator. This explanation seems especially plausible due to the fact that the learning rate is set to a minuscule value (5*10^-6 for ADAM, 1-2 orders of magnitude smaller than typical values). Without the generator pretraining, the next best CIFAR result is an Inception Score of 7.67, lower than the unsupervised result from [Warde-Farley & Bengio, 2017] of 7.72.\\n\\n6) It is misleading to call the results based on ImageNet-pretrained models \\u201cunconditional\\u201d -- there is plenty of overlap in the supervision provided by the labeled images of the much larger ImageNet to CIFAR and other datasets explored here. This is especially true given that the reported metrics (Inception Score and FID) are themselves based on ImageNet-pretrained classifiers. If the results were instead compared to prior work on conditional generation (e.g. ProGAN (Karras et al., 2017), which reports CIFAR IS of 8.56), there would be a clear gap between these results and the state of the art.\\n\\nOverall, the current version of the paper needs additional experiments and clarifying discussion to address these issues.\\n\\n=======================================\\n\\nREVISION\\n\\nBased on the authors' responses, I withdraw points 3-5 from my original review. Thanks to the authors for the additional experiments. On (3), I indeed misunderstood where the moving average was being applied; thanks for the correction. On (4), the additional experiment using the global mean features for real data convinces me that the method does not rely on the stochasticity of the estimates. (Though, given that the global mean works just as well, it seems like it would be more efficient and arguably cleaner to simply have that be the main method. 
But this isn't a major issue.) On (5), I misread the learning rate specified for \\\"using the autoencoder features\\\" as being the learning rate for autoencoder *pretraining*; thanks for the correction. The added results in Appendix 11 do show that the pretrained decoder on its own does not produce good samples.\\n\\nMy biggest remaining concerns are with points (2) and (6) from my original review.\\n\\nOn (2), I did realize that features from multiple layers are used, but this doesn't theoretically prevent the generator from achieving the global minimum of the objective by producing a single image whose features are the mean of the features in the dataset. That being said, the paper shows that this doesn't tend to happen in practice with existing classifiers, which is an interesting empirical contribution. (It would be nice to also see ablation studies on this point, showing the results of training against features from single layers across the network.)\\n\\nOn (6), I'm still unconvinced that making use of ImageNet classifiers isn't providing something like a conditional training signal, and that using such classifiers isn't a bit of an \\\"unfair advantage\\\" vs. other methods when the metrics themselves are based on an ImageNet classifier. I realize that ImageNet and CIFAR have different label sets, but most if not all of the CIFAR classes are nonetheless represented -- in a finer-grained way -- in ImageNet. If ImageNet and CIFAR were really completely unrelated, an ImageNet classifier could not be used as an evaluation metric for CIFAR generators. (And yes, I saw the CelebA results, but for this dataset there's no quantitative comparison with prior work, and qualitatively, I can't tell whether the results are as good as or better than the three-year-old DCGAN results.)\\n\\nOn the other hand, given that the approach relies on these classifiers, I don't have a good suggestion for how to control for this and make the comparison with prior work completely fair. 
Still, it would be nice to see acknowledgment and discussion of this caveat in a future revision of the paper.\\n\\nOverall, given that most of my concerns have been addressed with additional experiments and clarification, and that the paper is well-written and has some interesting results from its relatively simple approach, I've raised my rating to above acceptance threshold.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply to \\\"Concerning the theoretical guarantees\\\"\", \"comment\": \"Thank you for your interest in our work.\\n\\u00a0\\nWe agree with you that one can use the tensor product or sum of RBF kernel features and the pretrained CNN features. However, while this would give some theoretical guarantees as studied in Li et al (2017) [MMD GAN], it would result in an impractical algorithm since the dimensionality of our feature space is very large (hundreds of thousands of features, which would render the tensor product even more expensive).\\u00a0\\nOne can also compose the pretrained features with RBF kernels as in Li et al (2017) [MMD GAN] and Li et al (2015) [GMMN], but this would also be computationally expensive (because of the large feature space of pretrained networks) and would also require large minibatches to estimate the MMD, and we would lose all the appealing practical advantages of GFMN. The goal of the paper is to show that using pretrained features with only a linear kernel is enough for training state-of-the-art generators. In other words, we show that the pretrained deep CNN features provide sufficient statistics for image generation.\\n\\u00a0\\nMoreover, please note that one of the most interesting properties of our approach is that it circumvents the cumbersome min/max game in adversarial methods. If we used the (adversarial) MMD GAN loss as you suggested, we would have to train under the problematic min/max game. Additionally, as you can see in Table 3, our Inception Score for CIFAR10 is significantly better than the Inception Score from Li et al (2017) [MMD GAN], which shows that, in practice, our proposed approach is very effective.\"}",
"{\"comment\": \"It seems the method proposed in the paper is to use the features of a pretrained network to build a kernel for moment matching. Of course, these features being finite dimensional, the associated kernel is not necessarily universal and therefore can't provide any convergence guarantee for the method.\\nFortunately, this might be fixable by taking a Kronecker sum of these features and, for example, the features of an RBF kernel (or another characteristic kernel). This translates in practice into a simple sum of kernels, which means using something like the MMD GAN loss as a regularizer.\", \"title\": \"Concerning the theoretical guarantees\"}",
"{\"title\": \"Answer to AnonReviewer1 (Part 1/3)\", \"comment\": \"We would like to thank the reviewer for the detailed questions, comments and suggestions. We believe they have helped us improve the quality of the paper. We have made substantial changes to the text in order to address your questions/comments/suggestions.\\nIn the post addressed to all reviewers, we give a detailed description of the main changes in the new version of the paper. We kindly ask the reviewer to take a look at the post as well as the new version of the paper.\\nPlease see below our answers to your questions/comments.\\n\\u00a0\\nRegarding contribution (1):\", \"rev\": \"\\u201cThe proposed loss (the same as (Salimans et al., 2016)) only try to matching first-order momentum. So I assume it is insensitive to higher-order statistics. Does it less successful at producing samples with high visual fidelity?\\u201d\\n\\u00a0\\nIt is not quite correct to say that our proposed approach is insensitive to higher-order statistics. Note that using a pretrained DCNN to extract features is equivalent to using a highly non-linear kernel function to map the data into a very high dimensional space (hundreds of thousands of dimensions in our case). What we show in our experiments is that, in this very high dimensional space, matching first-order statistics is already enough to achieve state-of-the-art results. We demonstrate empirically that our strategy is more efficient and effective than methods such as GMMN, which use a Gaussian kernel to match all the moments. The problem with using a Gaussian kernel is that it requires a very large minibatch size in order to produce good estimates, which is not feasible in practice (Li et al., 2017).\\u00a0\\n\\u00a0\\nRegarding visual fidelity, our method produces images that are better or on par with those of other systems that use similar generator architectures, similar computational resources and do not use conditional generation. 
For instance, if you compare the quality of the (unconditional) generated images using ImageNet Dogs, you will see that our results are much better than those of (Salimans et. al, 2016) and (Zhao et al., 2017). It is not fair to compare our results with large-scale experiments or with systems that explicitly model conditional generation.\", \"compared_to_other_mmd_approaches\": \"(2) we present far better quantitative results than GMMN (Li et al., 2015). Additionally, the new Appendix A.10 shows a visual comparison between GMMN and GFMN results. The main reason why GFMN results are significantly better than GMMN's is that GFMN uses a strong, robust kernel function (a pretrained DCNN), which, together with our AMA trick, allows stable and effective training with small minibatches. On the other hand, the Gaussian kernel used in GMMN requires a very large minibatch size in order to work well, which is impractical due to memory limitations and computational cost;\\n\\u00a0\\n(3) Compared to recent adversarial MMD methods (Li et al., 2017; Bikowski et al., 2018), GFMN presents significantly better results while avoiding the problematic min/max game;\\n\\u00a0\\n(4) GFMN achieves similar results to the Method of Learned Moments (MoLM) (Ravuri et al., 2018), while using 50x fewer moments/features to perform matching. In other words, while MoLM can be used in large-scale environments only, GFMN can be used in single-GPU environments achieving the same or better performance.\"}",
"{\"title\": \"Answer to AnonReviewer1 (Part 2/3)\", \"comment\": \"Regarding contribution (2):\", \"rev\": \"\\u201cAppendix A.8 is very interesting / important to apply pre-trained network in GAN framework. However, it only say failed to train without any explanation.\\u201d\\n\\u00a0\\nWe have expanded Appendix A.8 with additional details/explanations on the reasons why WGAN-GP fails when we use a pretrained VGG19/Resnet18 to initialize the discriminator. In short, the discriminator, being pretrained on ImageNet, can quickly learn to distinguish between real and fake images. This limits the reliability of the gradient information from the discriminator, which in turn renders the training of a proper generator extremely challenging or even impossible. This is a well-known issue with GAN training (Goodfellow et al., 2014) where the training of the generator and discriminator must strike a balance. This phenomenon is covered in (Arjovsky et al., 2017) Section 3 (illustrated in their Figure 2) as one motivation for works on Wasserstein GANs.\", \"regarding_experiments\": \"\"}",
"{\"title\": \"Answer to AnonReviewer1 (Part 3/3)\", \"comment\": \"\", \"regarding_experiments\": \"\", \"rev\": \"\\u201cI think even it just comparable with GAN, it is interesting if there is no mode collapsing and easy to train. However, it has no proper imagenet results (it has a subset, but only some generated image shows here).\\u201d \\u201cThe empirical results could also be made more stronger by including more relevant baseline methods and more systematic study of the effectiveness of the proposed approach.\\u201d\\n\\u00a0\\nWe compare our results with Miyato (ICLR 2018), Bikowski (ICLR 2018) and Ravuri et al. (ICML 2018), which are very strong up-to-date baselines (all published in 2018) and were the state-of-the-art results by the time we submitted the paper.\\n\\u00a0\\nRegarding ImageNet results, note that in this paper we do not propose to perform conditional generation. All (very recent) papers reporting IS/FID for ImageNet perform conditional generation. Our ImageNet results should be compared with (Ravuri et al., 2018) for the Daisy portion, and (Salimans et. al, 2016; Zhao et al., 2017) for the ImageNet dogs portion. Again, it is not fair to compare our results with those from large-scale experiments or with systems that explicitly model conditional generation.\\n\\u00a0\\nNote that the focus of this paper is to demonstrate that we can train effective generative models without adversarial training by employing frozen pretrained neural networks. We selected benchmarks that have been used for the last three/four years by the generative modeling community: CIFAR10, CelebA, MNIST, STL10. We performed an extensive number of experiments, reported quantitative results using two metrics (IS and FID) and systematically assessed multiple aspects of the proposed approach:\\u00a0\\n(1)\\u00a0 We checked the advantage of AMA vs MA;\\n(2)\\u00a0 demonstrated the correlation of loss vs. 
image quality;\\n(3)\\u00a0 evaluated different methods for pretraining the feature extractor (autoencoding, classification);\\u00a0\\n(4)\\u00a0 checked different architectures for the feat. extractor (DCGAN, VGG19, Resnet18);\\n(5)\\u00a0 assessed the impact of the number of features/layers used;\\n(6)\\u00a0 evaluated in-domain and cross-domain feature extractors;\\u00a0\\n(7)\\u00a0 tested the benefit of initializing the generator;\\u00a0\\n(8)\\u00a0 evaluated the joint use of multiple feature extractors (VGG19 + Resnet18) for training the same generator;\\n(9)\\u00a0\\u00a0 performed experiments with WGAN-GP initialized with VGG19/Resnet18;\\n(10) presented results for different portions of ImageNet;\\n(11) presented a visual comparison of images generated by GFMN and GMMN;\\n(12) compared our results with state-of-the-art methods.\\nMoreover, the improved Sec. 4.2.3 now contains even more experiments and discussions regarding the advantages of using AMA.\\n\\u00a0\\nFinally, we would like to reinforce that our paper presents solid work backed by an extensive number of experiments and discussions. Moreover, as we present a method that provides evidence for the power of pretrained DCNN representations for learning generative models, we believe our work is a perfect fit for ICLR and is of great interest to its community.\\n\\nPlease let us know if you need any additional clarification that would help you to better evaluate the work and increase the overall rating.\"}",
"{\"title\": \"Answer to AnonReviewer2\", \"comment\": \"We would like to thank the reviewer for the positive feedback. We have carefully proofread the paper and addressed the typos pointed out by the reviewer. Additionally, we have greatly improved and expanded sections 2.4 and 4.2.3 and added the new section 4.3 as well as two new appendices (A.9 and A.10). In the post addressed to all the reviewers, we give more details about the main changes in the new version of the paper.\\n \\nWe would like to reinforce that our paper presents solid work backed by an extensive number of experiments and discussions. Moreover, as we present a method that demonstrates the power of pretrained DCNN representations for training generative models, we believe that our work is a perfect fit for ICLR and of great interest to its community.\\n\\nPlease let us know if you need any additional clarification that would help you to better evaluate the work and increase the overall rating.\"}",
"{\"title\": \"Answer to AnonReviewer3\", \"comment\": \"We would like to thank the reviewer for the positive feedback. We have done a careful proofreading of the paper and addressed the typos pointed out by the reviewer. Additionally, we have greatly improved and expanded sections 2.4 and 4.2.3 and added the new section 4.3 as well as two new appendices (A.9 and A.10). In the post addressed to all the reviewers, we give more details about the main changes in the new version of the paper.\\n\\nWe would like to reinforce that our paper presents solid work backed by an extensive number of experiments and discussions. Moreover, as we present a method that evidences the power of pretrained DCNN representations for training generative models, we believe that our work is a perfect fit for ICLR and of great interest to its community.\\n\\nPlease let us know if you need any additional clarification which would help you to better evaluate the work and increase the overall rating.\"}",
"{\"title\": \"We have submitted a revised version of the paper.\", \"comment\": \"In order to better address the questions and suggestions from the reviewers, we have uploaded a revised version of the paper. We have done a careful proofreading of the paper, corrected all the typos pointed out, and greatly improved/expanded some sections and added new ones, as follows:\\n\\n(1) We have improved and expanded \\u201cSec. 2.4. Matching Features with ADAM Moving Average\\u201d, which now includes a more detailed description of our proposed Adam Moving Average (AMA) and also brings more information regarding the motivation and intuition behind AMA;\\n\\n(2) We have improved and expanded \\u201cSec 4.2.3. Adam Moving Average and Training Stability\\u201d, which now contains more detailed experiments to further demonstrate the advantage of AMA over the simple Moving Average (MA). This section also contains more discussion of the AMA vs. MA experimental results;\\n\\n(3) We have added a new section \\u201c4.3 Discussion\\u201d, which contains a more thorough discussion of the experimental results and a comparison with state-of-the-art adversarial methods and MMD-based methods.\", \"we_have_added_two_new_appendices\": \"(1) \\u201cA.9. Impact of Adam Moving Average for VGG19 feature extractor.\\u201d, which presents experimental results that also indicate the advantage of AMA over MA when the VGG19 feature extractor is employed.\\n\\n(2) \\u201cA.10. Visual Comparison between GFMN and GMMN Generated Images.\\u201d, which shows a visual comparison between images generated by GFMN and GMMN (Li et al., 2015).\\n\\n(3) \\\"A.11. Sampling from the Pretrained Decoder vs. Sampling from GFMN models.\\\", where we present a comparison between images generated by sampling from the decoder (with no additional training from GFMN) and images generated by GFMN.\\n\\n(4) \\\"A.12. Impact of using Global Mean Features vs. 
Minibatch-wise Mean Features of the REAL data.\\\", where we demonstrate that even if we have the mean over the whole real dataset (global mean), one still needs a moving average for the generator (fake) features, and that the Adam moving average outperforms the simple moving average.\\n\\n(5) \\\"A.13. Autoencoder features vs. VGG19 features for CelebA.\\\", which presents results that demonstrate the superiority of VGG19 features on the CelebA dataset.\\n\\n*** Appendices A11, A12 and A13 added after we received the review from Rev #4 ***\\n\\n\\nWe hope we have addressed the main concerns of the reviewers and that the new revised paper will help them in further appreciating the technical contributions of the paper and in improving their overall assessment. We think our paper brings an exciting result to the deep learning community that conveys a simple yet exciting message: feature matching in the space of a pre-trained deep CNN allows efficient training of generative models that circumvents the cumbersome min/max game in GANs.\"}",
"{\"title\": \"Review for \\\"Generative Feature Matching Networks\\\"\", \"review\": \"The paper proposes a non-adversarial feature matching generative model (GFMN). In feature matching GANs, the discriminator extracts features that are employed by the generator to match the real data distribution. Through the experiments, the paper shows that the loss function is correlated with the generated image quality, and the same pretrained feature extractor (pre-trained on ImageNet) can be employed across a variety of datasets. The paper also discusses the choice of a pretrained network or autoencoder as the feature extractor. The paper also introduces an ADAM-based moving average. The paper compares results on CIFAR10 and STL10 with a variety of recent state-of-the-art approaches in terms of IS and FID.\\n\\n+ The paper is well written and easy to follow. - However, there are some typos that should be addressed, such as:\\n\\u201cThe decoder part of an AE consists exactly in an image generator \\u201d\\n\\u201cOur proposed approach consists in training G by minimizing\\u201d\\n\\u201cDifferent past work have shown\\u201d -> has\\n\\u201cin Equation equation 1 by\\u201d\\n\\u201chave also used\\u201d better to use the present tense.\\n\\n+ It suggests a non-adversarial approach to generate images using pre-trained networks. So the training is easier and the quality of the generated images, as well as the FID and IS, are still comparable to the state-of-the-art approaches.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper consists of two contributions: (1) using a fixed pre-trained network as a discriminator in the feature matching loss (Salimans et al., 2016). Since it's fixed there is no GAN-like training procedure. (2) Using an \\\"ADAM\\\" moving average to improve the convergence of the feature matching loss.\\n\\nThe paper is well written and easy to follow, but it lacks some intuition for the proposed approach. There are also some typos, e.g. \\\"quiet\\\" -> quite. Overall, it's a combination of several published methods, so I would expect strong performance/analysis in the experimental section.\", \"detailed_comment\": \"For contribution (1):\\n\\nThe proposed method is very similar to (Li et al. (2015)), as the authors pointed out in related work, except that this work maps to the data space directly. Is there any intuition why this is better? \\n\\nThe proposed loss (the same as (Salimans et al., 2016)) only tries to match first-order moments. So I assume it is insensitive to higher-order statistics. Is it less successful at producing samples with high visual fidelity?\\n\\nFor contribution (2):\\n\\n\\\"one would need big mini-batches which would result in slowing down the training.\\\" Why does a larger batch slow down the training? Are there any qualitative results? Based on recent papers, e.g. BigGAN, it seems the model can benefit a lot from larger batches. In the meanwhile, even if a larger batch makes it slower to converge, it can improve throughput. \\n\\nAgain, can the authors provide some intuition for these modifications? It's also unclear to me what ADAM() is. Better to link some equation to the original paper or simply write down the formulation and give some explanation of it.\", \"for_experiments\": \"I'm not an expert at interpreting experimental results for image generation. But overall, the results seem not very impressive. 
Given that the best results use a classifier pretrained on ImageNet, I think it should compare with some semi-supervised image generation papers.\\n\\nFor example, for the CIFAR results, it seems worse than (Warde-Farley & Bengio, 2017), Table 1, semi-supervised case. If we compare the unsupervised case (autoencoder), it also seems a lot worse. \\n\\nAppendix A.8 is very interesting / important for applying a pre-trained network in the GAN framework. However, it only says it failed to train, without any explanation.\\n\\nI think even if it is just comparable with GANs, it is interesting if there is no mode collapse and it is easy to train. However, it has no proper ImageNet results (it has a subset, but only some generated images are shown here). \\n\\nIn summary, this paper provides some interesting perspectives. However, the main algorithms are very similar to some existing methods; more discussion could be used to compare with the existing literature and clarify the novelty of the current paper. The empirical results could also be made stronger by including more relevant baseline methods and a more systematic study of the effectiveness of the proposed approach. I tend to give a weak reject or reject for this paper.\\n\\n---\\n\\nUpdated post in AC discussion session.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting results supported by experiments\", \"review\": \"The paper introduces Generative Feature Matching Networks (GFMNs), a non-adversarial approach to train generative models based on feature matching. GFMN uses pretrained neural networks such as Autoencoders (AE) and Deep Convolutional Neural Networks (DCNN) to extract features. Equation (1) is the proposed loss function for the generator network. In order to avoid big mini-batches, the GFMN performs feature matching with an ADAM moving average. The paper validates its proposed approach with several experiments on benchmark datasets such as CIFAR10 and ImageNet.\\n\\nThe paper is well-written and straightforward to follow. The problem is well-motivated by fully discussing the literature and the proposed method is clearly introduced. The method is then validated using several different experiments.\", \"typos\": \"** Page 1 -- Paragraph 3 -- Line 8: \\\"(2) mode collapsing in not an issue\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rkGG6s0qKQ | The GAN Landscape: Losses, Architectures, Regularization, and Normalization | [
"Karol Kurach",
"Mario Lucic",
"Xiaohua Zhai",
"Marcin Michalski",
"Sylvain Gelly"
] | Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of ``tricks". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub. | [
"GANs",
"empirical evaluation",
"large-scale",
"reproducibility"
] | https://openreview.net/pdf?id=rkGG6s0qKQ | https://openreview.net/forum?id=rkGG6s0qKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Skgw2ODtyE",
"ryezN9__TQ",
"H1e5hKOdTm",
"Hyeml__dT7",
"Hye9PEPg6m",
"BJxVfiNqhm",
"H1gxi8FY2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544284335115,
1542126122349,
1542126002012,
1542125546964,
1541596257554,
1541192459982,
1541146263919
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper782/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper782/Authors"
],
[
"ICLR.cc/2019/Conference/Paper782/Authors"
],
[
"ICLR.cc/2019/Conference/Paper782/Authors"
],
[
"ICLR.cc/2019/Conference/Paper782/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper782/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper782/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a large-scale empirical comparison between different prominent losses, regularization and normalization schemes, and neural architectures frequently used in GAN training. Large-scale comparisons in this field are rare and important, and the outcome of the experimental analysis is clearly of interest for practitioners. However, as two of the reviewers point out, the significance of the new insights is limited, and after rebuttal all reviewers agree that the paper would profit from a clearer write-up and presentation of the main findings. I therefore see the paper as lying slightly under the acceptance threshold.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Interesting large-scale empirical comparison that could profit from a clearer presentation.\"}",
"{\"title\": \"Official response to AnonReviewer2\", \"comment\": \"Thank you for your time. We would like to take this opportunity to correct some factually incorrect statements below.\\n\\n[Q] The results of the paper do not give major insights into what are the preferred techniques for training GANs, and certainly not why and under what circumstances they'll work. \\n[A] We respectfully disagree. To our knowledge, this is only the second work which attempts to fairly and systematically compare GANs in a large-scale setting. The main conclusions of our work (about NS-GAN, spectral normalization, and gradient penalty) hold across several datasets and architectures. \\n\\n[Q] But there is no attempt to generalize the findings (e.g. new datasets not from original study, changing other parameters and then evaluating again if these techniques help etc.),\\n[A] We again respectfully disagree -- both LSUN and CelebaHQ are used for the first time in such a large-scale evaluation. In fact, none of the techniques were previously evaluated on CelebaHQ. Furthermore, even if some data sets, such as LSUN, were used previously, the comparison to other works was always done by the authors of the new method, usually with additional changes, such as architectural decisions and optimization tricks. \\n\\n[Q] Not clear if the improvement in performance is statistically significant, how robust it is to changes in other parameters etc.\\n[A] In this work we take care to systematically evaluate various design decisions. While the space of design decisions is too large to search over, we focus on the main design choices and provide some conclusions in this context. 
Performance improvements obtained by both spectral norm and gradient penalty are statistically significant as seen in the plots -- the performance with respect to the baseline is far outside of the two standard errors of the median in most settings.\\n \\n[Q] The authors also rely mostly on the FID metric, but do not show if and how there is improvement upon visual inspection of the generated images (i.e. is resolution improved, is fraction of images that look clearly 'unnatural' reduced etc.) \\n[A] FID was shown to correlate well with perceived image quality (e.g. precision) and mode coverage (recall). The evidence can be found in [1] and [2]. As such, a reduction in FID corresponds both to improved image quality and improved mode coverage. In practice, a 10% drop in FID is visible to a human, and samples can be seen in the Appendix. While it is not a perfect metric, it is arguably useful for sample-based relative comparison of generative models. \\n[1] https://arxiv.org/abs/1711.10337\\n[2] https://arxiv.org/abs/1806.00035\\n\\n[Q] The authors use numerous jargon words to describe the techniques studied (e.g. DRAGAN penalty, gradient penalty, spectral normalization, Gaussian process regression in the bandit setting) but they do not explain them, give mathematical formulations, making it hard for the non-expert reader to understand what these techniques are and why they are introduced. \\n[A] Most of these are described in Section 2 (in particular, the discussion on regularization and penalties is in Section 2.2). Describing all aspects of these techniques would require substantially more space and hence we refer to the original work for the precise formulations.\\n\\n[Q] With a lack of clear novel insights, or at least a more systematic study on additional datasets of the 'winning' techniques and a sensitivity analysis, the paper does not give a valuable enough contribution to the field to merit publication. 
\\n[A] We respectfully disagree: we believe that for GAN practitioners our paper presents many useful insights, namely: NS-GAN performs well, spectral norm is a good default normalization technique, gradient penalty should also be considered, even in combination with spectral norm, but will cost substantially more in terms of computational resources, popular metrics such as KID and FID result in the same relative ordering of the models so there is no point in computing both, most Resnet tricks do not matter, etc. All of these insights are supported by a fair and unbiased rigorous experimental process. On top of that, our experiments are reproducible (as already reported by other works), and we have shared the resulting code and the pre-trained models.\"}",
"{\"title\": \"Official response to AnonReviewer1\", \"comment\": \"Thank you for the comments; please find our responses to specific points below.\\n\\n[Q] \\u201cAs far as I can see the most important take home message of the paper can be summarized in \\\"one should consider non-saturating GAN loss and spectral normalization as default choices [...] Given additional computational budget, we suggest adding the gradient penalty [...] and train the model until convergence.\\\"\\n[A] While we want this study to be approachable by non-experts, some level of formalism is required as our main audience are researchers working on or interested in GANs. The summary you provided is indeed correct -- coupled with our open-sourced code, it allows a non-expert to train a GAN with state-of-the-art methods without needing to understand the details. On the other hand, for more experienced researchers, we provide more details on which design choices generalize to new settings and identify the biggest obstacles towards fair and unbiased quantitative evaluation of generative models. \\n\\n[Q] Limited amount of new insight.\\n[A] Our paper presents many useful insights, namely: NS-GAN performs well, spectral norm is a good default normalization technique, gradient penalty should also be considered, even in combination with spectral norm, but will cost substantially more in terms of computational resources, popular metrics such as KID and FID result in the same relative ordering of the models so there is no point in computing both, most Resnet tricks do not matter, etc. All of these insights are supported by a fair and unbiased rigorous experimental process. On top of that, our experiments are reproducible (as already reported by other works), and we have shared the resulting code and the pre-trained models.\\n\\n[Q] Clarification and exposition of plots.\\n[A] Say that you had access to a GPU and had to train a model (loss+penalty+architecture). 
How many hyperparameter settings would you need to consider to achieve a certain quality? The FID from the plot is the estimate of the min FID computed by bootstrap estimation, and the line-plots show this relationship. In other words, given a computing budget, which model should you pick? We will provide additional details in the caption of the plot.\\n\\n[Q] Bayesian optimization and variance.\\n[A] We agree and will provide more details. When the sequential Bayesian optimization chooses the next set of hyperparameter combinations to test, we run the model once (per hyperparameter combination) and report the scores to the optimizer. Then, the optimization algorithm takes these scores into account when selecting the next set of hyperparameters. The algorithm itself trades off exploration and exploitation, and it can explore hyperparameters \\\"close\\\" to the existing ones if they seem promising. Hence, the averaging happens implicitly during the search.\\n\\n[Q]: Studies and experiments. Stating that lower is better in the plots.\\n[A]: A study is a set of experiments (say a study on the impact of the loss). An experiment is a concrete run with certain hyperparameters. Stating lower is better is a good idea; we will add this to the captions.\"}",
"{\"title\": \"Thank you for the actionable insights\", \"comment\": \"[Q] My one ask would have been a survey of how activations might affect performance. I sense that everyone has settled upon LeakyReLUs for internal layers, but a survey of that work and experimentation within the authors' framework would have been nice.\\n[A] We agree that this is an interesting question in its own right and should and will be explored more rigorously in future work. At this point, it seems like the number of parameters and whether skip-connections are used is much more impactful.\\n\\n[Q] It would be interesting to see what these metrics would reveal when applied to other types of data (e.g. scientific images).\\n[A] We are aware of several works in the area of scientific images, such as [1] and [2], where GANs were successfully applied to 2D image snapshots from N-body simulations. The main issue for us at this point is having access to such data sets. Nevertheless, as these data sets become publicly available, we will happily include them within our framework and investigate whether the conclusions extend to data sets beyond natural images.\\n[1] https://arxiv.org/abs/1702.00403\\n[2] https://arxiv.org/abs/1801.09070\\n\\n[Q] I feel the discussion on loss was rushed, and I gained no insight on what the authors thought was a prominent difference between the three losses studied.\\n[A] The theoretical differences between these losses were studied in detail in the corresponding publications. From the practical side, it\\u2019s unclear which statistical divergence to optimize, in particular whether to pick (i) an f-divergence such as Chi-squared implemented by LS-GAN, or (ii) an integral probability metric such as the Wasserstein distance, or (iii) a loss function which doesn\\u2019t correspond to any statistical divergence, such as NS-GAN. 
Hence, we wanted to provide some insight on how these perform within different setups, not necessarily the ones used in the original publications. To this end, we uncover that on the considered data sets it's hard to outperform the non-saturating loss combined with regularization and normalization. Apart from this, the empirical evidence doesn\\u2019t allow us to say more and we will clarify this in the manuscript.\\n\\n[Q] For architectures to be a main pillar of the paper, I feel that this area could have been explored in greater detail. \\n[A] We agree with this assessment and we are indeed focusing on regularization and normalization. Our main question here was whether swapping Resnet with SNDCGAN leads to the same insights, which is indeed the case. On the other hand, architectures are such a rich area enabling various design choices that they possibly merit a paper on their own. We will clarify the precise goal of the architecture exploration in this work. This being said, one major question we wanted to understand is which Resnet tricks from the literature (all 7 of them) are meaningful in practice, and we present an ablation in Section D of the appendix to conclude that the only relevant one is the number of channels, which makes sense as it drastically changes the number of trainable parameters.\\n\\n[Q] The graphs were difficult to parse. I was able to make them out, but perhaps separating the top row (FID and diversity graphs) into separate figures, separate lines, or something would have reduced some confusion. In addition, different charts presenting only one loss function, with their spectral normalization and gradient penalty variants, would have made the effects of the normalization more obvious on the FID distribution graphs. If this can be changed before publication, I would strongly suggest it.\\n[A] What we can do is separate the top from the bottom figure into separate figures and provide more information in the captions. 
Furthermore, in Figure 1, for the FID distribution plots, we can group the methods visually (according to the loss function) by drawing a slightly shaded rectangle around results with the same loss (e.g. https://goo.gl/6YeUL1). If you have a specific proposal we would be happy to consider it and update the submission.\\n\\n[Q] In the future, the authors should be careful to provide an anonymous repository for review purposes.\\n[A] This is a good point and we will address this issue in the future.\\n\\n[Q] I would suggest changing the title to be more appropriate and accurate (the researchers are primarily focused on showing the positive and negative effects of normalization across various loss functions and architectures). \\n[A] Given the architecture discussion stated above, this is a valid point. Our current candidate is:\\n\\u201cThe GAN Landscape: The effect of Regularization and Normalization across various Losses and Neural Architectures\\u201d. However, if you have a specific proposal we would be happy to consider it.\"}",
"{\"title\": \"An empirical study of GAN training techniques. Lacks significant novel insights\", \"review\": \"\", \"the_paper_studies_several_different_techniques_for_training_gans\": \"the architecture chosen, the loss function of the discriminator and generator,\", \"and_training_techniques\": \"normalization methods, ratio between updates of discriminator and generator, and regularization.\\nThe method performs an empirical training study on three image datasets, modifying the training procedure (e.g. changing one of the parameters) and using different metrics to evaluate the performance of the trained network. \\nSince the space of possible hyper-parameters, training algorithms, loss functions and network architectures is huge, the authors set a default training procedure, and in each numerical experiment freeze all techniques and parameters\\nexcept for one or two which they modify and evaluate. \\n\\nThe results of the paper do not give major insights into what are the preferred techniques for training GANs, and certainly not why and under what circumstances they'll work. \\nThe authors recommend using the non-saturated GAN loss and spectral normalization when training on new datasets, because these techniques achieved good performance metrics in most experiments. \\nBut there is no attempt to generalize the findings (e.g. new datasets not from the original study, changing other parameters and then evaluating again if these techniques help etc.), and it is not clear if the \\nimprovement in performance is statistically significant, how robust it is to changes in other parameters etc. \\nThe authors also rely mostly on the FID metric, but do not show if and how there is improvement upon visual inspection of the generated images (i.e. is resolution improved, is the fraction of images that look clearly 'unnatural' reduced etc.) \\n\\nThe writing is understandable for the most part, but the paper seems to lack focus - there is no clear take-home message. 
\\nThe authors use numerous jargon words to describe the techniques studied (e.g. DRAGAN penalty, gradient penalty, spectral normalization, Gaussian process regression in the bandit setting) but they do not explain them, \\ngive mathematical formulations, or insights into their advantages/disadvantages, making it hard for the non-expert reader to understand what these techniques are and why they are introduced. \\n\\nWith a lack of clear novel insights, or at least a more systematic study on additional datasets of the 'winning' techniques and a sensitivity analysis, the paper does not give a valuable enough contribution to the field to merit publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Okay contribution, but exposition could be better and lacks good take home messages\", \"review\": \"(As a disclaimer I want to point out I'm not an expert in GANs and have only a basic understanding of the sub-field, but arguably this would make me the target audience of this paper).\\n\\nThe authors present a large-scale study comparing a large number of GAN experiments; in this study they compare various choices of architectures, losses and hyperparameters. The first part of the paper describes the various losses, architectures, regularization and normalization schemes; and the second part describes the results of the comparison experiments.\\n\\nWhile I wish there were more such studies -- as I believe reproducing past results experimentally is important, and so is providing practical advice for practitioners -- this work is in many parts hard to follow, and it is hard to get a lot of new insight from the results, or a better understanding of GANs. As far as I can see the most important take home message of the paper can be summarized in \\\"one should consider non-saturating GAN loss and spectral normalization as default choices [...] Given additional computational budget, we suggest adding the\\ngradient penalty [...] 
and train the model until convergence\\\".\", \"pros\": [\"available source code\", \"large number of experiments\"], \"cons\": [\"the exposition could be improved, in particular the description of the plots is not very clear, I'm still not sure exactly what they show\", \"not clear what the target audience of the first part (section 2) is, it is too technical for a survey intended for outsiders, and discusses subtle points that are not easy to understand without more knowledge, but at the same time seems unlikely to give additional insight to an insider\", \"limited amount of new insight, which is limiting as new and better understanding of GANs and practical guidelines are arguably the main contribution of a work of this type\", \"Some suggestions that I think could make the paper stronger\", \"I believe that in particular section 2 goes into too many mathematical details and subtleties that do not really add a lot. I think that either the reader already understands those concepts well (which I admit, I don't really, I'm merely curious about GANs and have been following the action from a distance, hence my low confidence rating for this review), or if they do not, it will be very hard to get much out of it. I would leave out some of the details, shortening the whole section, and focus more on making a few of the concepts more understandable, and potentially leaving more space for a clearer description of the results\", \"it is not really clear to me what data the graphs show: the boxplots show 5% of what data? does it also include the models obtained by Gaussian process regression? and what about the line plots, is it the best model so far as you train more and more models? if so, how are those models chosen and ordered? are they the results of single models or averages of multiple ones?\", \"\\\"the variance of models obtained by Guassian Process regression is handled implicitely so we tran each model once\\\"? 
I do not understand what this means, and I work with hyper-parameter tuning using Gaussian processes daily. It should probably be rephrased\", \"at the start of section 3: what is an \\\"experiment\\\"?\", \"in 3.1 towards the end of the first paragraph, what is a \\\"study\\\", is that the same as an experiment or something different?\", \"(minor) stating that lower is better in the graphs might be useful\", \"(minor) typo on page 5 \\\"We use a fixed the number\\\"\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A study of the effects of normalization and regularization in GANs\", \"review\": \"This paper seems to be an exposition on the primary performance affecting aspects of generative adversarial networks (GANs). This can possibly affect our understanding of GANs, helping practitioners get the most in their applications, and perhaps leading to innovations that positively affect GAN performance.\\n\\nNormally, expositions such as this I find difficult to recommend for publication. In these times, one can find \\\"best practices\\\" with a reasonable amount of rigor on data science blogs and such. An exposition that I would recommend for publication, would need to exhibit a high sense of depth and rigor for me to deem it publication worthy. This paper, for me, achieves this level of quality.\\n\\nThe authors start off by giving a precise, constrained list of hyperparameters and architectural components that they would explore. This is listed in the title and explained in detail in the beginning of the paper. The authors are right in explaining that they could not cover all hyperparameters and chose what I feel are quite salient ones. My one ask would have been a survey of how activations might affect performance. I sense that everyone has settled upon LeakyReLUs for internal layers, but a survey of that work and experimentation within the authors' framework would have been nice.\\n\\nThe authors then explain the metrics for evaluation and datasets. The datasets offered a healthy variety for typical image recognition tasks. It would be interesting to see what these metrics would reveal when applied to other types of data (e.g. scientific images).\\n\\nThe authors explain, with graphs, the results of the loss, normalization, and architectures. I feel the discussion on loss was rushed, and I gained no insight on what the authors thought was a prominent difference between the three losses studied. 
Perhaps the authors had no salient observations for loss, but explicitly stating such would be useful to the reader. The only observation I gained as far as this is that non-saturating loss would possibly be stable across various datasets.\\n\\nRegularization and normalization are discussed in much more detail, and I think the authors made helpful and interesting observations, such as the benefits of spectral normalization and the fact that batch normalization in the discriminator might be a harmful thing. These are good takeaways that could be useful to a vast number of GANs researchers.\\n\\nFor architectures to be a main pillar of the paper, I feel that this area could have been explored in greater detail. I feel that this discussion devolved into a discussion, again, about normalization rather than the architectural differences in performance. Unless I am misunderstanding something, it seems that the authors simply tested one more architecture, for the express purpose of testing whether their observations about normalization would hold.\\n\\nAs a bonus, the authors bring up some problems they had in making comparisons and reproducing results. I think this is an extremely important discussion to have, and I am glad that the authors detailed the obstacles in their journey. Hopefully this will inspire other researchers to avoid adding to the complications in this field.\\n\\nThe graphs were difficult to parse. I was able to make them out, but perhaps separating the top row (FID and diversity graphs) into separate figures, separate lines, or something would have reduced some confusion. In addition, different charts presenting only one loss function, with their spectral normalization and gradient penalty variants, would have made the effects of the normalization more obvious on the FID distribution graphs. If this can be changed before publication, I would strongly suggest it.\\n\\nI appreciate that the authors provided source code via GitHub. 
However, in the future, the authors should be careful to provide an anonymous repository for review purposes. I had to be careful not to allow myself to focus on the author names which are prominent in the repository readme, and one of whom has his/her name in the GitHub URL itself. I didn't immediately recognize the names and thus it was easy for me not to retain them or focus on them. However, if it had been otherwise, it might have risked biasing the review.\\n\\nIn all, I think this is a good and useful paper from which I have learned and to which I will refer in the future as I continue my research into GANs and VAEs. I would suggest changing the title to be more appropriate and accurate (the researchers are primarily focused on showing the positive and negative effects of normalization across various loss functions and architectures). But altogether, I believe this is a paper worth publishing at ICLR.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1gGpjActQ | Hint-based Training for Non-Autoregressive Translation | [
"Zhuohan Li",
"Di He",
"Fei Tian",
"Tao Qin",
"Liwei Wang",
"Tie-Yan Liu"
] | Machine translation is an important real-world application, and neural network-based AutoRegressive Translation (ART) models have achieved very promising accuracy. Due to the unparallelizable nature of the autoregressive factorization, ART models have to generate tokens one by one during decoding and thus suffer from high inference latency. Recently, Non-AutoRegressive Translation (NART) models were proposed to reduce the inference time. However, they could only achieve inferior accuracy compared with ART models. To improve the accuracy of NART models, in this paper, we propose to leverage the hints from a well-trained ART model to train the NART model. We define two hints for the machine translation task: hints from hidden states and hints from word alignments, and use such hints to regularize the optimization of NART models. Experimental results show that the NART model trained with hints could achieve significantly better translation performance than previous NART models on several tasks. In particular, for the WMT14 En-De and De-En task, we obtain BLEU scores of 25.20 and 29.52 respectively, which largely outperforms the previous non-autoregressive baselines. It is even comparable to a strong LSTM-based ART model (24.60 on WMT14 En-De), but one order of magnitude faster in inference. | [
"Natural Language Processing",
"Machine Translation",
"Non-Autoregressive Model"
] | https://openreview.net/pdf?id=r1gGpjActQ | https://openreview.net/forum?id=r1gGpjActQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJx6yBXrxN",
"S1lIxAuFyV",
"rke_YdcHkN",
"HJg04FBCAQ",
"rylRpqlCAm",
"ByeImig4AX",
"rkgOEYeNCX",
"SJg6ougVAX",
"Hkx6vOg40m",
"SkxNVPkkTQ",
"rklofD7c3X",
"SyxK4plKnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545053413009,
1544289773902,
1544034431889,
1543555381533,
1543535301771,
1542880029658,
1542879535740,
1542879397042,
1542879333108,
1541498668310,
1541187346818,
1541111089409
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper781/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper781/Authors"
],
[
"ICLR.cc/2019/Conference/Paper781/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper781/Authors"
],
[
"ICLR.cc/2019/Conference/Paper781/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper781/Authors"
],
[
"ICLR.cc/2019/Conference/Paper781/Authors"
],
[
"ICLR.cc/2019/Conference/Paper781/Authors"
],
[
"ICLR.cc/2019/Conference/Paper781/Authors"
],
[
"ICLR.cc/2019/Conference/Paper781/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper781/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper781/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": [\"sufficiently strong results\", \"a fast / parallelizable model\", \"Novelty with respect to previous work is not as great (see AnonReviewer1 and AnonReviewer2's comments)\", \"The same reviewers raised concerns about the discussion of related work (e.g., positioning with respect to work on knowledge distillation). I agree that the very related work of Roy et al should be mentioned, even though it has not been published it has been on arxiv since May.\", \"Ablation studies are only on smaller IWSLT datasets, confirming that the hints from an auto-regressive model are beneficial (whereas the main results are on WMT)\", \"I agree with R1 that the important modeling details (e.g., describing how the latent structure is generated) should not be described only in the appendix, esp given non-standard modeling choices. R1 is concerned that a model which does not have any autoregressive components (i.e. not even for the latent state) may have trouble representing multiple modes. I do find it surprising that the model with non-autoregressive latent state works well however I do not find this a sufficient ground for rejection on its own. However, emphasizing this point and discussing the implication in the paper makes a lot of sense, and should have been done. As of now, it is downplayed. R1 is concerned that such model may be gaming BLEU: as BLEU is less sensitive to long-distance dependencies, they may get damaged for the model which does not have any autoregressive components. 
Again, given the standards in the field, I do not think it is fair to require human evaluation, but I agree that including it would strengthen the paper and the arguments.\", \"Overall, I do believe that the paper is sufficiently interesting and should get published but I also believe that it needs further revisions / further experiments.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"needs more work\"}",
"{\"title\": \"Re: Regarding Mode Breaking\", \"comment\": \"While we fully get your points, we have to say that we respectfully disagree with your opinions and we are afraid that you have misunderstandings about the research area of neural machine translation.\\n \\n- \\u201cI would argue that the ability of the model to capture said diversity is an important factor to decide whether to deploy such a model.\\u201d\\nWe have worked on neural machine translation for years and have been involved in developing a popular public translation service. We have also communicated with multiple translation teams (including both big Internet companies and small startups). According to our own experiences and the messages from other groups, the ability to capture such diversity is indeed an unimportant factor to consider while deploying a model, at least at the current stage. According to our knowledge, the most important factors to consider are accuracy, inference latency, and then the model size, which are also hot research topics in neural machine translation. While we agree that the ability to capture diversity is an interesting research problem, it is not the first priority to consider in real-world machine translation systems and not our focus in this work.\\n \\n- \\u201can imperfect, real-world MT model that is unable to capture multiple modes might output an erroneous translation (corresponding to one mode)\\u201d\\nWhile it is not clear whether an erroneous translation is caused by the inability to capture such diversity, we do believe that improving translation accuracy (e.g., in terms of BLEU) can somehow address the problem. This is exactly our focus in this paper.\\n \\n- \\u201cWhat if one of the modes that the model misses is the correct one, while the mode it converges on is an incorrect mode due to noise in the dataset?\\u201d\\nGood point. Our model cannot model noise in the dataset. 
Our model aims to converge to the major mode of the training data (this is also the case for most MT models). If the majority of the translations of a certain source sentence in the training data is incorrect, our model will converge to the incorrect mode and output an incorrect translation. For this case, even if a model can well capture multiple modes, it is still very likely to output an incorrect translation, because the incorrect mode is the major one in the training data and a real-world MT system only outputs one translation, usually corresponding to the major mode. That is, the ability to capture multiple modes *does not* help to solve this problem. Furthermore, if one uses a completely random model for MT, one can eventually get the correct translation result by asking the model repeatedly. However, this kind of \\u201cmultimode\\u201d is definitely not what we want for a real-world MT model.\\n \\n- \\u201cthis model may be able to game the BLEU score which does not evaluate a model on diversity, I still think this approach to non-autoregressive translation is a hack and has limited practical usefulness due to its inability to sufficiently capture multiple modes in the translation data\\u201d\\nWe are surprised that you say we \\u201cgame the BLEU\\u201d because BLEU \\u201cdoes not evaluate a model on diversity\\u201d. According to this judgement rule, most (maybe >90%) research on neural machine translation, including those most influential ones such as LSTM with attention [1], ConvS2S [2] and Transformer [3], will be meaningless, because their primary contribution is to improve the translation quality in terms of BLEU. In other words, they also \\u201cgame BLEU score which does not evaluate a model on diversity\\u201d. We are afraid this statement might reflect a negative bias towards our work and the whole area of neural machine translation, in which BLEU is the most widely used metric and improving BLEU is one of the most important goals. 
\\u201cBLEU does not reflect diversity\\u201d does not mean \\u201cBLEU is not a good measure for translation quality\\u201d.\\n\\n\\n\\n[1] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate. ICLR 2015.\\n[2] Gehring J, Auli M, Grangier D, et al. Convolutional sequence to sequence learning. ICML 2017.\\n[3] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. NeurIPS 2017.\"}",
"{\"title\": \"Re: Regarding Mode Breaking\", \"comment\": \"\\\"To some extent, a machine translation system doesn\\u2019t require to model **multiple modes** and is not evaluated by whether the model can generate **multiple modes**. \\\"\\n\\nI am fully aware of what Machine Translation is - sure, to some extent the MT model is not evaluated on its ability to capture multiple modes in the data. However, I would argue that the ability of the model to capture said diversity is an important factor to decide whether to deploy such a model. For example, an imperfect, real-world MT model that is unable to capture multiple modes might output an erroneous translation (corresponding to one mode), and the confused user on querying the model again is faced with the same wrong translation because your model is unable to capture the multiple modes in a messy, noisy real-world translation dataset.\\n\\n\\\"Our work, Gu et al. and Kaiser et al. use sentence-level knowledge distillation(KD), which is super effective in all the works. \\\"\\n\\nIf you read the papers carefully you will find that sentence-level knowledge distillation was not used in Kaiser et al., but in Roy et al., which you refuse to cite, even though, as R3 pointed out, it has been on arxiv since May 2018. \\n\\nRegarding the example I pointed out, as you yourself acknowledge the NART model will be unable to capture the two modes present in the dataset. What if one of the modes that the model misses is the correct one, while the mode it converges on is an incorrect mode due to noise in the dataset? \\n\\nOn the other hand, the models of Kaiser et al., Roy et al., are able to capture this diversity because of the autoregressive prior fit on the latents z. 
While this model may be able to game the BLEU score which does not evaluate a model on diversity, I still think this approach to non-autoregressive translation is a hack and has limited practical usefulness due to its inability to sufficiently capture multiple modes in the translation data, due to the inexpressivity of the choice of z's the authors use. Hence my rating still stands.\"}",
"{\"title\": \"Author Response - Regarding Mode Breaking\", \"comment\": \"Dear reviewer:\\n\\nTo address the concern of mode breaking, we make some discussion here. \\n\\n1. To some extent, a machine translation system doesn\\u2019t require to model **multiple modes** and is not evaluated by whether the model can generate **multiple modes**. \\n\\nMachine translation is a real application which aims at providing the correct translation to users, but not providing multiple diverse translations. For example, Google Translate just provides the best translation result from a set of translation candidates, but not a set of translation results with multiple modes. This is a bit different from other generative modeling tasks such as music generation (WaveNet and parallel-WaveNet).\\n \\nThis claim can also be justified from the evaluation metric of machine translation. In the test data, we usually have bilingual sentence pairs, one side is called the source sentence, the other side is called the reference sentence (ground truth target sentence). Sometimes one source sentence has multiple reference sentences. When we have a translation output, the translated sentence will be compared to each reference sentence and use the **maximum** BLEU score as the correctness of the translation. That is being said, although we have multiple references with multiple modes, we just use BLEU score between the translation results and the most similar reference, but do not evaluate the diversity.\\n\\n2. Sentence-level knowledge distillation can map **multiple modes** to a **single mode**. \\n\\nOur work, Gu et al. and Kaiser et al. (Roy et al.) use sentence-level knowledge distillation(KD), which is super effective in all the works. Sentence-level KD can be considered as using the auto-regressive model output instead of the reference. We have explicitly mentioned this in the paper (\\\"Following Gu et al. 
(2017); Kim & Rush (2016), we replace the target sentences in all datasets by the decoded output of the teacher models.\\\"). As observed by [1], ART model outputs are more stable and the patterns are clearer.\\n \\nIn the example that the reviewer suggests \\u201csuppose our dataset consists of sequences of numbers, where with probability 0.5 the sequence is sorted in ascending order and with probability 0.5 it is sorted in descending order\\u201d. We assume the ART model can well capture such modes and further assume there are some training errors: the ascending order sequence is predicted with probability 0.501 while the other one is predicted with probability 0.499. Once we use the greedy search algorithm (e.g., beam search) for distillation, the ART model output is always the in-ascending-order one which has a relatively larger probability.\\n \\nFrom the above discussion, we can see that although the original dataset may be **multiple-mode**, the distilled dataset is reduced to **single mode** by using KD. A **single mode** dataset is much easier for training non-autoregressive models. That is also a reason why our simple z works.\\n\\n3. Regarding appendix\\n\\nWe move the model details to the appendix due to the paper length requirements. Per the reviewer's request, we are willing to move any parts back to the main body if they are considered to be important to the paper quality. Furthermore, we are also willing to release our reproducible code and models for testing all the tasks mentioned in the paper.\\n\\nWe believe the performance of our non-autoregressive model is significant and we hope the discussions above address the concerns of the reviewers and ACs. \\n\\n[1] Ott, Myle, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. \\\"Analyzing uncertainty in neural machine translation.\\\" ICML 2018\"}",
"{\"title\": \"Concerns persist\", \"comment\": \"I have read the author rebuttal carefully as well as the updated version of their paper. I do believe the paper is well motivated and is addressing an important problem. However, I still have the following concerns about this work, which I feel the authors have not been able to address.\\n\\n+ Do not understand how the z's are being used: Firstly, the authors define the formulation of z's in the appendix, while the definition of z's is quite critical and can dictate whether or not the approach could work. Further, I do not see a clear explanation of how the z's are being used by the decoder. Reviewers are not required to read the Appendix in judging the work and I wonder if the other reviewers who are positive about the work, understand the details of this approach? To me it seems that the choice of z is critical in determining if this method could work, since no mode-breaking can happen in the part where the model produces the y's given x's and z's. \\n\\n+ Mode breaking: I do not understand how the proposed approach is breaking the multiple modes that can arise in a translation problem. Here is a concrete example - suppose our dataset consists of sequences of numbers, where with probability 0.5 the sequence is sorted in ascending order and with probability 0.5 it is sorted in descending order. This dataset has exactly two modes. An autoregressive model can solve this mode breaking problem because the second token depends on the first token, and so on. The non-autoregressive model proposed by Kaiser et al, can also solve this problem since the second latent depends on the first latent and so on.\\n\\nHowever, it is not at all clear to me how the current approach would work in this case. 
In principle if the z's were some disentangled representation of the targets, then I could imagine that this might work - but the z's proposed by the authors is a fixed linear combination of the embeddings of the source and depend only on the length of the targets. The z's are independent of each other and the y's are independent given the z's and x's. So I do not understand where mode-breaking could be happening?\\n\\nSo while this approach may be working on the WMT'En-De dataset where the sequence sizes are small, I do not see this approach generalizing to harder multimodal translation datasets and longer sequences. Thus my original rating stands.\"}",
"{\"title\": \"Revision to the Paper\", \"comment\": \"Thanks to all reviewers for their valuable comments. We have uploaded a new version of the paper that includes the following discussions in the Appendix:\\n\\n1. We discuss the previous related works on knowledge distillation in Appendix B.\\n\\n2. We include extra experiments to show the effectiveness of our proposed method in Appendix D.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for the review! We believe there are some misunderstandings here. We respond to the concerns below. We will also post our source code and trained models for verification after the double-blind review period.\\n\\n1. Regarding the design choice of z\\n\\nThe reviewer considers that in a non-autoregressive translation model, \\u201cz_j does not depend on z_1, ..., z_{j-1}\\u201d and \\u201cz_1, z_2, ..., z_{T_y} depends only on the length of y\\u201d are unreasonable and red flags. However, before our paper, such a setting had already been shown to work in non-autoregressive translation. We believe the reviewer\\u2019s understanding of this might be incorrect.\\n\\nBoth Gu et al. [1] and we choose to generate z in a non-autoregressive way, and ours is a further simplification of [1]. In [1], the hidden z_1, \\u2026, z_{T_y} (the \\u201cfertility\\u201d module) are also mutually independently generated, and have only a limited dependency on y. Please note that the simplicity of z does not mean that the model will definitely suffer from poor translation quality. Although the hidden z is simple, the model itself is a deep neural network, consisting of different components (self-attention layer, encoder-to-decoder attention layer, positional attention layer) that enable the model to learn a complex mapping from x and z to y. \\n\\nWe believe a simple design choice of z is enough and do not list it as a major contribution of our work. Our main technical contribution is to improve the model performance by a well-designed training algorithm that learns from teachers. Our experimental results show that with our carefully designed training algorithm, a non-autoregressive model with a simple z can achieve near-autoregressive performance, while benefiting from the speedup brought by the little overhead of such a simple z. \\n\\n2. 
Regarding \\\"orders of magnitude faster\\\" related works\\n\\nThanks for pointing out the recent work from Roy et al. [3], which is also an ICLR submission this year. By checking their paper, we can see that the speedup of Roy et al. [3] is not \\\"orders of magnitude faster\\\". It is only 4.08x when reaching the highest performance, compared to 17.8x in our work. It is not true that the model by Roy et al. is \\\"orders of magnitude faster\\\".\\n\\nThe main reason is that they choose to use a more complex z with an autoregressive module. Such overhead of z will greatly hurt their speedup, which also contradicts the initial purpose of introducing non-autoregressive modeling. We believe the translation quality of our model (25.2 for WMT En-De, 29.52 for WMT De-En) is significant given the large speedup of our model.\\n \\n[1] Gu, Jiatao, et al. \\\"Non-autoregressive neural machine translation.\\\" ICLR 2018\\n[2] Lee, Jason, Elman Mansimov, and Kyunghyun Cho. \\\"Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement.\\\" EMNLP 2018\\n[3] Roy, Aurko, et al. \\\"Theory and Experiments on Vector Quantized Autoencoders.\\\" arXiv 2018\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for the review! We have added discussions on related references and KD to the paper.\\n\\nIn machine translation, the state-of-the-art model uses beam search, and thus we also use it in the baseline for comparison. Batch size 1 for decoding is a common practice when comparing the speedups of non-autoregressive translation models [1, 2, 3]. We follow the practice of previous works to make a fair comparison. Setting batch size 1 and studying the efficiency is also reasonable. Just consider the applications where the translation computation is done on a portable device (e.g. offline translation app on a smartphone). In such a scenario, the user inputs one sentence and expects the translation result.\\n\\n[1] Gu, Jiatao, et al. \\\"Non-autoregressive neural machine translation.\\\" ICLR 2018\\n[2] Lee, Jason, Elman Mansimov, and Kyunghyun Cho. \\\"Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement.\\\" EMNLP 2018\\n[3] Kaiser, \\u0141ukasz, et al. \\\"Fast Decoding in Sequence Models Using Discrete Latent Variables.\\\" ICML 2018.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thanks for the review! The main contribution of the paper is to show that with our proposed hint-based training algorithm, a simple non-autoregressive model without a complex submodule can reach competitive performance near an autoregressive model, while still being orders of magnitude faster. We think the results and findings are significant and will be helpful to future works in this direction.\", \"we_conduct_the_following_two_experiments_according_to_the_suggestions\": [\"According to our study, the proposed algorithm reduces the percentage of repetitive words by more than 20% in the IWSLT De-En task.\", \"We filter out all the sentences whose lengths are at least 40 in the test set of IWSLT De-En, and test the baseline model and the model trained with hints on the subsampled set. It turns out that our model outperforms the baseline model by more than 3 points in terms of BLEU (20.63 vs. 17.48). Note that incoherent patterns like repetitive words are a common phenomenon among sentences of all lengths, rather than a special problem for long sentences.\", \"As stated in the paper, it is quite difficult to find a uniform, fair measure for comparing the speed of non-autoregressive models. Since the speedup of non-autoregressive models comes from their fine-grained parallelism, traditional metrics like FLOPs do not fit. Absolute metrics like latency highly depend on the underlying hardware and code implementations. Therefore, we use speedup in our paper as a better relative measure for a fair comparison.\"]}",
"{\"title\": \"good results, okay paper\", \"review\": \"In this paper, the authors propose an extension to the Non-AutoRegressive Translation model by Gu et al., to improve the accuracy of non-autoregressive models as compared to autoregressive translation models.\\nThe authors propose using hints, which take two forms:\\n1. Hidden output matching, by incurring a penalty if the cosine distances between the representations differ according to a threshold. The authors state that this reduces repetition of the same output word, which is common for NART models.\\n2. Reducing the KL divergence between the attention distributions of the teacher and the student model in the encoder-decoder attention part of the model.\\n\\nWe see experimental evidence from 3 tasks showing the effectiveness of this technique.\\n\\nThe strengths of this paper are the speedup improvements of using these techniques on the student model while also improving BLEU scores. \\nThe paper is easy to read and the visualisations are useful.\\n\\nThe main issue with this paper is the delta contribution as compared to the NART model of Gu et al. The 2 techniques, although simple, don't make up for technical novelty.\\nIt would also be good to see more quantitative analysis of how much word repetition is reduced using these techniques, and of performance especially on longer sequences.\\n\\nAnother issue is the comparison of latency measurements for decoding. The authors state that the hardware and the setting under which the latency measurements are done might be different as compared to previous numbers. Though the speedup improvements are still impressive, it becomes somewhat fuzzy to understand the actual gains.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good results, although knowledge distillation and its use in non-autoregressive NMT should be discussed better.\", \"review\": \"This paper proposes to distill knowledge from intermediary hidden states and\\nattention weights to improve non-autoregressive neural machine translation.\", \"strengths\": \"Results are sufficiently strong. Inference is much faster than for\\nauto-regressive models, while BLEU scores are reasonably close.\\n\\nThe approach is simple, only necessitating two auxiliary loss functions during\\ntraining, and rescoring for inference.\", \"weaknesses\": \"The discussion of related work is deficient. Learning from hints is a variant\\nof knowledge distillation (KD). Another form of KD, using the auto-regressive\\nmodel output instead of the reference, was shown to be useful for non-autoregressive\\nneural machine translation (Gu et al., 2017, already cited). The authors mention using\\nthat technique in section 4.1, but don't discuss how it relates to their work. [1] should\\nalso probably be cited.\\n\\nHu et al. [2] apply a slightly different form of attention weight distillation.\\nHowever, the preprint of that paper was available just over one month before the\\nICLR submission deadline.\", \"questions_and_other_remarks\": \"Do the baselines use greedy or beam search?\\n\\nWhy batch size 1 for decoding? With larger batch sizes, the speed-up may be\\nlimited by how many candidates fit in memory for rescoring.\\n\\nPlease fix \\\"are not commonly appeared\\\" on page 4, section 3.1.\\n\\n[1] Kim, Yoon and Alexander M. Rush. \\\"Sequence-Level Knowledge Distillation\\\" EMNLP. 2016.\\n[2] Hu, Minghao et al. \\\"Attention-Guided Answer Distillation for Machine Reading Comprehension\\\" EMNLP. 2018\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Some concerns\", \"review\": \"This work proposes a non-autoregressive Neural Machine Translation model which the authors call NART, as opposed to an autoregressive model which is referred to as an ART model. The main idea behind this work is to leverage a well trained ART model to inform the hidden states and the word alignment of NART models. The joint distribution of the targets y given the inputs x, is factorized into two components as in previous works on non-autoregressive MT: an intermediate z which is first predicted from x, which captures the autoregressive part, while the prediction of y given z is non-autoregressive. This is the approach taken e.g., in Gu et al, Kaiser et al, Roy et al., and this also seems to be the approach of this work. The authors argue that improving the expressiveness of z (as was done in Kaiser et al, Roy et al), is expensive and so the authors propose a simple formulation for z. In particular, z is a sequence of the same length as the targets, where the j^{th} entry z_j is a weighted sum of the embedding of the inputs x (the weights depend in a deterministic fashion on j) . Given this z, the model predicts the targets completely non-autoregressively. However, this by itself is not entirely sufficient, and so the authors also utilize \\\"hints\\\": 1) If the pairwise cosine similarity between two successive hidden states in the student NART model is above a certain threshold, while the similarity is lower than another threshold in the ART model, then the NART model incurs a cost proportional to this similarity 2) A KL term is used to encourage the distribution of attention weights of the student ART model to match that of the teacher NART model. These two loss terms are used in different proportions (using additional hyperparameters) together with maximizing the likelihood term.\", \"quality\": \"The paper is not very well written and is often hard to follow in parts. 
Here are some examples of the writing that feel awkward:\\n\\n-- Consequently, people start to develop Non-AutoRegressive neural machine\\nTranslation (NART) models to speed up the inference process (Gu et al., 2017; Kaiser et al., 2018;\\nLee et al., 2018). \\n\\n-- In order to speed up to the inference process, a line of works begin to develop non-autoregressive\\ntranslation models.\", \"originality\": \"The idea of using an autoregressive teacher model to improve a non-autoregressive translation model has been used in Gu et al., Roy et al., where knowledge distillation is used. So knowledge distillation paper from Hinton et al., should be cited. Moreover, the authors have missed comparing their work to that of Roy et al. (https://arxiv.org/abs/1805.11063), which greatly improves on the work of Kaiser et al., and almost closes the gap between a non-autoregressive model and an autoregressive model (26.7 BLEU vs 27 BLEU on En-De) while being orders of magnitude faster. So it is not true that:\\n\\n-- \\\"While the NART models achieve significant speedup during inference (Gu et al., 2017), their accuracy\\nis considerably lower than their ART counterpart.\\\"\\n\\n-- \\\"Non-autoregressive translation (NART) models have suffered from low-quality translation results\\\"\", \"significance\": [\"The work introduces the idea of using hints for non-autoregressive machine translation. However, I have a technical concern: It seems that the authors complain that previous works like Kaiser et al, Roy et al, use sophisticated submodules to help the expressiveness of z and this was the cause for slowness. However, the way the authors define z seems to have some problems:\", \"z_j does not depend on z_1, ..., z_{j-1}, so where is the autoregressive dependencies being captured?\", \"z_1, z_2, ..., z_{T_y} depends only on the length of y, and does not depend on y in any other way. 
Given x, predicting z is trivial and I don't see why that should help the model f(y | z, x) at all?\", \"Given such a trivial z, one can just assume that your model is completely factorial i.e. P(y|x) = \\\\prod_{i} P(y_i|x) since the intermediate z has no information on the y's except it's length.\", \"This is quite suspicious to me, and it seems that if this works, then a completely factorial model should work as well if we only use the \\\"hints\\\" from the ART teacher model. This is a red flag to me, and I am finding this hard to believe.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HylzTiC5Km | GENERATING HIGH FIDELITY IMAGES WITH SUBSCALE PIXEL NETWORKS AND MULTIDIMENSIONAL UPSCALING | [
"Jacob Menick",
"Nal Kalchbrenner"
] | The unconditional generation of high fidelity images is a longstanding benchmark
for testing the performance of image decoders. Autoregressive image models
have been able to generate small images unconditionally, but the extension of
these methods to large images where fidelity can be more readily assessed has
remained an open problem. Among the major challenges are the capacity to encode
the vast previous context and the sheer difficulty of learning a distribution that
preserves both global semantic coherence and exactness of detail. To address the
former challenge, we propose the Subscale Pixel Network (SPN), a conditional
decoder architecture that generates an image as a sequence of image slices of equal
size. The SPN compactly captures image-wide spatial dependencies and requires a
fraction of the memory and the computation. To address the latter challenge, we
propose to use multidimensional upscaling to grow an image in both size and depth
via intermediate stages corresponding to distinct SPNs. We evaluate SPNs on the
unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32
to 128. We achieve state-of-the-art likelihood results in multiple settings, set up
new benchmark results in previously unexplored settings and are able to generate
very high fidelity large scale samples on the basis of both datasets. | [
"high fidelity images",
"size",
"subscale pixel networks",
"multidimensional upscaling",
"unconditional generation",
"able",
"spn",
"image",
"spns",
"multidimensional"
] | https://openreview.net/pdf?id=HylzTiC5Km | https://openreview.net/forum?id=HylzTiC5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkx8AurWeN",
"S1l8yJLoyV",
"rJgX0gv814",
"SJeStxgqRm",
"H1llqKX56m",
"r1xv8KGcp7",
"H1gOyezcaX",
"H1lNzuZ9a7",
"Bkgl4P-cpm",
"B1eI3IJ5pQ",
"rkeURDYKpQ",
"HkxN569T2X",
"S1gjDP-927",
"HJgrrcCO2X",
"Bke6VkQAY7",
"SJgqbZTaYQ"
],
"note_type": [
"comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544800461896,
1544408797880,
1544085707436,
1543270524957,
1542236551959,
1542232399149,
1542229984069,
1542227979628,
1542227752180,
1542219437588,
1542195150207,
1541414283569,
1541179235040,
1541102140526,
1538301749004,
1538277633936
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper780/Area_Chair1"
],
[
"~Kun_Xu1"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"ICLR.cc/2019/Conference/Paper780/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"ICLR.cc/2019/Conference/Paper780/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper780/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper780/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper780/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper780/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"https://arxiv.org/abs/1109.4389 seems to be another relevant reference for AR models using multiple scales\", \"title\": \"Related work\"}",
"{\"metareview\": \"All reviewers recommend acceptance, with two reviewers in agreement that the results represent a significant advance for autoregressive generative models. The AC concurs.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"metareview: significant progress on autoregressive models for image generation\"}",
"{\"comment\": \"Dear authors:\\n\\nThank you for your really interesting and impressive ideas. The idea is really amazing and experimental results are sound. Generating 256x256 imagenet images in the auto-regressive manner is really difficult and your paper gives a really solid solution.\\n\\nHowever, about this paper, I have a concern about the depth-upscaling part. In your experimental results, the bits/dim of SPN and SPN+ depth-upscaling is of no difference for most datasets, and sometimes the SPN+depth-upscaling even performs poorly compared to simply SPN. However, with the depth-upscaling, the sampling time is doubled: every dimension should be sampled twice compared to SPN. Can you give more explanations on the benefits of depth-upscaling? Do we really need it given the really impressive results of SPN?\\n\\nAnyway, this is a really solid paper and congratulations.\", \"title\": \"A concern about depth-upscaling\"}",
"{\"title\": \"New revision is now uploaded\", \"comment\": [\"To our reviewers,\", \"Please find our latest revision uploaded. We believe it addresses most of the comments including:\", \"Detailing the number of parameters for each architecture (Table 4)\", \"Clarifying exactly how the slice embedder conditions the decoder\", \"Clarifying depth-upscaling with SPN\", \"Including information about the use of TPU (Appendix C)\", \"Supplying details about the nature of temperature adjustments during sampling\", \"Adding references\", \"Various writing improvements\", \"We are also currently running our setup for the 64x64 and 256x256 samples with the intent to include them shortly in our revision.\", \"We kindly thank our reviewers for their insightful comments which have substantially improved the exposition.\"]}",
"{\"title\": \"thanks for the clarification!\", \"comment\": \"Thanks for the clarification. Agreed a separate release isn't needed. It wasn't clear if you were using the same split as Reed et al doesn't talk about it.\"}",
"{\"title\": \"re: More details on the 128x128 and 256x256 Imagenet benchmarks\", \"comment\": \"Thanks for your thoroughness, AnonReviewer2.\\n\\nThe ImageNet dataset we use for 128x128 and 256x256 generation is the standard ILSVRC [1] benchmark used by classification models. We report final numbers on the official validation set consisting of 50k examples. We hold out 10k examples from the official training set for cross-validation, and train on the remaining 1271167 points.\\n\\nI don\\u2019t believe a separate release is necessary here as the data is freely available [2] and our downsampling scheme is easily reproducible (simply tf.resize_area). \\n\\nWe can be more explicit about this split in the experiment section.\\n\\n[1] - Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.\\n[2] - http://www.image-net.org/challenges/LSVRC/2014/\"}",
"{\"title\": \"reply to AnonReviewer3\", \"comment\": \"Thank you for your comments.\\n\\n--\\n- The authors claim that the proposed approach is more memory efficient than other methods. However, I wonder how many parameters the proposed approach requires comparing to others. It would be highly beneficial to have an additional column in Table 1 that would contain number of parameters for each model.\\n--\\n\\nAs discussed with AnonReviewer2, we will include a table with the number of parameters for each model. Briefly, the models in the paper have between ~50M params and ~650M params in the most extreme case of full multidimensional upscaling on ImageNet 128.\\n\\nIn the case of 256x256 CelebA-HQ, we use a total of ~100M parameters to produce the depth-upscaled 8bit samples in Figure 5 and ~50M parameters to produce the 5bit samples in Figure 7. Compare this to Glow [1], whose blog post [2] indicates that up to 200M parameters are used for 5bit Celeb-A. Thus we have a ~4x reduction in the number of parameters vs Glow, with decisively improved likelihoods (see Table 3); I think this should address your concern about parameter-efficiency. We also note that autoregressive (and other) models are highly compressible at little to no loss (see e.g. [3]), which makes the absolute number of parameters only an initial, rough measure of parameter efficiency.\\n\\n--\\n- All samples are take either at an extremely high temperature (i.e., 0.99) or at the temperature equal 1. How do the samples look for smaller temperatures? Sampling at very high temperature is a nice trick for generating nicely looking images, however, it could hide typical problems of generative models (e.g., see Rezende & Viola, \\u201cTaming VAEs\\u201d, 2018).\\n--\\n\\nI believe there is a misunderstanding here. What we call temperature is a division on the logits of the softmax output distribution. 
Temperature 1.0 in our case means that the distribution of the trained model is used exactly as predicted by the model, with no adjustments or tweaks during sampling time. *Reducing* the temperature (less than 1.0) is what can hide problems, because it artificially reduces the entropy in the distribution parameterized by the model during sampling time. \\n\\nAs we sample at temperatures of 0.95, 0.99, and 1.0 in the paper, we respectively *slightly*, *barely*, and *do-not-at-all* reduce the entropy in the model's distribution. Thus this concern does not apply and we are actually being comparatively transparent about our model\\u2019s samples (note that Glow shows its best samples at temperature 0.7, but that \\u201ctemperature\\u201d has a different operational meaning in that case).\\n\\n[1] - Kingma et al. https://arxiv.org/abs/1807.03039\\n[2] - https://blog.openai.com/glow/\\n[3] - Kalchbrenner et al. https://arxiv.org/abs/1802.08435\"}",
"{\"title\": \"Reply to AnonReviewer2 (2/2)\", \"comment\": \"--\\n4. Can you clarify how you condition the self-attention + Gated PixelCNN block on the previous slice embedding you get out of the above convnet? There are two embeddings passed in if I understand correctly: (1) All previous slices, (2) Tiled meta-position of current slice. It is not clear to me how the conditioning is done for the transformer pixelcnn on this auxiliary embedding. The way you condition matters a lot for good performance, so it would be helpful for people to replicate your results if you provide all details. \\n--\\n\\nThe output of the slice embedder -- that receives as input all previous slices and the tiled meta-position of the current slice -- is concatenated channel-wise with the 2D-reshaped output of the masked 1D transformer (which in turn receives as input only the current target slice). The resulting concatenated tensor conditions the PixelCNN decoder like $s$ in equation (5) of the Conditional PixelCNN paper [2]. I.e. the tensor $s$ maps, via 1x1 convolutions, to units which bias the masked convolution output for each layer in PixelCNN. The number of hidden units in this pathway is what is referred to as \\\"decoder residual channels\\\" in Appendix B. We will add this description to Section 3.2\\n\\n--\\n5. I also don't understand the depth upscaling architecture completely. Could you provide a diagram clarifying how the conditioning is done there given that you have access to all pixels' salient bits now and not just meta-positions prior to this slice? \\n--\\n\\nThe SPN which models the low-bit-depth image is identical to the exposition in section 3.2, except that the data it operates on has only 3 bits of depth. As we mention in section 3.4, the depth-upscaling SPN achieves its conditioning by concatenating (again channelwise) the full low-bit-depth image, organised into its constituent slices, to the rest of the slice-embedder's inputs. 
So no matter which target slice is being modelled (for the finest 5 bits), all slices of the 3bit data can be seen by the slice embedder when it produces context for the fine bits of a target slice. We will further clarify it and see how to add a diagram for this.\\n\\n--\\n6. It is really cool that you don't lose out in bits/dim after depth upscaling that much. If you take Grayscale PixelCNN (pointed out in the anonymous comment), the bits/dim isn't as good as PixelCNN though samples are more structured. There is 0.04 b.p.d difference in 256x256, but no difference in 128x128. Would be nice to explain this when you add the citation.\\n--\\n\\nThanks for the observation, we will note this. The ordering in the SPN, considering both subscaling and upscaling, is indeed quite different from the vanilla ordering and it\\u2019s nice to see that the NLL values are negligibly affected.\\n\\n--\\n7. The architecture in the Appendix can be improved. It is hard to understand the notations. What are residual channels, attention channels, attention ffn layer, \\\"parameter attention\\\", conv channels? \\n--\\n\\nThanks for bringing this to our attention. We will add the figures/explanations discussed and reference this hyperparameter table so that it's all clear. The attention parameters listed are configurable hyperparameters of the open source Transformer implementation in tensor2tensor [3] on github.\\n\\n\\n[1] - Kalchbrenner et al. https://arxiv.org/abs/1802.08435\\n[2] - Oord et al. https://arxiv.org/abs/1606.05328 \\n[3] - https://github.com/tensorflow/tensor2tensor\\n\\nAnd we'll fix that typo too.\"}",
"{\"title\": \"reply to AnonReviewer2 (1/2)\", \"comment\": \"Thanks for your thorough review. Addressing your comments will improve the paper.\\n\\n--\\n-1. Can you point out the total number of parameters in the models?\\n--\\n\\nDepending on the dataset, each SPN network has between ~50M (CelebA) and ~250M parameters (ImageNet 128/256). ImageNet64 uses ~150M weights. With depth upscaling, two separate SPNs with non-shared weights model P(3bit) and P(rest given 3bit) respectively, doubling the number of parameters. With explicit size upscaling for ImageNet128, there is a third network (decoder-only) with ~150M parameters which generates the first 3 bits of the first slice. So the maximal number of parameters used to generate a sample in the paper is full multidimensional upscaling on ImageNet 128, where the total parameter count reaches ~650M. We will include the number of parameters for each model in the table, as requested.\\n\\n--\\n-1. Also would be good to know what hardware accelerators were used. The batch sizes mentioned in the Appendix (2048 for 256x256 Imagenet) are too big and needs TPUs? If TPU pods, which version (how many cores)?\\n--\\n\\nTo reach batch size 2048 we used 256 TPUv3 cores. We will clarify this in the paper.\\n\\n--\\n0. I would really like to know the sampling times. The model still generates the image pixel by pixel. Would be good to have a number for future papers to reference this.\\n--\\n \\nOur current implementation performs only naive sampling, where the outputs of the decoder are recomputed for all positions in a slice to generate each sample. This is convenient, but time-consuming and allows to only rarely inspect the samples coming from our model. 
The techniques for speeding-up AR inference - such as caching of states, low-level custom implementation, sparsification and multi-output generation [1] - are equally applicable to SPNs and would make sampling reasonably fast; on the order of a handful of seconds for a 256 x 256 x 3 image.\\n\\n--\\n1. Any reason why 256x256 Imagenet samples are not included in the paper? Given that you did show 256x256 CelebA samples, sampling time can't be an issue for you to not show Imagenet 256x256. So, it would be nice to include them. I don't think any paper so far has shown good 256x256 unconditional samples. So showing this will make the paper even stronger.\\n--\\n\\nThanks! We\\u2019ll aim to adding 64 x 64 and 256 x 256 samples in our revision.\\n\\n--\\n2. Until now I have seen no good 64x64 Imagenet samples from a density model. PixelRNN samples are funky (colorful but no global structure). So I am curious if this model can get that. It may be the case that it doesn't, given that subscale ordering didn't really help on 32x32. It would be nice to see both 5-bit and 8-bit, and for 8-bit, both the versions: with and without depth upscaling.\\n--\\n\\nThe 64x64 samples look much better with SPNs. We will aim at including some of the variants that you ask for in our revision.\\n\\n--\\n3. I didn't quite understand the architecture in slice encoding (Sec 3.2). Especially the part about using a residual block convnet to encode the previous slices with padding, and to preserve relative meta-position of the slices. The part I get is that you concatenate the 32x32 slices along the channel dimension, with padded slices. I also get that padding is necessary to have the same channel dimension for any intermediate slice. Not sure if I see the whole point of preserving ordering. Isn't it just normal padding -> space to depth in a structured block-wise fashion? 
\\n--\\n\\nIt\\u2019s like a meta-convolution: the relative ordering ensures that slices are embedded with weights that depend on the relative 2d distance to the slice that is being generated. Suppose we are predicting the target slice at meta-position (i,j), so that previous slices in the 2d ordering are presented to the slice embedder. For any previous slice (m,n), the weights applied to it are a function of the offset (i-m,j-n), as opposed to their absolute positions (m,n). We will add this clarification to the paper.\"}",
"{\"title\": \"reply to AnonReviewer1\", \"comment\": \"Thank you for the detailed feedback. In the next revision, we will make height, width, and channel indices in equation 1 explicit and make a thorough sweep over the rest of the equations to check for any other undefined parameters.\\n\\nWe will ensure that all figures are referenced, and in the correct order.\"}",
"{\"title\": \"More details on the 128x128 and 256x256 Imagenet benchmarks\", \"comment\": \"Request the authors to provide details for the train/val split for Imagenet 128x128 and Imagenet 256x256 density estimation benchmarks. I wasn't able to find the details in Reed et al (https://arxiv.org/pdf/1703.03664.pdf). Would be ideal if the authors released the splits as done for 32x32 and 64x64 from PixelRNN in http://image-net.org/small/download.php to encourage and be useful for more people to push on this benchmark.\"}",
"{\"title\": \"Sound incremental advance\", \"review\": \"Authors propose a decoder arquitecture model named Subscale Pixel Network. It is meant to generate overall images as image slice sequences with memory and computation economy by using a Multidimensional Upscaling method.\\nThe paper is fairly well written and structured, and it seems technically sound.\\nExperiments are convincing.\", \"some_minor_issues\": \"Figure 2 is not referenced anywhere in the main text.\\nFigure 5 is referenced in the main text after figure 6.\\nEven if intuitively understandable, all parameters in equations should be explicitly described (e.g., h,w,H,W in eq.1)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Solid paper, excellent execution, very important advance in density modeling\", \"review\": \"Summary:\\nThis paper addresses an important problem in density estimation which is to scale the generation to high fidelity images. Till now, there have been no good density modeling results on large images when taken into account large datasets like Imagenet (there have been encouraging results like with Glow, but on 5-bit color intensities and simpler datasets like CelebA). This paper is the first to successfully show convincing Imagenet samples with 128x128 resolution for a likelihood density model, which is hard even for a GAN (only one GAN paper (SAGAN) prior to this conference has managed to show unconditional 128x128 Imagenet samples). The ideas in this paper to pick an ordering scheme at subsampled slices uniformly interleaved in the image and condition slice generation in an autoregressive way is very likely to be adopted/adapted to more high fidelity density modeling like videos. Another important idea in this paper is to do depth upscaling, focusing on salient color intensity bits first (first 3 bits per color channel) before generating the remaining bits. The color intensity dependency structure is also neat: The non-salient bits per channel are conditioned on all previously generated color bits (for all spatial locations). Overall, I think this paper is a huge advance in density modeling, deserves an oral presentation and deserves as much credit as BigGAN, probably more, given that it is doing unconditional generation.\", \"details\": \"\", \"major\": \"-1. Can you point out the total number of parameters in the models? Also would be good to know what hardware accelerators were used. The batch sizes mentioned in the Appendix (2048 for 256x256 Imagenet) are too big and needs TPUs? If TPU pods, which version (how many cores)? If not, I am curious to know how many GPUs were used.\\n0. I would really like to know the sampling times. 
The model still generates the image pixel by pixel. Would be good to have a number for future papers to reference this.\\n1. Any reason why 256x256 Imagenet samples are not included in the paper? Given that you did show 256x256 CelebA samples, sampling time can't be an issue for you to not show Imagenet 256x256. So, it would be nice to include them. I don't think any paper so far has shown good 256x256 unconditional samples. So showing this will make the paper even stronger.\\n2. Until now I have seen no good 64x64 Imagenet samples from a density model. PixelRNN samples are funky (colorful but no global structure). So I am curious if this model can get that. It may be the case that it doesn't, given that subscale ordering didn't really help on 32x32. It would be nice to see both 5-bit and 8-bit, and for 8-bit, both the versions: with and without depth upscaling.\\n3. I didn't quite understand the architecture in slice encoding (Sec 3.2). Especially the part about using a residual block convnet to encode the previous slices with padding, and to preserve relative meta-position of the slices. The part I get is that you concatenate the 32x32 slices along the channel dimension, with padded slices. I also get that padding is necessary to have the same channel dimension for any intermediate slice. Not sure if I see the whole point of preserving ordering. Isn't it just normal padding -> space to depth in a structured block-wise fashion? \\n4. Can you clarify how you condition the self-attention + Gated PixelCNN block on the previous slice embedding you get out of the above convnet? There are two embeddings passed in if I understand correctly: (1) All previous slices, (2) Tiled meta-position of current slice. It is not clear to me how the conditioning is done for the transformer pixelcnn on this auxiliary embedding. The way you condition matters a lot for good performance, so it would be helpful for people to replicate your results if you provide all details. \\n5. 
I also don't understand the depth upscaling architecture completely. Could you provide a diagram clarifying how the conditioning is done there given that you have access to all pixels' salient bits now and not just meta-positions prior to this slice? \\n6. It is really cool that you don't lose out in bits/dim after depth upscaling that much. If you take Grayscale PixelCNN (pointed out in the anonymous comment), the bits/dim isn't as good as PixelCNN though samples are more structured. There is 0.04 b.p.d difference in 256x256, but no difference in 128x128. Would be nice to explain this when you add the citation.\\n7. The architecture in the Appendix can be improved. It is hard to understand the notations. What are residual channels, attention channels, attention ffn layer, \\\"parameter attention\\\", conv channels?\", \"minor\": \"\", \"typo\": \"unpredented --> unprecedented\", \"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A new version of PixelCNN-based model for HQ images\", \"review\": \"General:\\nThe paper tackles a problem of learning long-range dependencies in images in order to obtain high fidelity images. The authors propose to use a specific architecture that utilizes three main components: (i) a decoder for sliced small images, (ii) a size-upscaling decoder for large image generation, (iii) a depth-upscaling decoder for generating high-res image. The main idea of the approach is slicing a high-res original image and a new factorization of the joint distribution over pixels. In this model various well-known blocks are used like 1D Transformer and Gated PixelCNN. The obtained results are impressive, the generated images are large and contain realistic details.\\n\\nIn my opinion the paper would be interesting for the ICLR audience.\", \"pros\": [\"The paper is very technical but well-written.\", \"The obtained results constitute new state-of-the-art on HQ image datasets.\", \"Modeling long-range dependencies among pixels is definitely one of the most important topics in image modeling. The proposed approach is a very interesting step towards this direction.\"], \"cons\": \"- The authors claim that the proposed approach is more memory efficient than other methods. However, I wonder how many parameters the proposed approach requires comparing to others. It would be highly beneficial to have an additional column in Table 1 that would contain number of parameters for each model.\\n- All samples are take either at an extremely high temperature (i.e., 0.99) or at the temperature equal 1. How do the samples look for smaller temperatures? Sampling at very high temperature is a nice trick for generating nicely looking images, however, it could hide typical problems of generative models (e.g., see Rezende & Viola, \\u201cTaming VAEs\\u201d, 2018).\\n\\n--REVISION--\\nI would like to thank the authors for their response. 
I highly appreciate their clear explanation of both issues raised by me. I am especially thankful for the second point (about the temperature) because indeed I interpreted it as in the GLOW paper. Since both my concerns have been answered, I decided to raise the final score (+2).\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Thanks for the reference. Low resolution (small height/width) modeling goes back to Multi-Scale PixelRNN.\", \"comment\": \"Thanks for the reference - we will add the citation in the context of depth upscaling. Size upscaling in AR models goes back to at least the PixelRNN paper (van den Oord et al, 2016, see Multi-Scale section).\", \"some_differences\": [\"Depth upscaling here is done by taking the most significant bits of each channel separately, as opposed to globally across the three channels as in Grayscale PixelCNN.\", \"Multidimensional Upscaling used here combines both size and depth upscaling.\"]}",
"{\"comment\": \"https://arxiv.org/pdf/1612.08185 also proposed both low-resolution and sub-pixel color modelling.\", \"title\": \"Great results but very relevant work discussion missing?\"}"
]
} |
|
HklbTjRcKX | What Information Does a ResNet Compress? | [
"Luke Nicholas Darlow",
"Amos Storkey"
] | The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective. However, this claim was established on toy data. The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model. We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for (1) classification and (2) autoencoding. We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder. Sampling images by conditioning on hidden layers’ activations offers an intuitive visualisation to understand what a ResNet learns to forget. | [
"Deep Learning",
"Information Bottleneck",
"Residual Neural Networks",
"Information Theory"
] | https://openreview.net/pdf?id=HklbTjRcKX | https://openreview.net/forum?id=HklbTjRcKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1eiDZSyxV",
"BkxB8c0nRm",
"S1eNRjitAQ",
"H1ezF-oF0m",
"S1lQxp4VA7",
"HklHZqy4Am",
"BJgDQIV70Q",
"BkltogsfRm",
"SkxKtVeGCm",
"H1lPaVml07",
"BkxPuVQxA7",
"rylPxV7xCX",
"rye2iGUs3m",
"SJx8hPAq2m",
"HJlXGqI53Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544667491295,
1543461452656,
1543252940068,
1543250297850,
1542896875063,
1542875645482,
1542829599254,
1542791329030,
1542747264683,
1542628543009,
1542628462560,
1542628335204,
1541264036218,
1541232557824,
1541200395246
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper779/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper779/Authors"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper779/Authors"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper779/Authors"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer3"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper779/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper explores an approach to testing the information bottleneck hypothesis of deep learning, specifically the idea that layers in a deep model successively discard information about the input which is irrelevant to the task being performed by the model, in full-scale ResNet models that are too large to admit the more standard binning-based estimators used in other work. Instead, to lower-bound I(x;h), the authors propose using the log-likelihood of a generative model (PixelCNN++). They also attempt to visualize what sort of information is lost and what is retained by examining PixelCNN++ reconstructions from the hidden representation at different positions in a ResNet trained to perform image classification on the CINIC-10 task. To lower-bound I(y;h), they perform classification. In the experiments, the evolution of the bounds on I(x;h) and I(y;h) is tracked as a function of training epoch, and visualizations (reconstructions of the input) are shown to support the argument that color-invariance and diversity of samples increases during the compression phase of training. These tests are done on models trained to perform either image classification or autoencoding. This paper enjoyed a good discussion between the reviewers and the authors. The reviewers liked the quantitative analysis of \\\"usable information\\\" using PixelCNN++, though R2 wanted additional experiments to better quantify the limitations of the PixelCNN++ model to provide the reader with a better understanding of the plots in Fig. 3, as well as more points sampled during training. Both R2 and R3 had reservations about the qualitative analysis based on the visualizations, which constitute the bulk of the paper. Unfortunately, the PixelCNN++ training is computationally intensive enough that these requests could not be fulfilled during the ICLR discussion phase. While the AC recommends that this submission be rejected from ICLR, this is a promising line of research. 
The authors should address the constructive suggestions of R2 and R3 and submit this work elsewhere.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Neat approach, but more validation is needed\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"Using a powerful model (like PixelCNN++) does not solve the tightness issue. Since the results of the autoencoder experiments could be explained in different ways and have inherent flaws (for example the loss), I suggest removing this part from this work.\\n\\nAfter reading the response, I will keep my rating.\"}",
"{\"title\": \"PixelCNN++ training curves and an unconditional PixelCNN++\", \"comment\": \"Thank you very much for your response.\\n\\nWe have updated the script to include training curves in the appendix, as you requested. \\n\\nWe have also included training curves and samples for an unconditional PixelCNN++ trained on the encoder split of CINIC-10, in order to give better context to this work.\\n\\nYou are correct in assuming that we have kept the hyper parameters, including architecture, etc. exactly the same as the original PixelCNN++ work. \\n\\nWe hope that this clarifies things even more.\"}",
"{\"title\": \"Response to response to training complexity and MI curves\", \"comment\": \"Thank you for your answer. \\n\\n > [We used CINIC-10 because we could split it three ways to avoid any data-contamination between training the models under scrutiny, training the decoder models, and evaluation of MI. Even after two weeks of training (which equates to 250 epochs of training on the 'validation' subset of CINIC-10) the PixelCNN++ models, convergence was still not reached and the measured losses were still decreasing. We chose to stop there even though the estimation could be better. That all said, given this dataset, 24 hours of training would equate to less than 20 epochs.]\\n\\n Yes indeed, I wasn't thinking about the different dataset anymore, and I agree that this will incur longer training time. Given that the resolution is the same (32x32), and that the training split is of size 90k VS 50k for cifar10, it seems that the data regimes do not differ wildly. Seeing the images in the paper that introduces CINIC-10, it appears that images coming from Imagenet do have more variability, even once cropped, so this can also account for slower convergence. Just for the sake of certainty, I would like to know if the convolutional structure (number of layers, units per layer, downsampling...) has been kept the same as in pixelCNN++. I suppose that is what you mean by <<The hyper-parameters for the PixelCNN++ decoder models were set according to the original paper>>. If the architecture is changed, these details could be considered as an appendix in the paper. Since the convergence regime is key here, I think it would also help if you could provide training and test curves for the different pixCNN++ in the appendix, with likelihood plotted against the number of epochs, to assess the quality of convergence.\\n\\n\\n> [Regarding the question of figure 3a. 
Consider that retraining h_j (j > i) to estimate h_i, at a fixed point in the original training time (e.g., at some epoch) involves training those layers where j > i to convergence. h_j at that specific time will not necessarily be converged and when you estimate MI(y, h_j) it will have that layer as fixed/frozen. You are likely correct that h_j will throw the same information away when retraining as it did AT THE END of the original training run. \\n\\nThe important nuance here is that when comparing MI(y, h_i) and MI(y, h_j) at some point before convergence (of the original ResNet classifier), the h_j is fixed to be at a sub-optimal point compared to where it eventually converges later. Therefore, when decoding to estimate MI(y, h_i), h_j is allowed to train and reaches convergence - a state that is different from when it itself is used to measure MI(y, h_j).]\\n\\nThank you for this clarification.\"}",
"{\"title\": \"Response to training complexity and MI curves\", \"comment\": \"Regarding the training complexity and the results you listed on PixelCNN++. This was for the CIFAR-10 dataset, but we're using the CINIC-10 dataset. The difference in size and challenges of this dataset means that those times are mostly irrelevant. We used CINIC-10 because we could split it three ways to avoid any data-contamination between training the models under scrutiny, training the decoder models, and evaluation of MI. Even after two weeks of training the PixelCNN++ models (which equates to 250 epochs of training on the 'validation' subset of CINIC-10), convergence was still not reached and the measured losses were still decreasing. We chose to stop there even though the estimation could be better. That all said, given this dataset, 24 hours of training would equate to less than 20 epochs.\\n\\nMore points on the curves would always be better, but the cost is simply too high for that. Our intention with this research was to apply MI tracking to this realistic scenario and in so doing add to the ongoing discussion on the information bottleneck.\\n\\nRegarding the question of figure 3a. Consider that retraining h_j (j > i) to estimate h_i, at a fixed point in the original training time (e.g., at some epoch) involves training those layers where j > i to convergence. h_j at that specific time will not necessarily be converged and when you estimate MI(y, h_j) it will have that layer as fixed/frozen. You are likely correct that h_j will throw the same information away when retraining as it did AT THE END of the original training run. \\n\\nThe important nuance here is that when comparing MI(y, h_i) and MI(y, h_j) at some point before convergence (of the original ResNet classifier), the h_j is fixed to be at a sub-optimal point compared to where it eventually converges later. 
Therefore, when decoding to estimate MI(y, h_i), h_j is allowed to train and reaches convergence - a state that is different from when it itself is used to measure MI(y, h_j).\"}",
"{\"title\": \"Training complexity and MI curves\", \"comment\": \"Hi, thank you for your answers above.\\nLet me come back to a few points below.\\n\\nI do appreciate the computational burden of getting these results. In my experience, using the code provided by the authors, pixCNN++ reaches 2.98 bpd in roughly 12 hours on a gtx1080ti, then reaches 2.96 in 24 hours, and converges to 2.95 bpd in 5 days. It requires multi-gpu setups (for increased batch size) and 5 days to reach 2.92. Assuming the same tendency is observed with your inputs, training all models with a fixed budget of 24 hours would seem to be enough to conclude. This is still heavy, but an order of magnitude more manageable than what you describe. I stand by my initial opinion that more points are required for the results to be convincing.\\n\\n\\n> Regarding the reason why earlier layers have higher I(y, h) than later layers before convergence. This is because of the data processing inequality [...]\\n\\nThe fact that the curves are ordered as predicted by the data processing inequality is very clear. My question regarded the way this estimation is obtained, but was unclear and I will try to reformulate it here.\\nSuppose that one wants to estimate I(y, h_i) and I(y, h_j) with j>i. Given that I(y,h_i) is estimated by freezing the weights for layers {h_k}_{k<=i} and retraining further layers, intuition could suggest \\nthat the layers between h_i and h_j will throw away exactly the same amount of information when retraining as they did when training in the first place, such that h_j in the first network and h_j in the \\nnetwork retrained to estimate I(y, h_i) will contain the same amount of information. And indeed, as training progresses, all MI estimates seem to converge to very close values (last point of figure 3a), such that the data\\nprocessing inequality almost becomes an equality. Could you explain the intuition of why one should expect the retrained network to not have this behaviour? 
\\nIf this reasoning is not flawed, it raises another question. The fact that all curves converge to the same value could be due to the fact that the network learns to pass all the relevant information along, as you concluded.\\nBut it could also be due to the fact that the method used to estimate it throws away the same amount of information as the first network, irrespective of the starting point. I would like this to be clarified.\\nIt should also be noted that I am not concerned by the reverse estimation (I(x, h_i)) as in this direction a model of identical capacity is retrained from scratch for all layers.\"}",
"{\"title\": \"Thank you for the discussion\", \"comment\": \"It is hard for me to draw a (subjective) conclusion from the qualitative samples, which is a significant downside of the submission. I wish the authors would have come up with a better experimental setup for this to allow clearer statements, I do realize that this might be hard to do though.\\nOn the other hand, I do find the quantitative analysis based on the Pixel-CNN likelihoods valuable and the observed compression as measured by the generative model is interesting and seems relatively consistent. However, a significant downside is the infeasibility of the method when it comes to more fine-grained ablation studies.\\n\\nAll in all, I will keep my rating as is, because it reflects the pros and cons I see in this paper.\"}",
"{\"title\": \"Claims\", \"comment\": \"Thank you for your response.\", \"the_specific_sentences_you_quote_there_still_hold\": \"this manner of visually inspecting a modern network's processing (a ResNet, here) does offer a unique and intuitive insight into the learning process. Those sentences suggest what can be and is done by using this information-theoretic approach to analysis, but leave much of the analysis to the skills and interpretation of the reader. That is the nature of such a visualisation.\", \"regarding_the_math_font_used\": \"we use serif font for random variables/vectors to distinguish quantities such as h (hidden representation) and H (information entropy). This is not an uncommon font usage and we have been consistent throughout the script.\"}",
"{\"title\": \"Thank you for your detailed answer\", \"comment\": \"You seem to agree that one of your major claims is not as clear-cut as your wording in the manuscript suggests and that the interpretation of the samples is rather subjective. However, you still make relatively large claims about them, e.g.:\\n\\n\\\"Sampling images by conditioning on hidden layers\\u2019 activations offers an intuitive visualization to understand what a ResNets learns to forget.\\\"\\n\\n\\\"Analysis of PixelCNN++ samples conditioned on hidden layer activations to illustrate the type of information that ResNet classifiers learn to compress. This is done via the visual demonstration of the sorts of invariances that a ResNet learns.\\\"\\n\\n\\\"...they give insight into what image invariances a ResNet learns...\\\"\\n\\netc.\\n\\nThese claims should be softened and it should be stated, especially in the abstract, that the conclusions drawn here are rather subjective or difficult to interpret, at least the way they are presented right now.\\n\\n--------\", \"minor\": [\"x and y in equation (1) still look strange\"]}",
"{\"comment\": \"Thank you for your review. Minor comments have been accounted for in the amended script. This will be uploaded as a revision shortly. Major comments will be dealt with here.\\n\\nRegarding missing ablation studies. It takes approximately two weeks to train a single PixelCNN++ model for a single data point in Figures 3c and d. We intentionally chose a PixelCNN++ model and the CINIC-10 dataset to ensure the analysis we undertook was as thorough as possible, and that the bounds on the MI were as tight as possible, given current state of the art research and computational feasibilities for this estimation. Ablation studies are simply computationally infeasible for us. \\n\\nRegarding the comments on the generated samples. It seems that what is missing from our discussion is the caveat that at convergence the images most resemble the classes they are in. Take Figure 1 for example: you are likely correct that there is more variation at epoch 1 than at epoch 200, but that variation is largely noise related and not necessarily variation in the specific characteristics that would be more natural. At 200 epochs in this figure, consider that the background (both the ground and the sky) are smooth and arguably more realistic looking than at 1 epoch, yet they still vary from sample to sample. A similar argument can be made for the horse itself, in terms of both colour and somewhat in head shape. In the paper itself we said \\u201cWhen inspecting the samples of Figure 4 (b) and (f), we see that even though the information content is higher at network initialisation, the sampled images look like poor renditions of their classes.\\u201d\\n\\nWe understand that this is a rather subjective perspective on some generated samples, but we are trying to provide insight based on what is available in this analysis. 
Attempting to understand information compression in a ResNet on these images is a challenging task, hence we value your perspective and input here.\\n\\nRegarding the effect of weight decay. We will have to keep this for future work owing to computational constraints. That said, our stance was one where we adopted a modern, well-known, and widely used architecture and training scheme and analysed it as it stood. There are many changes that could be made - removing batch norm, varying the initialisation scheme, weight decay, optimiser preferences, etc. - but these create a scenario where the number of experiments to run becomes combinatorial. Since each PixelCNN++ model takes two weeks to train on a Titan 1080 GPU, compromises were made.\", \"title\": \"Response to AnonReviewer3\"}",
"{\"comment\": \"Thank you for your review. Amendments and alterations have been made to the paper to account for minor changes. This will be uploaded as a revision shortly. Regarding the major points of the review, we will address these here.\\n\\nIt is unclear from this review what about the results is specious and hard to explain. \\n\\nRegarding the tightness of the bound. There is no way of theoretically knowing how good the bound is. Another perspective to take is that of USABLE information and the analysis thereof, which is arguably in line with the perspective of Shwartz-Ziv & Tishby. That is, just because information is present, does not mean it is useful or even accessible. As an example of this, consider the process of hashing a password: the information is retained and recoverable but useless without the correct access. A PixelCNN++ is state of the art at extracting usable information as an explicit image distribution estimator. It is unlikely that any other recent models would do better. Therefore, we chose to trade computation for as tight a bound as we could. In fact, it takes approximately two weeks to train a single PixelCNN++ model (which yields a single data point in Figure 3c and d) to interpret the mutual information here. \\n\\nRegarding the autoencoder. We agree that these results can seem counter-intuitive. However, there are a number of possible explanations for these:\\n1. The autoencoder is learning a representation at the bottleneck for which it is easier for a decoder to learn even at the cost of reducing MI (we mentioned this in the paper already).\\n2. The mean-squared-error loss criterion is not well suited to preserving information. This issue is similar to the \\u2018mode averaging\\u2019 problem that causes blurry reconstructions in an autoencoder trained with this loss. Essentially, the target and the reconstruction are different owing to an imperfect loss function and therefore information is discarded.\\n3. 
Some global components are easier to reconstruct than local features. This is nearly the same as the previous point, but since the MI computation accounts for every pixel in the reconstruction, unless sharp details are being modeled by the autoencoder (which they\\u2019re not), information will be lost.\\n\\nRegarding issues with the connection of this paper with Nash et al.: these were undertaken concurrently and should be considered as such.\", \"title\": \"Response to AnonReviewer1\"}",
"{\"comment\": \"Thank you for your review. Amendments and additions have been made throughout the paper to account for the minor points and suggestions. This will be uploaded as a revision shortly. We will now discuss the major points.\\n\\nFirst, we would like to make apparent the computational burden of in depth analysis in this scenario: each PixelCNN++ model takes approximately two weeks to train on a single Titan 1080ti GPU. This, for example, is the reason there are slightly more data points for h3 in Figure 3c - we determined that this layer was likely to yield interesting results since it is the penultimate layer of the ResNet.\\n\\nRegarding the quality of the bound on I(x, h). There is no way of theoretically knowing how good the bound is. However, PixelCNN++ is state of the art for extracting useful information in this scenario. \\n\\nRegarding the evolution of information across iterations for a fixed layer. First, consider Figure 1:\\n- at very early stages of training (epochs 0 and 1) the generated samples are noisy and less recognisable as horses when compared to later stages.\\n- compare the samples at 10 epochs (roughly the peak of \\u2018fitting\\u2019) to those at 200 epochs (end of compression). The background (ground and sky) is less varied earlier on; the colour of the horse is less varied earlier on (notwithstanding noise such as the first row of d); and the positioning of the horses head is less varied earlier on.\\n\\nUnfortunately these changes are difficult to see unless your PDF viewer does not interpolate pixels and you can zoom sufficiently. It would be ideal to run these experiments on higher resolution images, but simply not computationally feasible at present.\\nThe same sentiments are true for Figure 4 (comparing column (d) and (f), most notably), but owing to the already limited information after average pooling for Figure 5, it is difficult to interpret in the same fashion. 
Nonetheless, the quality of the images is notably different between early and late stage training. \\n\\nRegarding the suggestion of more focus on the evolution of information across layers for a fixed iteration. This is largely model specific based on the capacity of each layer and an expected outcome given the structure of this ResNet architecture. Focusing on this sort of information evolution is not the direction of this paper, particularly since it was written for comparison with Shwartz-Ziv & Tishby. Future work focusing on this suggested evolution can be undertaken. \\n\\nRegarding the spread of h2, h3, and h4 in the ResNet. This is more clear in Figure 6 in the appendix, but we have made it more obvious in the text, too. Specifically, h4 is the penultimate layer and is that layer which should exhibit the most compression. \\n\\nRegarding the autoencoder set-up. The decoder of the autoencoder was designed to be as close to an inversion of the encoder structure as possible. Therefore, the bottleneck is defined as the average pooling layer of the ResNet itself. We have made this more clear in the text. Figure 6 in the appendix is an architecture description. Any sort of ablation studies and further hyper-parameter adjustment requires training more PixelCNN++ models for analysis, which is computationally infeasible. Regarding generated images from the h layers for the autoencoder: these almost always look indistinguishable from the input images, so we kept these out for brevity. Finally, since the autoencoder set-up is not comparable to earlier research, we sought to keep these results brief. Nonetheless, the evidence that compression occurs in an autoencoder was an interesting finding and we chose to keep these results in the paper. \\n\\nRegarding the reason why earlier layers have higher I(y, h) than later layers before convergence. This is because of the data processing inequality. 
Specifically, information can only be discarded and never gained. Consider early stage training before convergence. None of the layers are doing a particularly good job of retaining information about y, iteratively throwing information away. Earlier layers will see representations that have a lower level of information degradation and, therefore, always have more information than later layers. As the network learns and approaches convergence, each layer becomes better and better at passing y-relevant information forward and less information gets discarded (they reach a very similar I(y, h) point at 200 epochs). Figure 3a is quite indicative of what you would expect layers to do regarding class-relevant information retention over training.\\n\\nRegarding the orange curve in Figure 3a. This curve is the original training curve of the network under scrutiny and is directly related to the log-likelihood of the model. We will make this clearer in the caption.\", \"title\": \"Response to AnonReviewer2\"}",
"{\"title\": \"Empirical evaluation of information retained across layers of classification ResNets using pixelCNN decoders.\", \"review\": \"* Summary: \\n\\nThis work is an empirical study of the relevance of the Information Bottleneck principle as a way of understanding deep-learning. It is carried out in the setting of realistically sized networks trained on natural images dataset. This is, in spirit, a meaningful and sensible contribution to the ongoing debate. Being a largely empirical contribution, its value hinges on the exhaustivity and clarity of the experiments carried out. As it stands, I believe that these should be, and can be, improved. Details to support this opinion are given below. A summary of my expectations is given at the end.\\n\\n\\n* Summary of the approach and significance:\\n\\nThe IB principle relies on estimating the mutual information between i) the input and an intermediate layer, I(x,h), and an intermediate layer and the output, I(h, y). Previous work has relied on binning strategies to estimate these quantities. This is not applicable in a real-sized problem such as classification of natural images with deep networks. This paper proposes to invert a first deep model using a second, generative, model which must reconstruct the input of the first given some intermediate layer. The information progressively discarded by the first network should be modelled as uncertainty by the second. This yields a lower bound on the mutual information, with a tightness that depends on the expressivity of the generative model.\", \"i_believe_the_goal_to_be_meaningful_and_a_valuable_contribution\": \"going forward, testing this assumption in realistic setting is essential to the debate. The proposed approach to do this seems sensible to me. 
It is similar to cited work by Nash et al., however both works are concurrent and so far unpublished and should be considered as complementary points of view on the same problem.\", \"partial_conclusion\": \"The description of the method contains relevant information and is functional, but the writing could be improved.\\n\\t\\n\\t\\n* Experimental results.\\n\\n> The contribution and novelty of this paper are largely empirical. Therefore the experimental results should be held to a high standard of clarity and exhaustivity.\\n\\n\\n*** The choice of dataset:\\nThe experimental setup seems to be fair in terms of dataset / split chosen: the abundance of data for the three steps (encoding, decoding, evaluation) is a notable strength.\\n\\n*** The quality of the lower bound: Uncertainty when reconstructing the image may come from the fact that information has been discarded. Variance may also come from the pixCNN++, which is imperfect. You mention this (paragraph 4.1) but do not take experimental steps to measure it. Please consider reporting the performance of your generative model i) without conditioning, ii) conditioned on one-hot ground truth labels, and optionally iii) on grayscale/downsampled versions of the image without otherwise modifying the training setup. These values will give the reader an idea of the significance of variations in MI measured and give a 'scale' to your figures, strengthening your claims.\\n\\n*** The evolution of compression *across iterations* for a fixed layer\\nI will focus on the classification setting for now.\", \"qualitatively\": \"Figures 1, 4 and 5 do not convince me that a meaningful evolution in the type of information discarded *across training iterations* can be observed visually. In figures 1 and 4, the network seems to learn invariances to some coloring, and its preferred colours vary across iterations. 
Beyond that I cannot see much, except maybe for column (f) of figure 4, despite your claim in section 2, paragraph 2.\", \"quantitatively\": \"Curves in Figure 2 a) are more convincing, though a notion of scale is missing, as already discussed. The evolution of I(y; h) across iterations is very clear, in Figure 2 a) and especially 3 a). The evolution of I(x, h) much less so. h3 and h4 do not seem to show anything meaningful. In h2 the decrease in I(x, h) is supported by only 2 points in the curve (epoch 10 to epoch 100, and epoch 100 to epoch 200, figures 2a and 3c). Epochs displayed are also incoherent from one curve to the next (epoch 15 is missing for h2 in fig 3c) which raises suspicion. It appears important to i) display more points, to show that this is not just noise and ii) track more layers to confirm the trend, supported by a single layer so far (see next paragraph). I understand that each of these points require training of a generative model, but I feel it is necessary to make reliable conclusions.\", \"minor\": \"In figure 2, epochs should be added as labels to the colours.\\n\\n*** The evolution of compression *across layers* for a fixed iteration\\n\\t\\t\\nConversely, the evolution of the MI across layers is very convincingly demonstrated, and I feel this is perhaps the main strength of this paper. All curves display consistent trends across layers, and Figure 5 qualitatively displays much more invariance to pose, detail, etc than Figure 4. This is interesting, and could be made more central: i) by making a figure that compares samples across layers, for a fixed iteration, side by side. \\n\\nOn the downside, I believe it is important to track more layers, as it is to me the main interest of your results. The second paragraph of section 5 does not give a good idea of the spread of these layers to someone not familiar with the resnet architecture used. 
For example, the penultimate layer of the network could be used (the layer at which the most compression is to be expected).\\n\\n*** On the auto-encoder experiments.\\n\\n> Little detail is given about the way the auto-encoder is constructed. In particular, one expects the type of bottleneck used (necessary so that the network does not learn the identity function) to have a large impact on the amount of information discarded in the encoding process. This dependency is not discussed. More crucially, experiments with different types / strength of bottleneck are not given, and would, in my opinion, be key to an analysis of this dependency through the IB principle. \\n\\n> Furthermore, no qualitative analysis is provided in this setting.\\n\\n> Without these additions, I find the Auto-encoding setting an unconvincing distraction from the main contribution of this paper. \\t\\n\\n*** main avenues of improvement:\\n\\t\\t\\n> Two kinds of progression in compression are demonstrated in your paper: across layers, and across iterations. \\nAs it stands, results evidence the former more convincingly than the latter, both qualitatively and quantitatively.\\nI believe results could be presented in a way that clearly takes better advantage of this, as I will detail further.\\nMore data points (across layer and epochs) would be beneficial. I feel that the auto-encoder setting, as it stands, is a distraction.\\nI would find this paper more convincing if experiments focused more on showing how layers progressively discard information, and less on the 'training phases' that are so far less clear.\\n\\n*** Additional comments\\n\\nThe following are a number of points that would be worthwhile to discuss in the paper:\\n\\n> As it stands, it seems the line of reasoning and experimental setup rely on the chain-structured nature of the considered neural net architecture. 
Can the same line of reasoning be applied to networks with more general computational graphs, such as dense-nets [a], multi-scale dense nets [b], fractal nets [c], etc.?\\n\\n[a] Huang, G.; Liu, Z.; van der Maaten, L. & Weinberger, K. Densely connected convolutional networks CVPR, 2017\\n[b] Huang, G.; Chen, D.; Li, T.; Wu, F.; van der Maaten, L. & Weinberger, K. Multi-Scale Dense Networks for Resource Efficient Image Classification ICLR, 2018\\n[c] https://arxiv.org/abs/1605.07648\\n\\n> Why is it that earlier layers are estimated to have larger MI with the target y than later layers before convergence? Sure, later layers compress certain information about the input x, which could be informative on the response variable y. But since the MI estimate for early layers depends on the same network architecture as the one used to compute the later layers from the early ones, the result seems counterintuitive. See paragraph \\\"forward direction\\\" in section 4.1.\\n\\n> The orange curve in fig 3a estimating I(x;y) is not commented upon. How was it obtained, and what is its relevance to the discussion?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An empirical study with specious results\", \"review\": \"## Summary\\n\\nThis paper is an empirical study which attempts to test some of the claims regarding the information bottleneck principle applied to deep learning. To estimate the mutual information (I(x; h) and I(y; h)) in neural networks, the authors define a lower bound on the MI. Then a PixelCNN++ model (for I(x; h)) and a partially frozen classifier (for I(y; h)) are used to compute the MI during classifier and autoencoder training. For both tasks, the authors report that the mutual information between hidden layers and the input training data first increases for a while and then decreases. The images generated by the PixelCNN++ conditioned on hidden layers were shown to demonstrate the fitting and compression of data in a visual and intuitive fashion.\\n\\nIn general, the paper is well-written and organized. The idea behind the paper is not novel. Shwartz-Ziv & Tishby (2017) and Nash et al. (2018) also attempt to test the information bottleneck principle by estimating the mutual information. The results of this paper are specious and hard to explain. \\n\\n## Issues with the tightness of the lower bound\\nThe tightness of the lower bound depends on the KL divergence between the true conditional distribution p(x|h) and the approximating distribution q(x|h). Is the adopted PixelCNN++ good enough to approximate the true conditional distribution? This is not discussed.\\n\\n## Issues with the results of the autoencoder\\nThe decrease of the mutual information in autoencoder training is very specious. Since the decoder part of the autoencoder should generate better and better images during the training process, does it mean that the PixelCNN++ became worse? Does it imply that the optimization of the PixelCNN++ has some unknown problems?\\n\\n## Issues with the connection between this paper and Nash et al. 
(2018)\\nThese two papers have used the same lower bound and the same PixelCNN++ for estimating the mutual information. The observations are also similar. Both of these papers found that the mutual information between inputs and network layers decreases over training. The differences between these two papers are the adopted neural networks and the datasets, which are relatively minor.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting empirical work on compression in Resnets with partially inconclusive results\", \"review\": \"-> Summary\\n\\nThe authors propose to extend the analysis of Shwartz-Ziv & Tishby on the information bottleneck principle in artificial neural network training to realistic large-scale settings. They do so by replacing otherwise intractable quantities with tractable bounds in the form of classifiers for I(y;h) and PixelCNNs for I(x;h). In conclusion, they observe two phases during training: one that maximizes mutual information between input and hidden representation, and a second one that compresses the representation at the end of training, in line with the predictions from the toy tasks of Shwartz-Ziv & Tishby.\\n\\n-> Quality\\n\\nThe paper is very well written; all concepts are well-motivated and explained.\\n\\n-> Significance\\n\\nThe main novelty is to replace intractable quantities in the analysis of the information bottleneck with tractable bounds in the form of auxiliary models. The idea is neat and makes a lot of sense. On the other hand, some of the results and the bounds themselves are well-known and can thus not be considered novel. The main contribution is thus the empirical analysis itself, and given some overly confident claims on qualitative results and missing ablations on the quantitative side, I am not convinced that the overall results are very conclusive.\\n\\n-> Main Concerns\\n\\nThe authors make a lot of claims about the qualitative diversity of samples from the deeper layer h4 of the network as compared to h1 and h2. However, I do not agree with this. When I look at the samples, I see a lot of variation early in training and also in layers h1 and h2. The difference to h4 seems marginal at best and not as clear-cut as the authors present it. Thus, these claims should be softened.\\n\\nIn figure 1, I tend to say that samples at epoch 1 are more varied than at epoch 200. 
In figure 5, (b) seems pretty color invariant, and not only (f) as claimed. In fact, (f) seems pretty stable and consistent to me.\\n\\nThe bound in equation (2) might be quite loose, depending on the quality of the classifier or PixelCNN. Even though there is no way to test this, it should be discussed.\\n\\nWhat is the effect of weight decay here? I suspect that weight decay plays a crucial role in the final compression phase observed in e.g. figure 3 (c), but might not be a necessary condition to make the network generalize. An ablation experiment verifying or falsifying this statement would be important to conduct, and without it I am not convinced that the shown curves are conclusive.\\n\\n-> Minor\\n\\n- You seem to use a weird math font; is this on purpose? It does not seem to be the ICLR standard.\\n- The bound in equation (2) is a standard variational bound and has been used many times, yet the authors make it sound like it is their contribution. You should maybe cite basic and recent work on variational information bottleneck here.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BkfbpsAcF7 | Excessive Invariance Causes Adversarial Vulnerability | [
"Joern-Henrik Jacobsen",
"Jens Behrmann",
"Richard Zemel",
"Matthias Bethge"
] | Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from epsilon-adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an information-theoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities. | [
"Generalization",
"Adversarial Examples",
"Invariance",
"Information Theory",
"Invertible Networks"
] | https://openreview.net/pdf?id=BkfbpsAcF7 | https://openreview.net/forum?id=BkfbpsAcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"-7m5xfM7u2",
"Sklf4aO_jV",
"HJe5elx8jE",
"Byx6IYubX4",
"BylSu67bQV",
"BylmnLXZm4",
"Syel9-Y7gV",
"rkl8B7OVJ4",
"ByeDB22aRQ",
"Bkeye2iT0X",
"B1xjee27aX",
"HJeo0yhma7",
"ByeqLhs7TX",
"Byx23FoQaQ",
"BklM_OiQ6m",
"SyeLf8916Q",
"HklfNANAh7",
"r1e1SFU_2m"
],
"note_type": [
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1583545010886,
1556806953970,
1556639730034,
1547958613265,
1547939181038,
1547937450584,
1544946055820,
1543959357763,
1543519294605,
1543515111023,
1541812211073,
1541812179430,
1541811282469,
1541810612333,
1541810282434,
1541543437960,
1541455402205,
1541069111255
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"~Ryota_Tomioka1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper778/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/Authors"
],
[
"ICLR.cc/2019/Conference/Paper778/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper778/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper778/AnonReviewer1"
]
],
"structured_content_str": [
"{\"comment\": \"This is a novel method to generate adversarial samples.\\nHowever, I could also generate targeted-adversarial attacks such that:\\ndifferent inputs to a model, but the same output from the model\\n\\nI just tried targeted-adversarial attacks on vgg-11 trained on ImageNet\", \"x1\": \"an image of an ant (224x224), logits is z1=vgg11(x1)\", \"x2\": \"an image of a bee, logits is z2=vgg11(x2)\\nz1 and z2 are very different\\nThen, a targeted PGD-attack is applied to x2\\nx3 = x2 + noise, z3=vgg11(x3)\\nthe loss function of the attack is sum(||z3 - z1||^2)\\nafter many, many iterations of PGD, the loss decreases from ~3000 to 0.06399869173765182\\nx3 still looks the same as x2, L2_norm(x3-x2) is 14.2280\\nNow, z3 ~= z1 and the predicted class labels are the same, but x3 is a bee and x1 is an ant\\ndifferent inputs, the same output\", \"title\": \"what is the difference between the attack in the paper and targeted-adversarial attacks?\"}",
"{\"title\": \"Response\", \"comment\": \"Dear Ryota,\\n\\nThank you very much for raising this point.\\n\\nYou are right, we have made a mistake in the proof. We have fixed this in the latest revision. Further, we have refined the definition of the adversarial distribution shift to exclude synergetic effects in the interaction information I(y;z_s;z_n). This assumption is also in line with our shiftMNIST experiments where the interaction information between newly introduced features, original digits and labels is >= 0.\\n\\nWe have acknowledged you for pointing this out!\\n\\nBest,\\nJ\\u00f6rn\"}",
"{\"comment\": \"There might be an error in the proof of Theorem 6. Could the authors clarify what \\\"the properties of conditional mutual information under independence I(y; z_n) =0\\\" means?\\n\\nTo me, it looks like (I might be wrong) the authors are saying I(z_s; y |z_n) = I(z_s; y) if I(y; z_n) = 0 (independence). But I think this is wrong as in the following example:\\n\\nP(z_n = 0) = P(z_n = 1) = 1/2\\nP(z_s = 0) = P(z_s = 1) = 1/2\\n\\ny = z_s if z_n = 1 else 1 - z_s\\n\\nThen\\nI(z_s; y) = I(z_n; y) = 0\\nbut\\nI(z_s; y| z_n) = 1\", \"title\": \"Theorem 6\"}",
"{\"comment\": \"How much does iRevNet differ from fiRevNet?\\nFrom my understanding, the logits (N-class output) from the DCT-II output are optimized with the class labels, and the remaining output is optionally optimized with the proposed loss function.\\nSorry if I missed something.\", \"title\": \"isn't fiRevNet just iRevNet with average pooling swapped for DCT-II as (spectral) invertible pooling?\"}",
"{\"title\": \"Not related to our submission\", \"comment\": \"1) The codebase you are referring to is not related to this paper.\\n\\n2) The paper presented here makes no claims about Cifar10. Our focus is on Imagenet, which is a much more challenging problem. MNIST is used to illustrate our proposed solution clearly.\\n\\n3) The Imagenet model we trained works just as well without invertible DCT pooling. We simply found DCT pooling to make our fi-RevNet conceptually closer to standard ResNets that apply a final global average pooling step, but this is a matter of taste.\\n\\n4) iRevNets differ from fiRevNets, we describe this in the paper, so you should not be surprised if they perform differently as well.\\n\\nIt is unfortunate that DCT pooling does not work for you on Cifar10, but neither did we make any claims about it, nor did we experiment with it or provide any implementation of DCT-pooled fiRevNets on Cifar10. However, feel free to drop me an email and I'll try to do what I can to help you.\\n\\n- J\\u00f6rn\"}",
"{\"comment\": \"The big jump from MNIST to ImageNet is intriguing.\\nI tried training on CIFAR-10 using the authors' open-source i-RevNet code, modifying the original code to do pooling with DCT-II,\\n\\nand the result is that accuracy is very bad, because everything must fit into the 10-class logits.\\n\\nImageNet has 1000 classes, where the logits are much more informative.\\n\\nThe authors should clarify the limitations of DCT as invertible pooling.\", \"title\": \"Why no CIFAR-10 experiment?\"}",
"{\"metareview\": \"This paper studies the roots of the existence of adversarial perspective from a new perspective. This perspective is quite interesting and thought-provoking. However, some of the contributions rely on fairly restrictive assumptions and/or are not properly evaluated.\\n\\nStill, overall, this paper should be a valuable addition to the program.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting angle with some issues in terms of execution\"}",
"{\"title\": \"Thank you for the discussion!\", \"comment\": \"We were glad to see your positive feedback.\\n\\nIndeed we agree some open questions (summarized below in point (II)) remain. Yet, we hope that our efforts to prove the underlying principles of our objective sparks future analysis how/when our optimality assumptions (discussed below in point (I)) can be achieved and why the objective succeeds in our current setting.\\nThat being said, as pointed out above, the objective function itself is one out of 4 major contributions and therefore this analysis would be out of scope for the presented work.\\n\\nThank you once again for the constructive discussion!\\n\\n------------------------------\\n(I) Optimality assumptions:\\n- Lemma 8 (i) (Appendix A): CE- and MLE-term is Maximum Likelihood under a factorized prior p(z_s, z_n) = p(z_s) p(z_n). In the optimum, it thus holds I(z_s; z_n) = 0 as I(z_s; z_n) = KL(p(z_s, z_n) || p(z_s) p(z_n)).\\nFurthermore, in the optimum I(y; z_s) = H(y) = const.\\n- Lemma 8 (ii) and (iii) (Appendix A): If the lower bound is tight (nuisance classifier can decode all information about y in z_n), we minimize I(y; z_n) provably.\\n------------------------------\\n(II) Achieving optimality / Possible alternatives:\\n- Connecting independence of z_n and z_s with the model architecture: Due to information preservation, bijective networks are particularly suitable for our task, but other architectures could be considered.\\n- Tightness of lower bounds: How tight are lower bounds given by a nuisance classifier or alternative lower bounds by the MINE estimator (Belghazi et al. 2018)?\\n- Lack of alternatives: As I(y; z_n) is bounded (Remark 9, Appendix A.2), non-trivial (smaller than H(y)) upper bounds on I(y; z_n) are difficult and to the best of our knowledge, we are not aware of any.\"}",
"{\"title\": \"Thanks for the rebuttal and revision. I would like to raise my score.\", \"comment\": \"Dear Authors,\\n\\nThanks for your reply to my comments. The new revision has improved clarity and provided new supporting evidences.\\n\\nThat being said, (as you agreed) the link from the conceptual goal to the proposed objective has mostly empirical support. Therefore I hope it may encourage future investigation on when and why the proposed objective is successful in achieving the conceptual goal.\\n\\nBest,\"}",
"{\"title\": \"Further concerns after revision?\", \"comment\": \"Dear Reviewer2,\\n\\nwe would be most grateful if you can let us know if there are any further concerns you have after considering the thoroughly revised manuscript, added experiments and answers above.\"}",
"{\"title\": \"Part II\", \"comment\": \"---------------------------------------------------------\", \"q\": \"What do the images generated with z_s from one input and z_n from another input look like (in your method)?\\n--\\nThose images (the metameric samples) are already shown in the last row in the top block of figure 7, we have adapted the figure and added some more description to it, to make everything more clear.\\nIn the baseline the metameric samples are adversarial examples, meaning one can turn any image into any class without changing the logits at all. With our objective (shown on the right side), this is not possible anymore as keeping z_s fixed and exchanging z_n only affects the style of the image, not its class-specific content. The objective has achieved its goal and successfully defended against the metameric sampling attack.\\n\\n---------------------------------------------------------\", \"minor\": \"We have fixed the typos and added the log to the MLE objective, thank you.\\n\\n---------------------------------------------------------\\n\\nThank you once again for the detailed review, we were able to significantly improve the manuscript based on it.\\nWe have revised multiple parts, added new experiments and added discussions to answer your concerns.\\n\\nWe hope we were able to answer everything to your satisfaction, please let us know if there are any more open points.\\n\\nThank you once again!\"}",
"{\"title\": \"Added new experiments on non-bijective networks and proposed objective alongside thorough revision of manuscript\", \"comment\": \"--------------------------------------------------------\\n\\nWe are glad that you find most of our major contributions original, interesting, clear and mathematically sound.\\nWe also thank you for your thoughtful questions and comments, we address them below.\\n\\n--------------------------------------------------------\", \"q\": \"How are findings related to non-bijective networks?\\n--\\nThank you for bringing this up, we have revised the manuscript to answer this important question very clearly to show our identified problems, analysis and conclusion are not limited to bijective networks.\\nWe summarize below.\\n\\n----------\\n-- Our identified problem of excessive invariance occurs in many other networks as well.\\n----------\\n\\nWe have added results on the gradient-based equivalent of our analytic metameric sampling attack to the paper. We match the logit vector of one image with the logits of another image via gradient-based optimization and no norm-based restriction on the input. We do so on an ImageNet-trained state of the art ResNet-154 and see that the problem we have identified in bijective nets is the same here, if not worse as the metameric samples look even cleaner. Qualitative results are added to figure 5.\\n\\nBesides that, multiple papers have observed excessive invariance. On the adversarial spheres problem [1], for instance, the authors show their quadratic network does almost perfectly well while ignoring up to 60% of *semantically meaningful* input dimensions. 
Another line of work has also shown that similar behavior can appear in ReLU networks as well [2].\\n\\nWe have also added an additional set of experiments to the revised manuscript that shows how cross-entropy trained ResNets fail badly under distribution shifts that exploit their excessive invariance, giving another piece of evidence that our findings are not limited to bijective networks, but applicable to the most successful deep network architecture around as well.\\n\\n----------\\n-- There is a close relationship between bijective nets and SOTA architectures.\\n----------\\n\\nBijective networks are closely related to ResNets, they are in fact provably bijective under mild assumptions, as shown by a recent publication [3]. Further, it has been shown that ResNets and RevNet-type networks differ only in their dimension splitting scheme from one another [4]. And finally, bijective iRevNets have been shown to have many equivalent progressive properties to ResNets throughout the layers of their learned representation [5].\\n\\nIn summary, there is ample evidence, that bijective RevNet-type networks are not the reason for the problems we observe, but rather extremely similar to ResNets, the de-facto state-of-the-art architecture, while providing a powerful framework to study and combat problems like excessive invariance.\\n\\n[1] Gilmer, Justin, et al. \\\"Adversarial spheres.\\\" \\n[2] Behrmann, Jens, et al. \\\"Analysis of Invariance and Robustness via Invertibility of ReLU-Networks.\\\"\\n[3] Behrmann, Jens, David Duvenaud, and J\\u00f6rn-Henrik Jacobsen. \\\"Invertible Residual Networks.\\\"\\n[4] Grathwohl, Will, et al. \\\"FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models.\\\"\\n[5] Jacobsen, J\\u00f6rn-Henrik, Arnold Smeulders, and Edouard Oyallon. \\\"i-RevNet: Deep Invertible Networks.\\\"\"}",
"{\"title\": \"Added new experiments on distribution shift and large batch of metameric samples\", \"comment\": \"--------------------------------------------------------\\n\\nWe thank you very much for acknowledging our work as interesting and novel, as well as for the appreciation of our developed methodologies.\\n\\nWe answer your questions below.\\n\\n--------------------------------------------------------\", \"q\": \"What is the typical behavior of samples shown in Figure 4?\\n--\\nThe metameric samples shown are representative and we have observed similar quality throughout the whole validation set, sometimes with slight colored artifacts though. We have added a large batch of metameric samples to the appendix to give the reader a better idea about their typical behavior. \\n\\n--------------------------------------------------------\\n\\nWe believe your review have substantially improved the manuscript, thank you.\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"Dear Reviewers, we thank you very much for helping us to substantially improve the manuscript.\\n\\nWe have addressed all raised concerns either with additional experiments and results, with additional discussions in the manuscript or through other aspects of our revision.\\n\\nWe were delighted to see the positive reaction by all reviewers to our developed ideas and your suggestions and concerns greatly improved the paper. The new distribution shift experiments, as well as new results and discussion of non-bijective networks and their relationship to bijective ones, significantly increase the practical relevance of the work. \\n\\nGiven the tension between the positive comments to most of our contributions, the ratings and the fact that the main concerns are related to our proposed solution, we would like to point out that the developed training objective is only one out of four major contributions of the paper.\", \"we_list_our_updated_contributions_here_again_for_clarity\": \"1 - We introduce an alternative viewpoint on adversarial examples, one of the major failures in modern machine learning algorithms, give a formal definition of it and show its practical relevance for commonly used architectures in the updated experiments and discussion.\\n\\n2 - We build a competitive bijective ImageNet/MNIST classifier to tractably compute such adversarial examples exactly. 
Based on this, we provide what may be the first analytic adversarial attack method in the literature.\\n\\n3 - We prove that a major reason for invariance-based vulnerability is the commonly used cross-entropy objective and show from an information-theoretic viewpoint what may be done to overcome this.\", \"4___we_put_our_theoretical_results_into_practice\": \"based on bijective networks we introduce a practically useful loss and illustrate as a proof-of-concept that it largely overcomes the problem of excessive invariance, making it a promising way forward. Additionally, we have now included more quantitative experiments showing robustness to adversarial distribution shifts on a newly introduced benchmark.\", \"in_the_revision_we_have\": \"-- Thoroughly revised and updated the whole manuscript to make all of our contributions more clear and incorporate all raised concerns.\\n-- Updated figures and descriptions and moved large parts of section 2 to the appendix to improve clarity. \\n-- Added an adversarial distribution shift benchmark to stress test our proposed objective and show its effectiveness in challenging settings.\\n-- Added new results on non-bijective networks for the metameric samples and the distribution shift experiments to show non-bijective networks have the same issues as the bijective networks we use. \\n-- Added a discussion on the relationship between ResNets and RevNet-type networks, providing evidence that they are closely related. \\n-- Added additional references from the literature providing evidence of false excessive invariance in non-bijective architectures.\\n-- Added a random batch of metameric samples to the appendix, to showcase the consistency of our results.\\n\\nPlease let us know if you have any more questions or if there is anything else we can do to make you reconsider your rating.\\n\\nThank you once again for your effort.\"}",
"{\"title\": \"Thoroughly revised manuscript uploaded\", \"comment\": \"--------------------------------------------------------\\n\\nWe thank you very much for acknowledging our work being appealing and our contributions being publication-worthy.\\nWe also thank you for your thoughts and comments on the structure of the manuscript.\\n\\n--------------------------------------------------------\", \"q\": \"Lacking detail on bijective network.\\n\\nThe main components we are using are based on Real-NVP[1]/Glow[2] and iRevNet[3] networks, which are widely known and cited in the paper, so we decided not to put too much focus on their details.\\nHowever, in the revision we have added some additional details, for instance, we have added figure 3 that explains the architecture we are using.\\n\\n[1] Dinh, Laurent, Jascha Sohl-Dickstein, and Samy Bengio. \\\"Density estimation using Real NVP.\\\" \\n[2] Kingma, Diederik P., and Prafulla Dhariwal. \\\"Glow: Generative flow with invertible 1x1 convolutions.\\\"\\n[3] Jacobsen, J\\u00f6rn-Henrik, Arnold Smeulders, and Edouard Oyallon. \\\"i-RevNet: Deep Invertible Networks.\\\"\\n--------------------------------------------------------\\n\\nPlease let us know if you have any more comments or concerns!\\n\\nThank you once again.\"}",
"{\"title\": \"Very interesting ideas, could use a few additional experiments to be more convincing\", \"review\": \"This paper explores adversarial examples by investigating an invertible neural network. They begin by first correctly pointing out limitations with the commonly adopted \\\"l_p adversarial example\\\" definition in literature. The main idea involves looking at the preimage of different embeddings in the final layer of an invertible neural network. By training a classifier on top of the final embedding of the invertible network the authors are able to partition the final embedding into a set of \\\"semantic variables\\\", which are the components used for classification of the classifier, and a set of \\\"nuisance variables\\\" which are the complement of the logit variables. This partition allows the authors to define entire subspaces of adversarial images by holding the logit variables fixed and varying the nuisance variables, and applying the inverse to these modified embeddings. The authors are able to find many incorrectly classified images with this inversion technique. The authors then define a new loss which minimizes the mutual information between the nuisance variables and the predicted label.\\n\\nI found the ideas in this paper quite interesting and novel. Starting with the toy problem of adversarial spheres is great, and it's convincing that the inversion technique can be used to find errors on this dataset even when the classification accuracy is (empirically) 100%. The resulting adversarial images generated by applying their technique are also quite interesting, and this is a cool interesting way to study the robustness of networks in non-iid settings.\\n\\nThe main weakness is on the evaluation of their proposed new training objective, and I have a few suggestions as to how to strengthen this evaluation. 
It would be very convincing to me if the authors could show that their new training objective increases robustness to distributional shift. A potential benchmark for distributional shift could be https://arxiv.org/abs/1807.01697 (or just a subset of these image corruptions). If the proposed objective shows improvement on this benchmark (or a related one), then this would be a solid contribution.\\n\\nOne question I have for the authors is how typical the behavior in Figure 4 is. For any fixing of the logits, are all/most metameric samples classifiable by a human oracle? That is, do you ever get garbage images from this sampling process? Adding a collection of random samples to the Appendix could help demonstrate typical behavior.\", \"edit\": \"After the paper additions I am changing my score to a 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting problem with an unconvincing solution\", \"review\": \"This paper studies a new perspective on why adversarial examples exist in machine learning -- instead of seeing adversarial examples as the result of a classifier being sensitive to changes in irrelevant information (aka nuisance), the authors see them as the result of a classifier being invariant to changes in relevant (aka semantic) information. They show how to efficiently find such adversarial examples in bijective networks. Moreover, they propose to modify the training objective so that bijective networks can be more robust to such attacks.\", \"pros\": \"-- clarity is good (except for a few places, e.g. no definition of F(x)_i in Definition 1; Page 6 \\\"three ways forward\\\" item 3: I(y;z_n|z_s) = I(y;z_s) should be I(y;z_n|z_s) = I(y;z_n).)\\n -- the idea is original to the best of my knowledge\\n -- the mathematical motivation is sound\\n -- Figure 6 seems to show that the proposed defense works on MNIST (However, would you provide more details on how you interpolated z_n? Moreover, what do the images generated with z_s from one input and z_n from another input look like (in your method)?)\", \"cons\": \"-- scope: as all the presented problems and solutions assume a bijective mapping, I wonder how it is relevant to the traditional perspective of adversarial attack and defense. It seems to me that the contribution of this paper is identifying a problem of bijective networks and then proposing a solution, and thus its significance is restricted.\\n -- method: while the mathematical motivation is sound, I'm not sure the proposed training objective can achieve that goal. To elaborate, I see problems with both terms added in the proposed loss function:\\n (a.) 
for the objective of maximizing the cross entropy of the nuisance classifier, it is possible that I(y;z_n) is not reduced, but rather the information about y is encoded in a way that the nuisance classifier is not able to decode, similar to what happens in a one-way function (for example, see https://en.wikipedia.org/wiki/Cryptographic_hash_function ). In the MNIST experiments, the nuisance classifier is a three-layer MLP, which may be too weak and susceptible to information concealing.\\n (b.) for the objective of maximizing the likelihood of a factorized model of p(z_s, z_n), I don't see how optimizing it would reduce I(z_s; z_n). In general, even if z_s and z_n are strongly correlated, one can still fit such a factorized model. This only ensures that I(Z_s; Z_n) = 0 for Z_s, Z_n *sampled from the model*, but does not necessarily reduce I(z_s; z_n) for z_s, z_n *used to train the model*. The discrepancy between p(Z_s, Z_n) and p(z_s, z_n) could be huge, in which case one has the model misspecification problem, which is another topic.\\n (c.) a side question: why is the MLE objective using likelihood rather than log likelihood? Since the two cross entropy losses are similar to log likelihood, I feel there is a mismatch here.\\n\\n----------------------------------------\", \"after_rebuttal\": \"Thanks for your reply to my comments. The new revision has improved clarity and provided new supporting evidence. I would like to raise my rating to 6.\\n\\nThat being said, (as you agreed) the link from the conceptual goal to the proposed objective has mostly empirical support. Therefore I hope it may encourage future investigation on when and why the proposed objective is successful in achieving the conceptual goal.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The ideas are appealing and should eventually lead to fine contributions but the paper is unbalanced, with detail in the wrong places\", \"review\": \"The paper focuses on adversarial vulnerability of neural networks, and more specifically on perturbation-based versus invariance-based adversarial examples and how using bijective networks (with so-called metameric sampling) may help overcome issues related to invariance. The approach is used to get around insufficiencies of cross-entropy-based information-maximization, as illustrated on experiments where the proposed variation on CE outperforms CE.\\n\\nWhile I am not a neural network expert, I felt that the ideas developed in the paper are worthwhile and should eventually lead to useful contributions and be published. This being said, I did not find the paper in its present form to be fit for publication in a high-tier conference or journal. The main reason for this is the imbalance between the somewhat heavy and overly commented first four pages (especially in Section 2) contrasting with the surprisingly moderate level of detail when it comes to bijective networks, supposedly the heart of the actual original contribution. To me this is severely affecting the overall quality of the paper. The contents of sections 3 and 4 seem relevant, but I struggled to find out what precisely is the main contribution in the end, probably because of the lack of detail on bijective networks mentioned before. 
Again, I am not an expert, and I will indicate that in the system of course, but while I cannot completely judge all aspects of the technical relevance and the originality of the approach, I am fairly convinced that the paper deserves to be substantially revised before it can be accepted for publication.\", \"edit\": \"After paper additions I am changing my score to a 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
H1ebTsActm | Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality | [
"Taiji Suzuki"
] | Deep learning has shown high performance in various types of tasks from visual recognition to natural language processing,
which indicates the superior flexibility and adaptivity of deep learning.
To understand this phenomenon theoretically, we develop a new approximation and estimation error analysis of
deep learning with the ReLU activation for functions in a Besov space and its variant with mixed smoothness.
The Besov space is a very general function space that includes the Holder space and Sobolev space, and in particular can capture spatial inhomogeneity of smoothness. Through the analysis in the Besov space, it is shown that deep learning can achieve the minimax optimal rate and outperform any non-adaptive (linear) estimator such as kernel ridge regression,
which shows that deep learning has higher adaptivity to the spatial inhomogeneity of the target function than other estimators such as linear ones. In addition to this, it is shown that deep learning can avoid the curse of dimensionality if the target function is in a mixed smooth Besov space. We also show that the dependency of the convergence rate on the dimensionality is tight due to its minimax optimality. These results support high adaptivity of deep learning and its superior ability as a feature extractor.
| [
"deep learning theory",
"approximation analysis",
"generalization error analysis",
"Besov space",
"minimax optimality"
] | https://openreview.net/pdf?id=H1ebTsActm | https://openreview.net/forum?id=H1ebTsActm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1gNUsRQlV",
"SJl71edYT7",
"Bylje1dKaQ",
"SJxcoCDYpX",
"ryeP_CPYTm",
"rJg7UAwKp7",
"SygSECvtaX",
"rJgL04k-aQ",
"BJeVd9HTnQ",
"S1eqfHtq3m",
"r1gX-pN9hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544969035529,
1542189019210,
1542188786607,
1542188706298,
1542188654755,
1542188619156,
1542188588686,
1541629133573,
1541393004014,
1541211410022,
1541192954864
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper776/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper776/Authors"
],
[
"ICLR.cc/2019/Conference/Paper776/Authors"
],
[
"ICLR.cc/2019/Conference/Paper776/Authors"
],
[
"ICLR.cc/2019/Conference/Paper776/Authors"
],
[
"ICLR.cc/2019/Conference/Paper776/Authors"
],
[
"ICLR.cc/2019/Conference/Paper776/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper776/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper776/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper776/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper extends the results in Yarotsky (2017) from Sobolev spaces to Besov spaces, stating that once the target function lies in certain Besov spaces, there exist deep neural networks with ReLU activation that approximate the target at the minimax optimal rate. Such adaptive networks can be found by empirical risk minimization, which, however, is not yet known to be achievable by SGD etc. This gap is the key weakness of applying approximation theory to the study of constructive deep neural networks of certain approximation spaces, which lacks algorithmic guarantees. The gap is hoped to be filled in future studies.\\n\\nDespite the incompleteness of approximation theory, this paper is still a good solid work. Based on the fact that the majority of reviewers suggest accept (6,8,6), with some concerns about clarity, the paper is proposed as a probable accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Approximation of Besov spaces by Deep ReLU neural networks.\"}",
"{\"title\": \"Revision has been uploaded\", \"comment\": \"Thank you for your careful reading. We have uploaded a revised version.\", \"the_main_difference_from_the_original_one_is_as_follows\": \"1. Some additional text explanations are added for the definition of m-Besov space.\\n2. We added a few remarks for the approximation error bound in Proposition 1 and Theorem 1.\\n3. We have fixed some grammatical errors and typos.\\n\\nSincerely yours,\\nAuthors.\"}",
"{\"title\": \"Dimensionality d is assumed to be constant (Reply from authors)\", \"comment\": \"Thank you for your instructive question.\\nFirst, in our analysis, the dimensionality d is a fixed constant and is not allowed to increase to infinity as the sample size n goes up. Thus, the curse of sample size does not occur for fixed d. \\n\\nSecond, behind the order notation, there is a term depending on d. Actually, the log(n)^d term originally comes from the (log(n)/d)^d term (more precisely, it comes from D_{K,d} defined in Sec. 3.2, where K will be O(log(n))). Thus, this term slowly increases (O(n^\\\\epsilon) for a small constant \\\\epsilon) under the assumption that d <= C log(n) for a sufficiently small C. On the other hand, for the convergence rate n^{-2s/(2s + d)} on the Besov space, d is not allowed to be log(n)-order. Actually, as long as d is log(n)-order, n^{-2s/(2s + d)} does not converge to 0. This highlights the difference between the two settings, Besov and m-Besov.\\n\\nFinally, we also would like to remark that if d is O(log(n)), then the overall convergence rate will be changed. It will depend on the coefficient hidden in the order notation of d = O(log(n)). Showing the precise bound under this condition is out of the paper's scope. Thus, we would like to leave that for future work.\"}",
"{\"title\": \"Reply from authors\", \"comment\": \"We would appreciate your insightful comments.\\n\\n(1)\", \"q\": \"A minor note: some of the references are strange\", \"a\": \"Thank you for your informative suggestion. The citation [Gine & Nickl, 2015] for the minimax optimal rate on a Besov space is a comprehensive text book that was not intended as the original reference but as a nice overview of the literature. The reference [Adams & Fournier, 2003] for the interpolation space is also cited as a text book giving an overview of the literature and treating several related topics in detail. But, as you pointed out, it is more appropriate to cite original papers. We have cited [Kerkyacharian & Picard, 1992; Donoho et al., 1996] for the minimax estimation rate in a Besov space, and cited [DeVore, 1998] for the interpolation space characterization of a Besov space. \\n\\n\\nG. Kerkyacharian and D. Picard. Density estimation in Besov spaces. Statistics & Probability Letters, 13:15--24, 1992.\\n\\nD. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard. Density estimation by wavelet thresholding. The Annals of Statistics, 24(2):508--539, 1996.\\n\\nD. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard. Minimax estimation via wavelet shrinkage. The Annals of Statistics, 26(3):879--921, 1998.\\n\\nR. DeVore. Nonlinear approximation. Acta Numerica, 7:51--150, 1998.\"}",
"{\"title\": \"Reply from authors\", \"comment\": \"We would appreciate your detailed feedback on our manuscript.\\n\\n(1)\", \"q\": \"The generalization bounds in Section 4 are given for an ideal estimator which is probably impossible to compute.\", \"a\": \"We believe that it is informative to investigate how well deep learning can potentially perform even in the ideal case (of course, without any cheating) because we cannot say anything about the limitation of deep learning approaches without this kind of investigation. Actually, we think this type of analysis is becoming popular in the statistics community. Moreover, recent intensive studies on the convergence properties of SGD for deep learning imply that it is not entirely vacuous to assume we can achieve the global optimal solution with a good generalization guarantee. In addition, we could also involve the optimization error in our estimation error bound, but we have omitted that for better readability.\"}",
"{\"title\": \"Reply from author (2/2)\", \"comment\": \"(3)\", \"q\": [\"Given the technical nature of the paper, the authors have done a good job with the presentation. However, in some places the discussion is very equation driven. For e.g. in the 2nd half of page 4, it might help to explain many of the quantities presented in plain words.\"], \"a\": \"We have added some text explanations on page 4. Due to space limitations, we could not give full expositions. But we also added some explanations of the meaning of the approximation error rate and its relation to the depth, width and sparsity after Proposition 1 and Theorem 1.\"}",
"{\"title\": \"Reply from authors (1/2)\", \"comment\": \"Thank you for your helpful comments. We have revised our paper according to your comments, though unfortunately some of them could not be addressed due to the lack of space.\\n\\n(1)\", \"q\": \"While the mixed Besov spaces enable better bounds, the condition appears quite strong. In fact, the lower bound is better than for traditional Holder/Sobolev classes. Can you please comment on how the m-Besov space compares to Holder/Sobolev classes? Also, can you similarly define mixed Holder/Sobolev spaces where traditional linear smoothers might achieve minimax optimal results?\", \"a\": \"Yes, the condition for the mixed Besov space is much stronger than for the ordinary Besov space. Yes, we can define mixed smooth Holder/Sobolev spaces. They are defined just by setting p=q=infty or p=q=2. Hence, the mixed smooth Besov space is a much wider class than the mixed smooth Holder/Sobolev spaces. Roughly speaking, the mixed smooth Besov space consists of functions of the form g(f_1(x_1),...,f_d(x_d)) where each f_i(x_i) is a function in a Besov space on [0,1] and g:R^d \\\\to R is a sufficiently smooth function. Then, we can see that the m-Besov space includes an additive model \\\\sum_{i=1}^d f_i(x_i) and a tensor model \\\\sum_r \\\\prod_{i=1}^d f_{r,i}(x_i) as special cases. \\nWe can also define an intermediate function class between the ordinary Besov space and the m-Besov space by taking a tensor product of B_{p,q}^s([0,1]^{d_1}), ..., B_{p,q}^s([0,1]^{d_K}) where d_1 + d_2 + ... + d_K = d (if each d_i = 1, then it is reduced to the m-Besov space). We can also show a convergence rate which is between those of the m-Besov space and the Besov space, but we don't pursue this direction due to space limitations.\"}",
"{\"comment\": \"I am looking into the estimation error bound in Table 2 on Page 3.\\n\\nWe assume that \\\\beta = 3, u = 0.1, and the sample size is large. Let's say n~exp(d).\\n\\nThen we can reduce the bound to O(exp(-6d/7) * d^{0.88 d}).\\n\\nThe bound will blow up for large d.\\n\\nCould you please clarify your results?\", \"title\": \"Your bound has curse of sample size!\"}",
"{\"title\": \"Nice and Relevant Results\", \"review\": \"Summary:\\n========\\nThe paper presents rates of convergence for estimating nonparametric functions in Besov\\nspaces using deep NNs with ReLu activations. The authors show that deep Relu networks,\\nunlike linear smoothers, can achieve minimax optimality. Moreover, they show that in a\\nrestricted class of functions called mixed Besov spaces, there is significantly milder\\ndependence on dimensionality. Even more interestingly, the Relu network is able to\\nadapt to the smoothness of the problem.\\n\\nWhile I am not too well versed on the background material, my educated guess is that the\\nresults are interesting and relevant, and that the analysis is technically sound.\", \"detailed_comments\": \"==================\\n\\n\\nMy main criticism is that the total rate of convergence (estimation error + approximation\\nerror) has not been presented in a transparent way. The estimation error takes the form\\nof many similar results in nonparametric statistics, but the approximation error is\\ngiven in terms of the parameters of the network, which depends opaquely on the dimension\\nand other smoothness parameters. It is not clear which of these terms dominates, and\\nconsequently, how the parameters W, L etc. should be chosen so as to balance them.\\n\\n\\nWhile the mixed Besov spaces enable better bounds, the condition appears quite strong.\\nIn fact, the lower bound is better than for traditional Holder/Sobolev classes. Can you\\nplease comment on how the m-Besov space compares to Holder/Sobolev classes? Also, can\\nyou similarly define mixed Holder/Sobolev spaces where traditional linear smoothers\\nmight achieve minimax optimal results?\", \"minor\": \"- Defn of Holder class: you can make this hold for integral beta if you define m to be\\nthe smallest integer less than beta (e.g. beta=7, m=6). 
Imo, this is standard in most\\ntexts I have seen.\\n- The authors' claim that the approximation error does not depend on the dimensionality\\n needs clarification, since N clearly depends on the dimension. If I understand\\n correctly, the approximation error is in fact becoming smaller with d for m-Besov\\n spaces (since N is increasing with d), and what the authors meant was that the\\n exponential dependence on d has now been eliminated. Is this correct?\\n\\nOther\\n- On page 4, what does the curly arrow notation mean?\\n- Given the technical nature of the paper, the authors have done a good job with the\\n presentation. However, in some places the discussion is very equation driven. For e.g.\\n in the 2nd half of page 4, it might help to explain many of the quantities presented in\\n plain words.\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\", \"rating\": \"8: Top 50% of accepted papers, clear accept\"}",
"{\"title\": \"Paper that establishes minimax optimal rates for deep network models over Besov spaces\", \"review\": [\"This paper makes two contributions:\", \"First, the authors show that function approximation over Besov spaces for the family of deep ReLU networks of a given architecture provides better approximation rates than linear models with the same number of parameters.\", \"Second, for this family and this function class they show minimax optimal sample complexity rates for the generalization error incurred by optimizing the empirical squared error loss.\"], \"clarity\": \"Very dense; could benefit from considerably more exposition.\", \"originality\": \"afaik original. Techniques seem to be inspired by a recent paper by Montanelli and Du (2017).\", \"significance\": \"unclear.\", \"pros_and_cons\": \"This is a theory paper that focuses solely on approximation properties of deep networks. Since there is no discussion of any learning procedure involved, I would suggest that the use of the phrase \\\"deep learning\\\" throughout the paper be revised.\\n\\nThe paper is dense and somewhat inaccessible. Presentation could be improved by adding more exposition and comparisons with existing results.\\n\\nThe generalization bounds in Section 4 are given for an ideal estimator which is probably impossible to compute.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Are piecewise linear estimators really minimax optimal for piecewise polynomial signals?\", \"review\": \"This paper describes approximation and estimation error bounds for functions in Besov spaces using estimators corresponding to deep ReLU networks. The general idea of connecting network parameters such as depth, width, and sparsity to classical function spaces is interesting and could lead to novel insights into how and why these networks work and under what settings. The authors carefully define Besov spaces and related literature, and overall the paper is clearly written.\\n\\nDespite these strengths, I'm left with several questions about the results. The most critical is this: piecewise polynomials are members of the Besov spaces of interest, and ReLU networks produce piecewise linear functions. How can piecewise linear approximations of piecewise polynomial functions lead to minimax optimal rates? The authors' analysis is based on cardinal B-spline approximations, which generally makes sense, but it seems like you would need more terms in a superposition of B-splines of order 2 (piecewise linear) than higher orders to approximate a piecewise polynomial to within a given accuracy. The larger number of terms should lead to worse estimation errors, which is contrary to the main result of the paper. I don't see how to reconcile these ideas. \\n\\nA second question is about the context of some broad claims, such as that the rates achieved by neural networks cannot be attained by any linear or nonadaptive method. Regarding linear methods, I agree with the author, but I feel like this aspect is given undue emphasis. The key paper cited for rates for linear methods is the Donoho and Johnstone Wavelet Shrinkage paper, in which they clearly show that nonlinear, nonadaptive wavelet shrinkage estimators do indeed achieve minimax rates (within a log factor) for Besov spaces. 
Given this, how should I interpret claims like \"any linear/non-linear approximator\nwith fixed N-bases does not achieve the approximation error ... in some parameter settings such as 0 < p < 2 < r \"?\nWavelets provide a fixed N-basis and achieve optimal rates for Besov spaces. Is the constraint on p and r a setting in which wavelet optimality breaks down? If not, then I don't think the claim is correct. If so, then it would be helpful to understand how relevant this regime for p and r is to practical settings (as opposed to being an edge case). \n\nThe work on mixed Besov spaces (e.g. tensor product space of 1-d Besov spaces) is a fine result but not surprising.\", \"a_minor_note\": \"some of the references are strange, like citing a 2015 paper for minimax rates for Besov spaces that have been known for far longer or a 2003 paper that describes interpolation spaces that were beautifully described in DeVore '98. It would be appropriate to cite these earlier sources.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
SJMeTo09YQ | Guided Exploration in Deep Reinforcement Learning | [
"Sahisnu Mazumder",
"Bing Liu",
"Shuai Wang",
"Yingxuan Zhu",
"Xiaotian Yin",
"Lifeng Liu",
"Jian Li",
"Yongbing Huang"
] | This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of \textit{state-action permissibility} (SAP). Two types of permissibility are defined under SAP. The first type says that after an action $a_t$ is performed in a state $s_t$ and the agent reaches the new state $s_{t+1}$, the agent can decide whether the action $a_t$ is \textit{permissible} or \textit{not permissible} in state $s_t$. The second type says that even without performing the action $a_t$ in state $s_t$, the agent can already decide whether $a_t$ is permissible or not in $s_t$. An action is not permissible in a state if the action can never lead to an optimal solution and thus should not be tried. We incorporate the proposed SAP property into two state-of-the-art deep RL algorithms to guide their state-action exploration. Results show that the SAP guidance can markedly speed up training. | [
"deep reinforcement learning",
"guided exploration",
"RL training speed up"
] | https://openreview.net/pdf?id=SJMeTo09YQ | https://openreview.net/forum?id=SJMeTo09YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkx5YGLZx4",
"rJlX0bI5CX",
"SyxUxFjuAQ",
"HJgayKo_Am",
"Skg0LCPkRQ",
"BJgm5hW6nm",
"HJefUYx9nQ",
"HyeHZTbdn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544802945957,
1543295435162,
1543186670448,
1543186660977,
1542581846346,
1541377162598,
1541175626304,
1541049597030
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper773/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper773/Authors"
],
[
"ICLR.cc/2019/Conference/Paper773/Authors"
],
[
"ICLR.cc/2019/Conference/Paper773/Authors"
],
[
"ICLR.cc/2019/Conference/Paper773/Authors"
],
[
"ICLR.cc/2019/Conference/Paper773/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper773/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper773/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a simple and interesting idea to improve exploration efficiency, using the notion of action permissibility. Experiments in two problems (lane keeping, and flappy bird) show that exploration can be improved over baselines like DQN and DDPG. However, action permissibility appears to be very strong domain knowledge that limits its use in complex problems.\\n\\nRephrasing one of the reviewers, action permissibility essentially implies that some one-step information can be used to rule out suboptimal actions, while a defining challenge in RL is that the agent needs to learn/plan/reason over multiple steps to decide whether an action is suboptimal or not. Indeed, the two problems in the experiments have such a property that a myopic agent can solve the tasks pretty well. The paper would be stronger if the AP function could be defined for more common RL benchmarks, with similar benefits demonstrated.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea, but limited applicability\"}",
"{\"title\": \"Paper Revised and Uploaded\", \"comment\": \"We have revised our paper following your comments and addressed your concerns in the revised version. Please consider the revised version as a reference to our responses.\\n\\nWe thank you for reviewing our work and providing valuable feedback.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank you for your valuable comments. Please find our response below.\", \"c1\": \"The amount of newly introduced hyperparameters is quite big and I am not sure whether the improved performance justifies the increased number of hyperparameters.\", \"r1\": \"Compared to the traditional exploration methods (e.g., epsilon-greedy exploration) in RL, we have only introduced three extra parameters: alpha_e, alpha_tr and delta_acc. The parameters t_o and t_e are adopted from existing RL methods; they denote the number of steps in the observation phase (during which epsilon is set to 1.0) and the exploration phase (during which the epsilon value is annealed from 1.0 to a low value, here 0.01), respectively. The newly introduced parameters alpha_e, alpha_tr and delta_acc have distinct objectives: alpha_e and alpha_tr are the values of alpha in the exploration and post-exploration (training) phases, respectively, and control the degree to which the agent listens to the AP1 predictor; delta_acc helps in measuring the validation performance (reliability) of the AP1 predictor. We discuss hyper-parameter tuning strategies further in the Appendix of the revised version.\", \"c2\": \"How many trials have been used to generate the results? Fig3 says \\\"Avg. reward over past 100 training steps\\\". Does that mean only one trial and you average over the last 100 rewards? In order to be significant, at least 5 to 10 trials have to be used as deep RL is known to show highly varying results depending on the random seed. Please also report error bars.\", \"r2\": \"We have used 5 trials for each algorithm in the experiment and also reported the error curves in the revised version. 
Please see Section 5 and the Additional Experimental Results section in the Appendix.\", \"c3\": \"Why are there no learning curves for Flappy Bird?\", \"r3\": \"We have added the learning curves for Flappy Bird in the revised version (please find them in the Appendix).\", \"c4\": \"The method for creating the action set if the selected action is permissible seems very adhoc for me, at least in the continuous action case. Would it not make more sense to include the gradient of the classifier into the actor update of DDPG such that the policy would also learn to avoid non-permissible actions? The presented method is in my opinion very hard to scale to higher dimensional action spaces (>2), which is quite a limitation of the approach.\", \"r4\": \"Thanks for pointing this out. In fact, we briefly mentioned this in the Appendix of our submitted version. For Flappy Bird, we learn a shared network for the AP1 predictor and the DDQN. Thus, the gradient update due to cross entropy loss optimization for training the AP predictor also affects the learning of the DDQN and helps in accelerated and stable training. We also apply similar ideas in DDQN-AP2 training, although it does not require an AP1 predictor. For the steering control problem, as all DDPG-AP variants learn very quickly (see Figure 7 in the Appendix) and have far fewer parameters to learn compared to Flappy Bird, we did not feel the need to apply this idea to train the models, although the idea is applicable to both cases.\\n \\nWe believe the proposed idea of SAP can be extended to multiple action dimensions as well. For example, considering autonomous driving, we can define three AP functions independently, one for steering control (as we did in our work) and the other two for speed control, namely brake and acceleration. Their interaction will be quite interesting. We feel it will improve RL learning even further because the reduction in each dimension will result in much more reduction in the cross product. 
We leave the formulation of SAP for this multidimensional action space case as our future work (as mentioned in footnote 2).\", \"c5\": \"The description of Section 4, in particular of the construction of the candidate actions could be made more clear.\", \"r5\": \"We have updated section 4.3 with a footnote on the sampling of the candidate actions. Note that we sample an action uniformly at random from the estimated permissible action space for the agent to explore. We found it to work better than probabilistic sampling over the action space by choosing the best action (greedily) based on the AP1 prediction score. As the AP1 predictor does not learn the value function, it is more logical to estimate the permissibility space and let RL find the best policy from that space.\", \"c6\": \"Results are only shown for a rather low dimensional action set (driving) and a discrete action example. 1-2 more illustrations where AP1 could be useful would be highly appreciated.\", \"r6\": \"In our work, we only deal with a one dimensional continuous/discrete action space and evaluated our model based on that. Our main goal in this paper is to introduce the idea of SAP and empirically show that SAP is useful for RL speed up. We leave the formulation of SAP for the multidimensional case as our future work (as mentioned in footnote 2).\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank you for your valuable comments. Please find our responses below.\", \"c1\": \"The approach is also motivated by recent trends in meta-learning (of the binary predictor) and it would be good if the authors relate it to that (also citing some literature on meta learning).\", \"r1\": \"We have updated Sec. 2 with recent works on meta-learning for RL.\", \"c2\": \"what would be a simple baseline for constraining the action-state space? One possibility could be to use the learned model to simulate the trajectories and based on that hard code the constraints? Any other ideas, task-specific?\", \"r2\": \"In fact, AP2 functions are actually constraints because they prevent some non-permissible actions from being taken without learning/prediction. However, our experiments showed that AP1, which needs learning, helps significantly.\", \"c3\": \"what is the relation to the model-based RL? In model-based RL we try to learn the transition probabilities from action to states. Could we impose any sparsity constraints on such a model to achieve a similar performance? While the proposed model is more elegant in that it allows the learning of the predictors on the fly, I feel there is a lack of comparisons with approaches that could easily be implemented using heuristics. Please comment.\", \"r3\": \"Thanks for pointing out this connection. However, our work is not about learning the transition probabilities. It is about learning to constrain the exploration space, a binary relation, permissible or not permissible. Our work is still model-free. We discuss this in Sec. 2 of the revised version. Accurately learning the model of the environment (especially for continuous state space problems) is often difficult in practice. Thus, the model-free approach is widely used. SAP provides a way to encode human knowledge in a model-free setting and to leverage that knowledge for fast policy learning. 
The motivation is \\u2013 humans may not provide the optimal policy for a given state, but they can specify a rule (AP function) that can guide the agent and help avoid repeated unnecessary trials that waste time. The proposed idea of SAP and the AP function provides only the knowledge of action permissibility in a model-free setting (as opposed to learning the complete model of the environment in a model-based approach).\", \"c4\": \"could you be more precise about how often the prediction model is updated? What are potential adverse effects if this model keeps overfitting?\", \"r4\": \"We have added the validation curves of the AP prediction models in the Appendix. During RL training, we do not need to train the AP predictor for all steps in the whole training period. After the AP predictor is trained for an initial number of steps, we noted that the validation curve saturates to a fairly high accuracy. Thus, we postpone the training of the AP predictor whenever the validation accuracy is above a threshold (delta_acc) and resume its training whenever the validation accuracy falls below delta_acc until it goes above delta_acc again.\", \"c5\": \"There are also limitations in terms of the number of hyperparameters that need to be fine-tuned. I would like the authors to include one paragraph discussing the limitations of their approach in more detail.\", \"r5\": \"We have updated the draft with a discussion section to point out the limitations of our method (see Section 6) and also discuss the hyper-parameter tuning in the Appendix.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank you for your valuable comments. Please find our responses below.\", \"c1\": \"Although the results of the experiments show that SAP helps to speed up RL, I think that the application of SAP is very narrow\\u2026.\", \"r1\": \"We agree that designing a good AP function for complex environments with too many parameters can be challenging, although not impossible. Also, note that our work does not require the user to provide the \\u201coptimal AP function\\u201d for a given problem. Several AP functions can be designed for a given problem (as we have discussed in the paper). We aim to provide a framework for existing Deep RL where, if a fairly good AP function can be designed for an environment (it is often not that difficult to come up with fairly good AP functions for many environments), the proposed technique can result in a drastic speed up. In other words, we show that the idea of SAP is useful. Designing a good AP function for a complex scenario requires more analysis and knowledge of the application, but it certainly does not make the idea inapplicable. By no means do we say that for every task there is at least one AP function. As we stated, our goal here is to help speed up a class of problems.\\n\\nWe do believe that most robot functions involving movement and navigation have the SAP property, which is not a small application domain. We can often design a fairly good AP function for robot navigation using just common sense. Thus, by learning an AP predictor, we can quickly cut off unnecessary action exploration and speed up the learning. We believe that is what we humans do, as we smartly choose permissible actions rather than blindly try everything.\\n\\nMost of the mentioned benchmarks in Gym are for multidimensional action spaces. In this work, we only deal with a one-dimensional discrete/continuous action space. 
We leave the multidimensional case as our future work (also mentioned in footnote 1). Hence, these environments are not suitable for our experiments.\", \"c2\": \"Even for the lane following task described in the paper, the AP1 function in eq. 5 is limited and eliminates many good solutions\\u2026\", \"r2\": \"As an RL problem can have multiple reward functions, it can have multiple AP functions as well, depending on the goal of the task at hand. It\\u2019s true that our proposed AP function (eqn. 5) will not work in more complex driving scenarios, but we do not aim to address those in this paper at the moment. Solving the complete autonomous driving problem is out of the scope of this work. Rather, we use a specific task (lane keeping) as our test bed to prove our hypothesis - the idea of SAP is useful to speed up RL. Thus, we don't focus on designing a complex AP function (to cover all cases).\\n\\nRegarding the suggested strategy of \\u201cdriving to the outer side of the lane before the turn, cut to the inner side at the turn,\\u201d we have a different opinion. We believe that is a speed control problem, which we do not study in this paper. In real life, this normally happens because we did not slow down enough at the turn, which forces us to go to the outer lane. At least, that is the case for me. If the speed is also controlled by the RL, this scenario should be avoided because it is quite dangerous unless there is no car in the outer lane. Thus, in the RL learning phase, this kind of behavior should be penalized in speed control policy learning. This scenario could happen when the angle is so sharp that it is impossible to turn without cutting into the other lane (e.g., at some U-turn locations), but in that case, an autonomous car system normally will generate a new virtual lane for the car to follow (we have worked on real-life self-driving cars in the field). Another option is just to turn the steering wheel with the maximum angle possible. 
In both cases, the proposed RL framework still works.\", \"c3\": \"The idea of AP1 is somewhat contradictory to the philosophy of reinforcement learning\\u2026\", \"r3\": \"The purpose of the AP function is only to cut off the exploration space for a given state, enabling RL not to explore non-permissible actions in similar states again and again. In particular, it estimates a permissible action space in a given state and prioritizes exploration of those actions in that state compared to the non-permissible ones. There may be multiple permissible actions in a given state to choose from. But SAP does not tell you which one is optimal at that point. Rather, SAP only tells you which ones you should definitely avoid exploring, as there is a better option (action) available in that state to explore. And it\\u2019s the RL\\u2019s job to find the optimal policy (optimal action) from the permissible action space in the long run. Thus, as we are not chopping off any optimal solution in AP-based guidance [note that the RL agent still explores non-permissible actions with (1 - alpha) probability], we believe the idea of SAP is not contradictory to RL. Similar to human driving, we do not try all possible options, as we can predict which actions are definitely not good.\"}",
"{\"title\": \"A constrained learning of permissible action-state space for speeding up RL\", \"review\": \"The authors introduce an approach for constraining the action-state space of RL algorithms, with the aim of speeding up their learning. To this end, two types of constraints are introduced, coupled and embedded into the traditional policy learning for RL. The main idea of using a binary predictor for predicting permissible actions leading to desired states is interesting and novel. It is an intuitive approach for constraining the space and the authors showed in their experiments that it leads to a significant speed-up in the learning of two common RL methods (DDQN and DDPG). The approach is also motivated by recent trends in meta-learning (of the binary predictor) and it would be good if the authors relate it to that (also citing some literature on meta learning).\\n\\nWhile I am in favor of accepting this paper, I think there are several aspects that need to be commented on/addressed:\\n\\n- what would be a simple baseline for constraining the action-state space? One possibility could be to use the learned model to simulate the trajectories and based on that hard code the constraints? Any other ideas, task-specific?\\n\\n- what is the relation to the model-based RL? In model-based RL we try to learn the transition probabilities from action to states. Could we impose any sparsity constraints on such a model to achieve a similar performance? While the proposed model is more elegant in that it allows the learning of the predictors on the fly, I feel there is a lack of comparisons with approaches that could easily be implemented using heuristics. Please comment. \\n\\n- could you be more precise about how often the prediction model is updated? What are potential adverse effects if this model keeps overfitting?\\n\\nThere are also limitations in terms of the number of hyperparameters that need to be fine-tuned. 
I would like the authors to include one paragraph discussing the limitations of their approach in more detail.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A simple but nice idea. However, there are issues with the algorithm in the continuous action case and the evaluation could be more exhaustive.\", \"review\": [\"The paper introduces permissible actions to reinforcement learning problems. An action is non-permissible if it is known to not lead to the optimal solution. The agent can, after executing an action a_t in state s_t and ending up in s_t+1, estimate whether the action a_t is non-permissible. This data is used to train a new classifier that predicts the permissibility of an action in a state. The exploration of the RL algorithm can now be guided by the permissibility estimate, i.e., non-permissible actions are not executed.\", \"The paper is well written and presents a simple, but promising idea to simplify reinforcement learning methods. I have so far not seen the definition of non-permissible actions in the literature, so I believe this is novel; it also makes intuitive sense, as permissible actions can be identified in many scenarios. However, the paper has a few issues that I want the authors to address:\", \"The number of newly introduced hyperparameters is quite big and I am not sure whether the improved performance justifies the increased number of hyperparameters.\", \"How many trials have been used to generate the results? Fig3 says \\\"Avg. reward over past 100 training steps\\\". Does that mean only one trial and you average over the last 100 rewards? In order to be significant, at least 5 to 10 trials have to be used, as deep RL is known to show highly varying results depending on the random seed. Please also report error bars.\", \"Why are there no learning curves for Flappy Bird?\", \"The method for creating the action set if the selected action is permissible seems very ad hoc to me, at least in the continuous action case. 
Would it not make more sense to include the gradient of the classifier into the actor update of DDPG such that the policy would also learn to avoid non-permissible actions? The presented method is in my opinion very hard to scale to higher dimensional action spaces (>2), which is quite a limitation of the approach.\", \"The description of Section 4, in particular of the construction of the candidate actions could be made more clear.\", \"Results are only shown for a rather low dimensional action set (driving) and a discrete action example. 1-2 more illustrations where AP1 could be useful would be highly appreciated.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The application of SAP seems very narrow.\", \"review\": \"This paper proposed the concept of state-action permissibility (SAP). Given a user-defined type 1 SAP function, the algorithm learns a classifier to predict whether an action at a given state is permissible or not. Based on this prediction, the reinforcement learning (RL) algorithms can limit the exploration only to the permissible actions, and thus greatly reduce the cost of learning. The proposed algorithms are tested on two simple tasks, both of which have the same flavor of following a predefined track.\\n\\nAlthough the results of the experiments show that SAP helps to speed up RL, I think that the application of SAP is very narrow. It is extremely difficult to define an AP1 function in general. For example, for most of the OpenAI gym environments (such as half-cheetah, ants or humanoid), it is not clear to me how to manually define an AP1 function. It would be more convincing if the paper could apply the proposed techniques to some of the benchmark OpenAI gym environments.\\n\\nEven for the lane following task described in the paper, the AP1 function in eq. 5 is limited and eliminates many good solutions. It constrains that the action should not lead to more deviations from the center line in the next time step. This greedy constraint will not work in more interesting driving scenarios. For example in a sharp turn, if the curvature of the lane is too large for the car to follow, a common strategy (that can be learned by vanilla RL algorithms) is to first drive to the outer side of the lane before the turn, cut to the inner side at the turn and exit the turn to the outer side. This optimal solution to negotiate a tight turn is completely eliminated by the user-defined AP1 function (eq. 5).\\n\\nThe idea of AP1 is somewhat contradictory to the philosophy of reinforcement learning. AP1 is a greedy decision based on the next step while RL optimizes for the accumulated reward over many steps. 
RL allows taking an action that will sacrifice the immediate reward (e.g. deviate from the center line of a lane) in the next step but can accumulate more reward in the long run (successfully drive along a tight turn). In most cases, by looking at the next state, it is just not possible to predict whether a specific action cannot lead to the optimal long-term reward (SAP).\\n\\nFor the above reasons, I think that the application of SAP would be very narrow, especially for reinforcement learning. I would not recommend accepting this paper at this time.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HyllasActm | End-to-End Learning of Video Compression Using Spatio-Temporal Autoencoders | [
"Jorge Pessoa",
"Helena Aidos",
"Pedro Tomás",
"Mário A. T. Figueiredo"
] | Deep learning (DL) is having a revolutionary impact in image processing, with DL-based approaches now holding the state of the art in many tasks, including image compression. However, video compression has so far resisted the DL revolution, with the very few proposed approaches being based on complex and impractical architectures with multiple networks. This paper proposes what we believe is the first approach to end-to-end learning of a single network for video compression. We tackle the problem in a novel way, avoiding explicit motion estimation/prediction, by formalizing it as the rate-distortion optimization of a single spatio-temporal autoencoder; i.e., we jointly learn a latent-space projection transform and a synthesis transform for low bitrate video compression. The quantizer uses a rounding scheme, which is relaxed during training, and an entropy estimation technique to enforce an information bottleneck, inspired by recent advances in image compression. We compare the obtained video compression networks with standard widely-used codecs, showing better performance than the MPEG-4 standard, being competitive with H.264/AVC for low bitrates. | [
"learning",
"video compression",
"autoencoders",
"approaches",
"image compression",
"standard",
"revolutionary impact",
"image processing",
"state"
] | https://openreview.net/pdf?id=HyllasActm | https://openreview.net/forum?id=HyllasActm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xB2NM-lN",
"SJxA6Tj_Am",
"Bkx4_6oOAQ",
"ryePBTouC7",
"SyxxsJkbpX",
"B1lF8Q81T7",
"rye7Oj2uhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544787117027,
1543187909819,
1543187819802,
1543187774700,
1541627800081,
1541526352613,
1541094251000
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper771/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper771/Authors"
],
[
"ICLR.cc/2019/Conference/Paper771/Authors"
],
[
"ICLR.cc/2019/Conference/Paper771/Authors"
],
[
"ICLR.cc/2019/Conference/Paper771/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper771/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper771/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a neural network architecture for video compression. The reviewers point out a lack of novelty with respect to recent neural compression works on static images, which the present paper extends by adding a temporal consistency loss. More importantly, reviewers point out severe problems with the metrics used to measure compression quality, which the authors promise to take into account in a future manuscript.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"problems with the employed metrics - lack of novelty over static image neural compression\"}",
"{\"title\": \"Response to reviewer comments\", \"comment\": \"Thank you for your feedback and suggestions. We have double-checked our results and confirmed that the values on the graphs are correct according to the method used. However, we agree that it is necessary to properly clarify how the evaluation was done. Additionally, we also agree that it would make more sense to make the comparison in the YCbCr space, as is common in the video coding community. We will take your input into consideration in a revised manuscript.\"}",
"{\"title\": \"Response to reviewer comments\", \"comment\": \"Thank you for pointing out such important issues. We will take your input into consideration in a revised manuscript.\"}",
"{\"title\": \"Response to reviewer comments\", \"comment\": \"We agree that a more in-depth analysis could be performed, particularly regarding the effects of the consistency loss, H.265, and Wu et al. ECCV 2018 (although the latter was difficult to do in time, since it was only very recently published - September 2018). We will take these comments into consideration in a revised manuscript.\"}",
"{\"title\": \"Official review\", \"review\": \"This paper proposes an extension of a deep image compression model to video compression. The performance is compared with H.264 and MPEG-4.\", \"my_main_concern_is_the_limited_technical_novelty_and_evaluation\": [\"The main idea of the architecture is extending 2D convolutions in image compression networks to 3D convolutions, and using skip connections for multi-scale modeling. The 2D to 3D extension is relatively straightforward, and multi-scale modeling is similar to techniques used in, e.g., [Rippel and Bourdev ICML 2017].\", \"The reconstruction loss and the entropy loss are commonly used in existing work. One new component is the \\u201ctemporal consistency loss\\u201d. However, the impact of the loss is not analyzed in the Experiment section.\", \"The evaluation isn\\u2019t very extensive. Comparing the proposed method with state-of-the-art codecs (e.g., H.265) or other deep video compression codecs (e.g., Wu et al. in ECCV 2018) would be valuable.\", \"Since the evaluation dataset is small, evaluation on multiple datasets would make the experiments more convincing.\", \"The evaluation is conducted in the rather low-bitrate region only (MS-SSIM < 0.9), which is not a common point of operation.\", \"Finally, I agree with AnonReviewer2 on the limited description of evaluation details.\", \"Overall I think this paper is not ready for publication yet.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official review\", \"review\": [\"This paper presents a spatiotemporal convolutional autoencoder trained for video compression. The basic model follows the logic of traditional autoencoders, with an intermediate quantizer:\", \"input -> convolutional neural network with a skip connection as an encoder -> quantizer -> transposed convolutional neural network with a skip connection as a decoder.\", \"As the quantizer is a non-differentiable operation, the paper proposes to follow (Toderici et al 2016, Balle et al, 2018) and cast quantization as adding uniform noise to the latent variables. The pre-quantized variables are modelled as Gaussians with variance that is predicted by a second \\\"hyperprior\\\" network dedicated to this task. The final model is trained to minimize three losses. The first loss minimizes the difference between the true frame pixel values and the predicted pixel values. The second loss minimizes the entropy of the latent codes. The third loss minimizes the difference between neighboring pixels in subsequent frames, ignoring those pixels that are not linked between frames. The model is trained on 10,000 videos from the YouTube-8M dataset and tested on 10 videos from the MCL-V database, with rather ok results.\", \"Generally, parts of the proposed approach sound logical: an autoencoder-like architecture makes sense for this problem. Also, the idea of using uniform noise to emulate quantization is interesting. However, the paper also has weaknesses.\", \"The added novelty is limited and unclear. Conceptually, the paper is overclaiming. Quoting verbatim from the conclusion: \\\"Our work is, as far as we are aware, the first end-to-end learned video compression architecture using DL.\\\", while already citing a few works that also rely on deep networks (Wu et al., 2018, Chen et al., 2016). In the related work section it is noted that these works are computationally heavy. However, this doesn't mean they are not end-to-end. 
The claims appear to be contradictory.\", \"The technical novelty is also limited. What is new is the combination of existing components for the task of video compression. However, each component in isolation is not novel, or it is not explained as such.\", \"Parts of the model are unclear. How is the mask M computed in equation (7)? Is M literally the optical flow between frames? If yes, what is the percentage of pixels that is zeroed out? Furthermore, can one claim the model is fully end to end, since a non-differentiable optical flow algorithm is used?\", \"The purpose of the hyperprior network is unclear. Why not use a VAE that also returns the variance per data point?\", \"Most importantly, it is not clear whether the model is trained as a generative one, e.g., using a variational framework to compute the approximate posterior. If the model is not generative, how can the model be used for generation? Isn't it then that the decoder simply works for reconstruction of already seen frames? Is there any guarantee that the model generalizes well to unknown inputs? The fact that the model is evaluated only on 10 video sequences does not help to convincingly demonstrate generalization.\", \"The evaluation is rather weak. The method is tested on a single, extremely small dataset of just 10 videos. In this small dataset the proposed method seems to perform worse in the majority of compression ratios (bits per pixel). The method does seem to perform a bit better in the very low bits-per-pixel regime. However, given the small size of the dataset, it is not clear whether these results suffice.\", \"Only two baselines are considered, both hand-crafted codecs: H.264/AVC and MPEG-4. However, in the related work section there are works that could also be applied to the task, e.g., the aforementioned ones. 
Why aren't these included in the comparison?\", \"Although it is explained that the focus is on the very low bitrates, it is not clear what part of the model is designed with that focus in mind. Is this a statement made just to focus on the part of the curve in the experiment where the proposed method is better than the reported baselines? Is there some intrinsic model hypothesis that makes the model suitable for low bit rates?\", \"In general, the paper needs to clarify the model and especially explain whether or not it is a generative one and why. Also, a more extensive array of experiments needs to be executed to give a better outline of the method's capabilities and limitations.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"marginally novel, evaluation is very flawed, and incomplete\", \"review\": \"First off, the paper presents a relatively straightforward extension to video from the work done in image compression. The work uses 3D volumes instead of 2D images, and exploits this structure by adding a secondary network to both the encoder/decoder.\\n\\nThe work is therefore *marginally* novel, but it is one of the first to propose neural methods for compressing video.\\n\\nMy biggest complaint about this paper, however, is about evaluation. I don't think it's possible to take this paper seriously as is, due to the fact that the metrics used in the evaluation are absolutely skipped.\\n\\nGiven that this is such a crucial detail, I don't think we can accept this paper as is. The metrics need to be described in detail, and they should follow some previously used protocols (see below). \\n\\nFor example, in libvpx and libaom (which is the current best performing method for video compression - AV1), there are two versions of PSNR: Global and Average PSNR, and this is what gets reported in publications/standards meetings.\", \"global_psnr\": \"Compute MSE for the entire sequence combining Y, Cb, Cr components, and then compute PSNR based on the combined MSE.\", \"average_psnr\": \"Compute MSE for each frame combining Y, Cb, Cr components; then compute PSNR for the frame based on the combined MSE and cap it to a max of 100. Then average the PSNR over all the frames.\\n\\nMPEG uses something like computing Average PSNR for each component (similar to what I mentioned above, but for each component) and then combining the Y-, Cb- and Cr- PSNRs using a weighted average. For 420 that will be equivalent to [4*MSE(y) + MSE(Cb) + MSE(Cr)]/6. For 422 that will be equivalent to [2*MSE(y) + MSE(Cb) + MSE(Cr)]/4. For 444 that will be equivalent to [MSE(y) + MSE(Cb) + MSE(Cr)]/3. 
Additionally, when using YCbCr, the authors also need to specify which version of the color standard is employed, since there are multiple ITU recommendations, all of which differ in how to compute the color space transforms.\\n\\nPlease note that video codecs DO NOT OPTIMIZE FOR RGB reconstruction (humans are much more sensitive to brightness details than they are to subtle color changes), so comparing against them in that color space puts them at a distinct disadvantage. In the video compression literature NOBODY reports RGB reconstruction metrics.\\n\\nPlease note that I computed the PSNR (RGB) for H.264, on the resized MCL-V dataset (640x360) as the authors proposed and I observed that the metric has been ***MISREPRESENTED*** by up to 5dB. This is absolutely not OK because it makes the presented results untrustworthy.\\n\\nHere is the bpp/RGB PSNR that I obtained for H.264 (for completeness, this was computed as follows: used version 3.4.2 of ffmpeg, and the command line is \\\"ffmpeg -i /tmp/test.y4m -c:v h264 -crf 51 -preset veryslow\\\", tried many settings for crf to be able to get roughly the same bpp per video, then compute RGB PSNR for each frame per video, aggregate over each video, then average across videos):\\n\\nBPP, Average PSNR RGB (again, not a metric I would like to see used, but for comparison's sake, I computed nonetheless -- also, note that these numbers should not be too far off from computing the average across all frames, since the video length is more or less the same):\\n0.00719, 23.46\\n0.01321, 26.38\\n0.02033, 28.92\\n0.03285, 31.14\\n0.05455, 33.43\\n\\nSimilar comments go for MS-SSIM. \\n\\nLastly, it is unfair to compare against H263/4/5 unless the authors specify what profiles were used and what kind of bitrate targeting methods were used.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HkgxasA5Ym | Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors | [
"Danijar Hafner",
"Dustin Tran",
"Timothy Lillicrap",
"Alex Irpan",
"James Davidson"
] | Obtaining reliable uncertainty estimates of neural network predictions is a long standing challenge. Bayesian neural networks have been proposed as a solution, but it remains open how to specify their prior. In particular, the common practice of a standard normal prior in weight space imposes only weak regularities, causing the function posterior to possibly generalize in unforeseen ways on inputs outside of the training distribution. We propose noise contrastive priors (NCPs) to obtain reliable uncertainty estimates. The key idea is to train the model to output high uncertainty for data points outside of the training distribution. NCPs do so using an input prior, which adds noise to the inputs of the current mini batch, and an output prior, which is a wide distribution given these inputs. NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training. Empirically, we show that NCPs prevent overfitting outside of the training distribution and result in uncertainty estimates that are useful for active learning. We demonstrate the scalability of our method on the flight delays data set, where we significantly improve upon previously published results. | [
"uncertainty estimates",
"out of distribution",
"bayesian neural network",
"neural network priors",
"regression",
"active learning"
] | https://openreview.net/pdf?id=HkgxasA5Ym | https://openreview.net/forum?id=HkgxasA5Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJluFssAJ4",
"rkeTqRVc0Q",
"Hkxh96Ec07",
"rJl-HdVcAX",
"r1xV4DNqAm",
"rkgJ1Jle6Q",
"r1e1xzyk6X",
"r1xkPlfY2m",
"BkxGzqKocX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544629119563,
1543290517361,
1543290260014,
1543288889352,
1543288620348,
1541566166817,
1541497318650,
1541115991174,
1539181065533
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper770/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper770/Authors"
],
[
"ICLR.cc/2019/Conference/Paper770/Authors"
],
[
"ICLR.cc/2019/Conference/Paper770/Authors"
],
[
"ICLR.cc/2019/Conference/Paper770/Authors"
],
[
"ICLR.cc/2019/Conference/Paper770/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper770/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper770/AnonReviewer3"
],
[
"~Andrey_Malinin1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper studies the problem of uncertainty estimation of neural networks and proposes to use a Bayesian approach with a noise contrastive prior.\", \"the_reviewers_and_ac_note_the_potential_weaknesses_of_experimental_results\": \"(1) lack of sufficient datasets with moderate-to-high dimensional inputs, (2) arguable choices of hyperparameters, and (3) lack of direct evaluations, e.g., measuring network calibration is better than active learning.\\n\\nThe paper is well written and potentially interesting. However, AC decided that the paper might not be ready for publication in its current form due to these weaknesses.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited experiments\"}",
"{\"title\": \"Interesting paper!\", \"comment\": \"Thank you for pointing out your related recent work.\"}",
"{\"title\": \"Clarifications, differences to standard priors, new sensitivity analysis\", \"comment\": \"Thank you very much for your review and the constructive suggestions.\\n\\n[1. The authors propose to use so-called noise contrastive prior, but the actual implementation boils down to adding Gaussian noise to input points and respective outputs.]\\n\\nWe would like to clarify. NCP adds a term to the objective to match the epistemic variance that the model predicts for perturbed inputs to a prior value. The normal training objective using unmodified training inputs and labels is still present. While this can loosely be described as predicting noisy outputs for noisy inputs, there are technical differences. Mainly, NCP only targets the epistemic and not the aleatoric variance, thus encouraging uncertain and not necessarily noisy predictions. This encourages the model to separate epistemic and aleatoric variances, as needed to compute the information gain. Moreover, the KL term that NCP adds to the objective is computed in closed form, so there is no noise added to the outputs.\\n\\n[This seems to be the simplest possible prior in data space (well known for example in Bayesian linear regression).]\\n\\nWhile NCP is a natural idea, there certainly exist simpler data priors. For example, one could define a prior on the full predictive distribution, rather than targeting only the mean and epistemic variance. Moreover, one could use a data-independent prior (\\\\mu_y=0) that might improve uncertainty but degrade generalization. We explicitly avoid more complex priors that are difficult to optimize, such as OOD GANs (Lee et al., 2017). If we missed related papers from the Bayesian linear regression literature, we would be glad to be pointed to those to discuss them. 
It is a desirable property that NCP is easy to implement and optimize and yet clearly improves BNNs.\\n\\n[That would be nice if authors can comment on the differences of proposed NCP with standard homoscedastic priors in regression.]\", \"our_reply_above_mentions_two_differences_to_standard_homoscedastic_priors_in_regression\": \"NCP targets only the epistemic variance and it can be centered around training labels. In addition to defining the NCP prior, our paper contributes a practical algorithm for applying data priors to models that are optimized via variational inference. While parameter inference for linear regression has a closed form, this is not the case for BNNs. A key observation of our paper is that applying a data prior on training inputs is not enough in this case -- it must be applied beyond the training distribution if it should be enforced for unseen inputs. Our paper shows that this clearly improves the usefulness of uncertainty estimates for active learning.\\n\\n[2. The paper title mentions 'RELIABLE UNCERTAINTY ESTIMATES', but in fact the paper doesn't discuss the reliability of obtained uncertainty estimates directly.]\\n\\nWhile a direct evaluation of the predicted aleatoric and epistemic variances would be ideal, doing so quantitatively is difficult for models without closed-form posterior and multi-dimensional datasets. We decided for active learning experiments to go beyond the mainly qualitative analysis often conducted for new OOD methods. Active learning using the expected information gain is specifically appropriate in our case, as the noise and uncertainty predictions take opposing roles in this acquisition function. We point to Figure 1 for a qualitative evaluation on the 1D toy dataset that can be visualized. If the comment is mainly directed at the paper title, we are open to changing it to emphasize active learning.\\n\\n[3. 
The paper performs experiments basically on two datasets, which is not enough to obtain any reliable conclusions about the performance of the method. I recommend to consider much wider experimental evaluation, which is especially important for active learning, which requires very accurate experimental evaluation]\\n\\nWe agree a wider experimental evaluation would further strengthen our conclusions. While more tasks are always desirable, we would like to point out that prior methods to improve uncertainty estimates have often been demonstrated on two datasets. For example, Gal et al. (ICML 2017) train on the MNIST and ISIC datasets and Lee et al. (ICLR 2018) train on the SVHN and CIFAR-10 datasets.\\n\\n[4. It is not clear how to choose hyperparameters (noise variances) in practice. The paper performs some sensitivity analysis with respect to variance selection, but the study is again on one dataset.]\\n\\nWe conducted a new robustness analysis on the flights dataset for different noise distributions, shown in Figure 5 (see updated paper). We observe again that BBB+NCP is robust to the size of the input noise, supporting the conclusions from the toy dataset. NCP consistently improves RMSE of the BNN and yields its best NLPD for all noise sizes below 0.6.\\n\\nWe hope these comments clarified our paper submission and its connections to typical priors in regression.\"}",
"{\"title\": \"Accurate summary, hyper parameter choice, new sensitivity analysis\", \"comment\": \"Thank you very much for your review.\\n\\n[The paper is well written and makes for a nice read. I like the idea of using \\u201cpseudo\\u201d OOD data for encouraging better behaved uncertainties away from the data. It is nice to see that even simple schemes for generating OOD data (adding iid noise) lead to improved uncertainty estimates.]\\n\\nThank you. We were positively surprised by these results, too.\\n\\n[a) I like the sensitivity analysis presented in Figure 4, and it does show for the 1D sine wave the method is reasonably robust to the choice of \\\\sigma_x. However, it is unclear how problem dependent the choice of sigma_x is. [...] How was \\\\sigma_x chosen for the different experiments?]\\n\\nFor our experiments, we manually tuned \\\\sigma_x to work for both our BBB+NCP model and the ODC+NCP baseline model. We later performed the sensitivity analysis that showed that NCP is robust to this parameter.\\n\\nMoreover, we conducted a new sensitivity analysis on the flights dataset for different noise distributions, shown in Figure 5 (see updated paper). The experimental setup is the same as for our active learning experiment. We observe that BBB+NCP is robust to the size of the input noise, which supports the conclusions we drew from the toy dataset. NCP consistently improves RMSE and NLPD of the Bayesian neural network and yields its best NLPD for all noise sizes below 0.6. For the ODC baseline, we observe a trade-off: narrower input noise increases the regularization strength, leading to better NLPD but reduced RMSE.\\n\\n[b) It is also interesting that noise with a shared scale is used for all 8 dimensions of the flight dataset. Is this choice mainly governed by convenience \\u2014 easier to select one hyper-parameter rather than eight?]\\n\\nCorrect, we use the same noise variance for all input dimensions. 
This seems sufficient because we normalize all the input dimensions to have zero mean and unit variance, as is common practice. We did not find it necessary to try other parameters for the different input channels when experimenting with NCP.\\n\\n[c) Presumably, the predictive uncertainties are also strongly affected by both the weighting parameter \\\\gamma and the prior variance sigma^2_y . How sensitive are the uncertainties to these and how were these values chosen for the experiments presented in the paper?]\\n\\nFollowing a similar reasoning, we set sigma^2_y=1 since we normalize labels before training. This can be seen as an empirical prior. The scaling factor gamma for the data-space KL is generally problem dependent, analogously to the scaling factor beta for the weight-space KL. We found NCP to be a quite strong regularizer, alleviating the need for a weight-space prior (beta=0). The appendix includes an alternative derivation of NCP that sheds light on why this might be the case. We selected gamma=0.1 using a grid search over values 0, 0.01, 0.1, 1 on the low-dimensional regression task. The same parameter generalized well to the flights dataset.\\n\\n[d) It would be really interesting to see how well the approach extends to data with more interesting correlations. For example, for image data would using standard data-augmentation techniques (affine transformations) for generating OOD data help over adding iid noise.]\\n\\nThis is a very interesting idea that we are considering exploring in the future. Our main reason to focus on low-dimensional inputs in this paper is that many image tasks are classification tasks, while we are more interested in regression problems. For regression, the flights dataset is a common benchmark with published baselines.\"}",
"{\"title\": \"Accurate summary, removed word deep from title\", \"comment\": \"Thank you very much for your review.\\n\\n[Interestingly, the method works by perturbing all data inputs instead of only the ones at the boundary of the training distribution.]\\n\\nCorrect, this is likely because the standard training objective outweighs the NCP loss inside of the training distribution. It is analogous to implicit priors for neural networks such as weight decay, which are also outweighed by the prediction loss inside of the training distribution.\\n\\n[Also, there is no need to sample outside of the input distribution in order to have accurate uncertainty estimates in that area.]\\n\\nAdding noise to the inputs often results in inputs that are outside of the training distribution. As you pointed out, our experiments indicate that it is enough to apply the prior on inputs near the training distribution.\\n\\n[The experimental section starts with a toy 1d active learning task that shows the advantage of good uncertainty estimates when selecting new data points. The authors also present a larger regression task (8 input dimensions and 700k data points in the training set) in which they obtain good performance compared to other models able to quantify epistemic uncertainty. In my opinion, the experiments do a good job at showing the capabilities of the algorithm.]\\n\\nThank you.\\n\\n[If anything, since the authors use the word \\\"deep\\\" in the title of the paper I would have expected some experiments on deep networks and a very large dataset.]\\n\\nThank you for this suggestion. We agree that removing the word \\\"deep\\\" makes the paper title more descriptive.\"}",
"{\"title\": \"An interesting approach to quantify uncertainty in neural networks\", \"review\": \"This paper presents an approach to obtain uncertainty estimates for neural network predictions that has good performance when quantifying predictive uncertainty at points that are outside of the training distribution. The authors show how this is particularly useful in an active learning setting where new data points can be selected based on metrics that rely on accurate uncertainty estimates.\\n\\nInterestingly, the method works by perturbing all data inputs instead of only the ones at the boundary of the training distribution. Also, there is no need to sample outside of the input distribution in order to have accurate uncertainty estimates in that area.\\n\\nThe paper is clear and very well written with a good balance between the use of formulas and insights in the text. \\n\\nThe experimental section starts with a toy 1d active learning task that shows the advantage of good uncertainty estimates when selecting new data points. The authors also present a larger regression task (8 input dimensions and 700k data points in the training set) in which they obtain good performance compared to other models able to quantify epistemic uncertainty. In my opinion, the experiments do a good job at showing the capabilities of the algorithm. If anything, since the authors use the word \\\"deep\\\" in the title of the paper I would have expected some experiments on deep networks and a very large dataset.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"nicely written, but experiments are very limited\", \"review\": \"The paper considers the problem of uncertainty estimation of neural networks and proposes to use a Bayesian approach with a noise contrastive prior.\\n\\nThe paper is nicely written, but there are several issues which require discussion:\\n1. The authors propose to use so-called noise contrastive prior, but the actual implementation boils down to adding Gaussian noise to input points and respective outputs. This seems to be the simplest possible prior in data space (well known for example in Bayesian linear regression). That would be nice if authors can comment on the differences of proposed NCP with standard homoscedastic priors in regression.\\n2. The paper title mentions 'RELIABLE UNCERTAINTY ESTIMATES', but in fact the paper doesn't discuss the reliability of obtained uncertainty estimates directly. Experiments only consider active learning, which allows one to assess the quality of UE only indirectly. To verify the title one needs to directly compare uncertainty estimates with errors of prediction on a preferably vast selection of datasets.\\n3. The paper performs experiments basically on two datasets, which is not enough to obtain any reliable conclusions about the performance of the method. I recommend to consider much wider experimental evaluation, which is especially important for active learning, which requires very accurate experimental evaluation\\n4. It is not clear how to choose hyperparameters (noise variances) in practice. The paper performs some sensitivity analysis with respect to variance selection, but the study is again on one dataset.\\n\\nFinally, I think that the paper targets an important direction of uncertainty estimation for neural networks, but currently it is not mature in terms of results obtained.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper; being more careful about experiments would strengthen it further.\", \"review\": \"The paper considers the problem of obtaining reliable predictive uncertainty estimates. The authors propose noise contrastive priors \\u2014 the idea being to explicitly encourage high uncertainties for out of distribution (OOD) data through a loss in the data space. OOD data is simulated by adding noise to existing data and the model is trained to maximize the likelihood w.r.t. training data while being close in the KL sense to a (wide) conditional prior p(y | x) on the OOD responses (y). The authors demonstrate that the procedure leads to improved uncertainty estimates on toy data and can better drive active learning on a large flight delay dataset.\\n\\nThe paper is well written and makes for a nice read. I like the idea of using \\u201cpseudo\\u201d OOD data for encouraging better behaved uncertainties away from the data. It is nice to see that even simple schemes for generating OOD data (adding iid noise) lead to improved uncertainty estimates. \\n\\nMy main concern about this work stems from not knowing how sensitive the recovered uncertainties are to the OOD data generating mechanism and the parameters thereof. The paper provides little evidence to conclude one way or the other. The detailed comments below further elaborate on this concern.\", \"detailed_comments\": \"a) I like the sensitivity analysis presented in Figure 4, and it does show for the 1D sine wave the method is reasonably robust to the choice of \\\\sigma_x. However, it is unclear how problem dependent the choice of sigma_x is. From the experiments, it seems that \\\\sigma_x needs to be carefully chosen for different problems, \\\\sigma^2_x < 0.3 seems to not work very well for BBB + NCP for the 1D sine data, but for the flight delay data \\\\sigma^2_x is set to 0.1 and seems to work well. 
How was \\\\sigma_x chosen for the different experiments?\\n\\nb) It is also interesting that noise with a shared scale is used for all 8 dimensions of the flight dataset. Is this choice mainly governed by convenience \\u2014 easier to select one hyper-parameter rather than eight? \\n\\nc) Presumably, the predictive uncertainties are also strongly affected by both the weighting parameter \\\\gamma and the prior variance sigma^2_y . How sensitive are the uncertainties to these and how were these values chosen for the experiments presented in the paper? \\n\\nd) It would be really interesting to see how well the approach extends to data with more interesting correlations. For example, for image data would using standard data-augmentation techniques (affine transformations) for generating OOD data help over adding iid noise. In general, it would be good to have at least some empirical validation of the proposed approach on moderate-to-high dimensional data (such as images).\\n\\n==============\\nOverall this is an interesting paper that could be significantly strengthened by addressing the comments above and a more careful discussion of how the procedure for generating OOD data affects the corresponding uncertainties.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Hello! :) Interesting work. You may find our work on predictive uncertainty estimation to be relevant.\", \"https\": \"//arxiv.org/pdf/1802.10501.pdf\", \"title\": \"Related work\"}"
]
} |
|
BJll6o09tm | Padam: Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks | [
"Jinghui Chen",
"Quanquan Gu"
] | Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, despite the nice property of fast convergence, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves how to close the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam, Amsgrad, are sometimes "over adapted". We design a new algorithm, called Partially adaptive momentum estimation method (Padam), which unifies the Adam/Amsgrad with SGD by introducing a partial adaptive parameter p, to achieve the best from both worlds. Experiments on standard benchmarks show that Padam can maintain fast convergence rate as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results would suggest practitioners pick up adaptive gradient methods once again for faster training of deep neural networks. | [
"adaptive gradient methods",
"padam",
"generalization gap",
"sgd",
"deep neural networks",
"closing",
"historical gradient information",
"learning rate"
] | https://openreview.net/pdf?id=BJll6o09tm | https://openreview.net/forum?id=BJll6o09tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylDyK9yxV",
"SJg-A0PUAm",
"SJxoiG0SAX",
"r1g-Hi6SAX",
"HJl_biarAm",
"BkeO0q6r0X",
"B1l_Ztqc3X",
"HkeE6748nm",
"Syl10Luwi7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544689886836,
1543040712771,
1543000739046,
1542998840958,
1542998783662,
1542998736234,
1541216511661,
1540928444303,
1539962566637
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper769/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper769/Authors"
],
[
"ICLR.cc/2019/Conference/Paper769/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper769/Authors"
],
[
"ICLR.cc/2019/Conference/Paper769/Authors"
],
[
"ICLR.cc/2019/Conference/Paper769/Authors"
],
[
"ICLR.cc/2019/Conference/Paper769/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper769/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper769/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Meta-review\", \"metareview\": \"This paper proposes a simple modification of the Adam optimizer, introducing a hyper-parameter 'p' (with value in the range [0,1/2]) parameterizing the parameter update:\\ntheta_new = theta_old + m/v^p\\nwhere p=1/2 falls back to the standard Adam/Amsgrad optimizer, and p=0 falls back to a variant of SGD with momentum.\", \"the_authors_motivate_the_method_by_pointing_out_that\": \"- Through the value of 'p', one can interpolate between SGD with momentum and Adam/Amsgrad. By choosing a value of 'p' smaller than 0.5, one can therefore perform optimization that is 'partially adaptive'.\\n - The method shows good empirical performance.\\n \\nThe paper contains an inaccuracy, which we hope will be solved before the final version. The authors argue that the 1/sqrt(v) term in Adam results in a lower learning rate, and the authors argue that the effective learning rate \\\"easily explodes\\\" (section 3) because of this term, and that a \\\"more aggressive\\\" learning rate is more appropriate. This last point is false; the value of 1/sqrt(v) can be smaller or larger than 1 depending on the value of 'v', and a decrease in the value of 'p' can result in either an increase or decrease in effective learning rate, depending on the value of v. The value of 'v' is a function of the scale of the loss function, which can really be arbitrary. (In case of very high-dimensional predictions, for example, the scale of the loss function is often proportional to the dimensionality of the variable to be modeled, which can be arbitrarily large, e.g. in image or video modeling the loss function tends to be of a much larger scale than with classification.)\\n\\nThe authors promise to include a comparison to AdamW [Loshchilov, 2017] that includes tuning of the weight decay parameter. The lack of this experiment makes it more difficult to make a conclusion regarding the performance relative to AdamW. 
However, the methods offer potentially orthogonal (and combinable) advantages.\\n\\n[Loshchilov, 2017] https://arxiv.org/pdf/1711.05101.pdf\", \"recommendation\": \"Reject\", \"confidence\": \"4: The area chair is confident but not absolutely certain\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you very much for your suggestion and increasing your score. We will try using your suggested learning rate schedule. As for the weight decay factor for AdamW, we used the parameter suggested in the original paper/github repository. You are right, it could be better since the test environment is not exactly the same. And we will also tune the weight decay factor for AdamW according to your suggestion. As you know, running deep learning experiments at this scale usually takes quite a long time. That\\u2019s also why we took so long to respond. So we may not be able to include these new experimental results before the end of the response period, but we will definitely add these additional experimental results in the camera-ready if needed.\"}",
"{\"title\": \"Comments\", \"comment\": \"1 and 3: OK\", \"2\": \"It's reassuring that Adam/Amsgrad are able to reduce training loss to zero. But I hope the authors can use a more standard learning rate schedule (e.g., decay the learning rate at epoch 100 and 150.). Empirically, it's better to decay the learning rate after the training error/loss reaches a plateau (50 epochs is far from enough, as shown in figure 2).\\n\\nRegarding the performance of AdamW, I guess the authors didn't tune the weight decay parameter. I think AdamW with a carefully chosen weight decay factor can match SGD with momentum.\\n\\nOverall, I think the current version deserves to be read since it achieves basically as good generalization performance as SGD with only a simple modification. I increase the score to 6.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"1. Thank you for pointing out this question. theta^* may not be unique, because even for convex functions, there may exist more than one global minimizer, unless the function is strictly convex. So we change the first equation in Section 4 to be \\\\theta^* \\\\in \\\\argmin_{\\\\theta \\\\in X} \\\\sum_{t=1}^T f_t(\\\\theta), which reflects the fact that theta^* is just one of those global minimizers. Our proof still holds and it does not affect the convergence result in Theorem 4.2 and Corollary 4.4.\\n\\n2. Thank you for your suggestion and we have further mentioned the reason for the non-convergence in Adam in the revision.\\n\\n3. Thank you for your suggestion and we have moved the plot for different p values to the main text in the revision.\\n\\n4. Yes, trying to adapt the value of p could be an interesting idea and future work. Thank you for your suggestion.\\n\\n5. Thank you for your suggestion and we have fixed the typos in the revision.\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"1. We have added a further comparison with AdamW in the revision as you suggested. As you can see from the plots in the revision, AdamW improves the generalization performance compared with the original Adam but there are still generalization gaps left behind, at least in our test settings. In contrast, Padam could achieve basically as good generalization performance as SGD with momentum.\\n\\n2. Thank you for your suggestion to train longer. We have followed your suggestion and rerun our CIFAR10/CIFAR100 experiments to 200 epochs as you suggested to make sure each baseline is well optimized. In particular, in this way, we successfully reduced the training error/loss of Adam/Amsgrad to zero. We hope this would clear your concern.\\nFor Imagenet experiments, due to the large training set, 100 epochs is more than enough to converge, as can be seen in the original papers for VGGNet, ResNet. \\n\\nRegarding the concept of generalization and test performance, we think generalization error and test error are usually referring to the same thing, while generalization gap is the difference between training error and test error. When we talk about generalization gap, it does not require the training error to be zero. Yet we agree that we should make the training error zero in our experiments, because this is more aligned with the practice of deep learning. Thus we have done that by running the training process for more epochs as you suggested. We hope this addressed your question.\\n\\n3. From the convergence analysis we presented for convex optimization, Padam may not outperform Adam, as you said. Yet the convergence analysis in the paper at least guarantees that Padam\\u2019s convergence is also not worse, at least as good as Adam. In fact, in a follow-up work (https://arxiv.org/abs/1808.05671), they have proved that the convergence rate of Padam (when 0<p<=1/4) outperforms that of Adam for nonconvex optimization. 
This further backs up our experimental findings in this paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"1. You are right we indeed need one extra hyperparameter p to achieve the best performance. Yet we think it is more than just another \\u201chyperparameter\\u201d. Our work has a clear and strong motivation, as we described in the introduction, which is solving the very important generalization gap problem of adaptive gradient methods such as Adam. This is a long-standing problem, which prevents the good performance of Adam for training modern deep neural networks. Our solution proposed in this paper has well solved this problem.\\n\\nIn terms of how to choose p, we recommend doing binary search over the grid {1/4, 1/8, 1/16}. We have added this suggestion in the revision. \\n\\nAs for the cost of tuning one extra parameter, we do not think there will be much trouble. Think about Adam, which also introduced a new set of parameters (beta1,beta2) compared with its predecessors. Yet people did not complain about this for Adam because the choice of these parameters tends to be quite stable and simple. The same applies here: in the general case, 1/8 is a quite stable and effective choice of p as we have tested based on extensive experiments. Therefore the tuning process of p does not cost much.\\n\\n2. As you said, converging faster is just a plus, the key point is that we fix the generalization gap of adaptive gradient methods for modern deep neural networks. This means a lot, because for small-scale neural networks, people would like to use adaptive gradient methods such as Adam for fast convergence. Now for large-scale deep neural networks, people can also use adaptive gradient methods (Padam) since the generalization gap issue has been fixed. 
We believe our contribution in this paper is significant, especially for practitioners who used Adam a lot for small-scale neural networks, but had to give up using Adam for training modern large-scale neural networks, because of its unappealing generalization gap.\\n\\n3. We think it is hard to judge whether the contribution/novelty of a work is significant based on whether the modification is simple or not. In some sense, Adam is also a simple modification given RMSprop and Adagrad. Yet Adam still stands out given its empirical performance. In this work, we also try to deliver a useful, practical algorithm that could fix a key issue (generalization gap) in existing adaptive gradient methods and therefore save practitioners from the difficult choice of using SGD or Adam (Now they can all use Padam).\"}",
"{\"title\": \"simple generalization of AMSgrad/momentum, good test data/models, results not significant/compelling\", \"review\": \"The idea is simple and promising: generalize AMSgrad and momentum by hyperparameterizing the p=1/2 in the denominator of the ADAM term to be within [0,1/2], with 0 being the momentum case. It was good to see the experiments use non-MNIST data (e.g. ImageNet, Cifar) and reasonable CNN models (ResNet, VGG). However, the experimental evaluation is not convincing that this approach will lead to significant improvements in optimizing such modern models in practice.\\n\\nOne key concern and flaw in their experimental work, which was not addressed, nor even raised, by the authors as a potential issue, is that their PADAM approach has one extra hyperparameter (p) to tune in its grid search compared to the competitor optimizers (ADAM, AMSgrad, momentum). So, given it has one extra parameter, it is not at all surprising that there will be a setting for p that turns out to be a bit better than 0 or 1/2 for any given data/model setup and weight initialization/trajectory examined. So at most this paper represents an existence proof that a value of p other than 0 or 1/2 can be best. It does not provide any guidance on how to find p in a practical way that would lead to wide adoption of PADAM as a replacement for the established competitor optimizers. As Figures 2 and 3 show, momentum ends up converging to as good a solution as PADAM, and so it doesn't seem to matter in the end that PADAM (or ADAM) might seem to converge a bit faster at the very beginning.\\n\\nThis work might have some value in inspiring follow-on work that could try to make this approach practical, such as adapting p somehow during training to lead to truly significant speedups or better generalization. 
But as experimented and reported so far, this paper does not give readers any reason to switch over to this approach, and so the work is very limited in terms of any significance/impact. Given how simple the modification is, the novelty is also limited, and not sufficient relative to the low significance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The contribution is relatively minor and an important baseline is missing in comparison.\", \"review\": \"This paper proposes a small modification to current adaptive gradient methods by introducing a partial adaptive parameter, showing improved generalization performance in several image classification benchmarks.\", \"pros\": [\"The modification is simple and easy to implement.\", \"The proposed method shows improved performance across different datasets, including ImageNet.\"], \"cons\": [\"Missing an important baseline - AdamW (https://arxiv.org/pdf/1711.05101.pdf) which shows that Adam can generalize as well as SGD and retain faster training. Basically, the poor generalization performance of Adam is due to the incorrect implementation of weight decay.\", \"The experimental results for Adam are not convincing. It's well-known that Adam is good at training but might perform badly on the test data. However, Adam performs much worse than SGD in terms of training loss in all plots, which is contrary to my expectation. I suspect that Adam is not tuned well. One possible explanation is that the training budget is not enough; first-order methods typically require 200 epochs to converge. So I suggest the authors train the networks longer (make sure the training loss levels off before the first drop of the learning rate).\", \"Mixing the concepts of generalization and test performance. Note that generalization performance typically measures the gap between training and test error. To make the comparison fair, please make sure the training error is zero (I expect both training error and training loss should be close to 0 on CIFAR).\", \"In terms of optimization (convergence) performance, I cannot think of any reason that the proposed method would outperform Adam (or Amsgrad). 
The convergence analysis doesn't say anything meaningful.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Excellent contribution to solving an important problem\", \"review\": \"The authors propose a modification of existing adaptive variants of SGD to avoid problems with generalization. It is known that adaptive gradient algorithms such as Adam tend to find good parameter values more quickly initially, but in the later phases of training they stop making good progress due to necessarily low learning rates so SGD often outperforms them past a certain point. The suggested algorithm Padam achieves the best of both worlds, quick initial improvements and good performance in the later stages.\\n\\nThis is potentially a very significant contribution which could become the next state-of-the-art optimization method for deep learning. The paper is very clear and well-written, providing a good overview of existing approaches and explaining the specific issue it addresses. The authors have included the right amount of equations so that they provide the required details but do not obfuscate the explanations. The experiments consist of a comprehensive evaluation of Padam against the popular alternatives and show clear improvements over them.\\n\\nI have not evaluated the convergence theorem or its proof since this is not my area of expertise. One thing that stood out to me is that I don't see why theta* should be unique.\", \"some_minor_suggestions_for_improving_the_paper\": \"Towards the end of section 2 you mention a non-convergence issue of Adam. It would be useful to add a few sentences to explain exactly what the issue is.\\n\\nI would suggest moving the details of the grid search for p to the main text since many readers would be interested to know what's typically a good value for this parameter.\\n\\nWould it make sense to try to adapt the value of p, increasing it as the training progresses? 
Since that's an obvious extension, some comment about it would be useful.\", \"on_the_bottom_of_page_6\": \"\\\"Figures 1 plots\\\" -> \\\"Figure 1 plots\\\".\\n\\nMake sure to protect the proper names in the bibliography so that they are typeset starting with uppercase letters.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ryxepo0cFX | AntisymmetricRNN: A Dynamical System View on Recurrent Neural Networks | [
"Bo Chang",
"Minmin Chen",
"Eldad Haber",
"Ed H. Chi"
] | Recurrent neural networks have gained widespread use in modeling sequential data. Learning long-term dependencies using these models remains difficult though, due to exploding or vanishing gradients. In this paper, we draw connections between recurrent networks and ordinary differential equations. A special form of recurrent networks called the AntisymmetricRNN is proposed under this theoretical framework, which is able to capture long-term dependencies thanks to the stability property of its underlying differential equation. Existing approaches to improving RNN trainability often incur significant computation overhead. In comparison, AntisymmetricRNN achieves the same goal by design. We showcase the advantage of this new architecture through extensive simulations and experiments. AntisymmetricRNN exhibits much more predictable dynamics. It outperforms regular LSTM models on tasks requiring long-term memory and matches the performance on tasks where short-term dependencies dominate despite being much simpler. | [
"antisymmetricrnn",
"dynamical system view",
"dependencies",
"recurrent networks",
"tasks",
"recurrent neural networks",
"widespread use",
"sequential data",
"models"
] | https://openreview.net/pdf?id=ryxepo0cFX | https://openreview.net/forum?id=ryxepo0cFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Sklrd8Drx4",
"rJgebfDiTm",
"H1lcTWPj6m",
"ryghF-viT7",
"BJl23nE5h7",
"BklTkn-9hQ",
"BJexpMXwh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545070189291,
1542316535731,
1542316481859,
1542316419783,
1541192883724,
1541180389069,
1540989624039
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper768/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper768/Authors"
],
[
"ICLR.cc/2019/Conference/Paper768/Authors"
],
[
"ICLR.cc/2019/Conference/Paper768/Authors"
],
[
"ICLR.cc/2019/Conference/Paper768/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper768/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper768/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a novel idea with a compelling experimental study. Good paper, accept.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your comments and feedback.\\n\\n\\u201cConnection to prior work on orthogonal/unitary weights\\u201d? Thanks to the reviewer for bringing up another angle to connect this work to the prior work on orthogonal/unitary weights. While prior work reaches a unitary Jacobian by constraining the weight matrices to be orthogonal/unitary with linear activation (the condition breaks if nonlinear activation is used), a unitary Jacobian is reached in AntisymmetricRNN with the residual connection and by constraining f\\u2019 to have imaginary eigenvalues. Unitary/orthogonal matrices have eigenvalues that lie on the unit circle. Antisymmetric matrices have eigenvalues of the form i\\\\lambda where \\\\lambda is arbitrary. This implies that the dimension of the possible transformation is much larger (the whole imaginary axis). Therefore, antisymmetric networks are more expressive than unitary ones. There are three advantages of our approach: 1) our condition can be easily achieved with the antisymmetric weight parameterization, with no computational overhead; 2) our condition takes nonlinear activations into consideration; 3) we empirically demonstrate that our formulation is more expressive than constraining the weight matrix to be orthogonal/unitary, as shown in Table 1. Moreover, we expect the connections between RNNs and ODE theory to serve as a framework to inspire new RNN architectures in the future.\\n\\n\\u201cstore information in 'cycles'\\u201d. The behavior of the network in phase space is not repetitive. Similar manifolds are obtained when one looks at Lorenz systems, for example, which is a simplification of the weather system. The phase diagrams suggest that the network never blows up or decays, but it is important to note that it does not repeat itself and samples different points in space.\\n\\nWe thank the reviewer for suggesting the tasks with more categories and varying sequence lengths. 
It is definitely worth studying the performance of AntisymmetricRNN on tasks such as copy and addition in future work.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for the detailed comments and for pointing us to the latest work on the Spectral RNN and Fourier Recurrent Units. We have updated the paper accordingly.\\n\\n\\u201cdiffusion breaks the critical criterion\\u201d? We want to emphasize that the critical criterion describes a condition of stability of the underlying ODE w.r.t. initial values, while the diffusion term is necessary to stabilize the forward Euler discretization of the ODE. The AntisymmetricRNN does run into issues of vanishing gradient when the diffusion factor is set to large values, with order (1-c\\\\epsilon\\\\gamma)^t. Here \\\\epsilon is the step size of Euler discretization, \\\\gamma is the diffusion factor and c captures the derivatives of the hidden activation. Due to the small step size and the bounded derivatives, we find AntisymmetricRNN can tolerate a broad range of diffusion factors, as shown in the new Figure 2 added on the eigenvalues. \\n\\n\\u201cbegin the analysis with advanced RNN architectures that fit in this form\\u201d. We have added discussion of more advanced recurrent architectures that fit in the \\u201cresidual connection\\u201d form.\\n\\n\\u201cWhy sharing the weight matrix of gated units and recurrent units\\u201d? The weight matrix is shared between the gated units and recurrent units to satisfy the critical criterion. When the weight matrix is shared, the Jacobian matrix has the form of (D_1 + D_2) M, where D_1 and D_2 are diagonal matrices and M is an antisymmetric matrix. On the other hand, if the gated units and recurrent units use different weights, then the Jacobian matrix has the form of D_1 M_1 + D_2 M_2. Even if both M_1 and M_2 are antisymmetric, the eigenvalues of the Jacobian matrix can have real parts, thus breaking the criticality.\\n\\n\\u201cConduct experiment on language models and machine translation\\u201d? 
We conducted experiments on the pixel-by-pixel image tasks as the benchmark datasets for studying long-range dependence to demonstrate the effectiveness of the proposed method. We would like to study the performance of AntisymmetricRNN on language models and machine translation in future work.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your constructive feedback. The paper is updated with the suggested changes. In particular, a new Figure 2 visualizing the eigenvalues of the end-to-end Jacobian is included.\\n\\n\\u201cempirical verification of mitigation of vanishing/exploding gradient\\u201d? We included a new Figure 2 on the mean and standard deviation of the eigenvalues of the end-to-end Jacobian matrices for LSTMs and AntisymmetricRNNs with different diffusion constants. They are computed on the networks trained for the padded CIFAR10 dataset with time steps T in {100, 200, 400, 800}. As a quick summary, the eigenvalues for LSTMs quickly approach zero as time steps increase, indicating vanishing gradients as they back-propagate in time. This explains why LSTMs fail to train at all on this task. AntisymmetricRNNs with a broad range of diffusion, on the other hand, have eigenvalues centered around 1. It is worth noting, though, that as the diffusion constant increases to large values, AntisymmetricRNNs run into vanishing gradients as well. The diffusion constant plays an important role in striking a balance between the stability of discretization and non-vanishing gradients.\\n\\n\\u201chow do inputs affect the analysis\\u201d? Our analysis in Section 3 on the stability of an ODE is valid with inputs. In Equation 9, where we calculate the Jacobian matrix, the inputs only affect the diagonal matrix. As long as the diagonal matrix is bounded, which is true for derivatives of most activation functions, the Jacobian matrix still satisfies the critical criterion with inputs. Figure 5 in Appendix D shows the simulation with independent standard Gaussian input. Although the dynamics become slightly noisier compared with those in Figure 1, the trend remains the same.\\n\\n\\u201cthe motivation of using gates for the antisymmetric RNN\\u201d? 
We see AntisymmetricRNN and AntisymmetricRNN w/ gating as discretizations of two different ODEs under the same theoretical framework. Gating provides a mechanism for the underlying ODE to have more degrees of freedom and to capture more complex dynamics. Experimental results show that AntisymmetricRNN performs better on pMNIST while AntisymmetricRNN w/ gating works well on the other tasks. \\n\\n\\u201cexpressivity of AntisymmetricRNN\\u201d? The structural constraint on the weight matrix could limit the expressivity of AntisymmetricRNN. However, we do not observe performance degradation in our empirical studies. We hypothesize this is due to over-parametrization in these networks. An AntisymmetricRNN can outperform other RNN models with fewer model parameters.\\n\\n\\u201chow easily can an antisymmetric RNN forget information\\u201d. The diffusion term can be regarded as a mechanism for AntisymmetricRNNs to forget inputs in the past. As shown in the newly added Figure 2, when the diffusion constant increases, the eigenvalues of the end-to-end Jacobian decrease, resulting in shrinking gradients w.r.t. inputs in the past. In our current formulation, the diffusion factor is a constant across all the time steps and dimensions, but we could extend it to be time-dependent and/or data-dependent in future work. \\n\\n\\u201cbetter baseline in Cooijman et al. (2016)\\u201d? Thanks for the pointer. We added that in the footnote. We decided to keep the LSTM baseline reported by Arjovsky et al. (2016) because it has a higher accuracy on the more challenging pMNIST task than that in Cooijman et al. (2016) (92.6% vs 90.2%). We added the 92.6% accuracy in the footnote. Cooijman et al. (2016) is very relevant to our paper and we have added it to the related work section. It would be interesting to compare a \\u201cbatch-normalized AntisymmetricRNN\\u201d with the batch-normalized LSTM in future work.\"}",
"{\"title\": \"A novel RNN architecture\", \"review\": \"This paper introduces the AntisymmetricRNN, a novel RNN architecture that is motivated through an ordinary differential equation (ODE) framework. The authors consider a first-order ODE and the RNN that results from the discretization of this ODE. They show how the stability criterion with respect to perturbations of the initial state of an ODE results in a trainability criterion for the corresponding RNN. This criterion ensures that there are no exploding/vanishing gradients. The authors then propose a specific parametrization, relying on an antisymmetric matrix, to ensure that the stability/trainability criterion is respected. They also propose a gated variant of their architecture. The authors evaluate their proposal on pixel-by-pixel MNIST and CIFAR10, where they show they can outperform an LSTM.\\n\\nThe paper is well-written and pleasant to read. However, while the authors argue that their architecture mitigates vanishing/exploding gradients, there is no empirical verification of this claim. In particular, it would be nice to visualize how the gradient norm changes as the gradient is backpropagated in time, compare the gradient flows of the Antisymmetric RNN with an LSTM, or report the top eigenvalue of the Jacobian for the different models.\\n\\nIn addition, the analysis for the antisymmetric RNN assumes no input is given to the model. It is not clear to me how having an input at each timestep affects those results.\\n\\nA few more specific questions/remarks:\\n-\\tExperimentally, the authors find that the gated antisymmetric RNN sometimes outperforms its non-gated counterpart. However, one motivation for the gate mechanism is to better control the gradient flow. 
It is unclear to me what the motivation is for using gates in the antisymmetric RNN.\\n-\\tAs the proposed RNN relies on an antisymmetric matrix to represent the hidden-to-hidden transition matrix, which has fewer degrees of freedom, can we expect the antisymmetric RNN to have the same expressivity as a standard RNN? In particular, how easily can an antisymmetric RNN forget information?\\n-\\tOn the pixel-by-pixel MNIST, the authors report the Arjovsky results for the LSTM baseline.\\nNote that some papers reported better performance for the LSTM baseline, such as Recurrent Batch Norm (Cooijman et al., 2016).\\n\\nThe Antisymmetric RNN appears to be a well-motivated architecture and seems to outperform previous RNN variants that also aim at solving the exploding/vanishing gradient problem. Overall I lean toward acceptance, although I do think that adding an experiment explicitly showing that the gradient does not explode/vanish would strengthen the paper. \\n\\n\\n* Revision\\n\\nThanks for your response; the paper's new version addresses my main concerns. I appreciate the new experiment looking at the eigenvalues of the end-to-end Jacobian, which clearly shows the advantage of the AntisymmetricRNN.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good paper with original work, experiments could be improved\", \"review\": \"In this paper, the authors provide a new approach to analyze the behavior of\\nRNNs by relating RNNs to ODE numerical schemes. They provide an analysis of\\nthe stability of the forward Euler scheme and propose an RNN architecture called\\nAntisymmetricRNN to solve the gradient exploding/vanishing problem. \\n\\nThe paper is well presented, although more recent works in this direction\\nshould be cited and discussed. Also, some important issues are omitted and not\\nexplained. \\nFor example, the analysis begins with \\\"RNNs with feedback\\\" rather than the vanilla\\nRNN, since the vanilla RNN does not have the residual structure of eq(3). The\\nauthors should note that clearly in the paper. \\n\\nAlthough there are previous works relating ResNets with ODEs, such as [1],\\nthis paper is original as it is the first work that relates the stability of\\nan ODE numerical scheme with the gradient vanishing/exploding issues in RNNs. \\n\\nIn general, this paper provides a novel approach to analyze the gradient\\nvanishing/exploding issue in RNNs and provides applicable solutions, thus I\\nrecommend accepting it.\", \"detailed_comments\": \"The gradient exploding/vanishing issue has been extensively studied in recent\\nyears and more recent results should be discussed in related works.\\nThe authors mentioned that existing methods \\\"come with significant computational\\noverhead and reportedly hinder representation power of these models\\\". However,\\nthis is not true for [2], which achieves full expressive power with\\nno overhead. \\nIt is true that \\\"orthogonal weight matrices alone does not prevent exploding\\nand vanishing gradients\\\", thus there are architectural approaches that can\\nbound the gradient norm by constants [3]. \\n\\nThe authors argued that the critical criterion is important in preserving the\\ngradient norm. 
However, they later added a diffusion term to maintain the\\nstability of the forward Euler method. Thus the gradient will vanish\\nexponentially w.r.t. time step t as (1-\\\\gamma)^t. Could the authors provide\\na more detailed analysis of this issue? \\n\\nSince eq(3) cannot be regarded as a vanilla RNN, it would be better to begin the\\nanalysis with advanced RNN architectures that fit this form, such as\\nResidual RNNs, Statistical Recurrent Units and Fourier Recurrent Units. \\n\\nWhy share the weight matrix of gated units and recurrent units? Is there any\\nreason to do this other than reducing the number of parameters?\\n\\nMore experiments should be conducted on real applications of RNNs, such as\\nlanguage modeling or machine translation. \\n\\n\\n[1] Yiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential\\nequations. In ICML, pp. 3276\\u20133285, 2018. \\n\\n[2] Zhang, Jiong, Qi Lei, and Inderjit S. Dhillon. \\\"Stabilizing Gradients for\\nDeep Neural Networks via Efficient SVD Parameterization.\\\" In ICML, pp.\\n5806-5814, 2018.\\n\\n[3] Zhang, Jiong, Yibo Lin, Zhao Song, and Inderjit S. Dhillon. \\\"Learning Long\\nTerm Dependencies via Fourier Recurrent Units.\\\" In ICML, pp. 5815-5823, 2018.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"capacity of long-term storage?\", \"review\": \"This is an interesting paper which proposes a novel angle on the problem of learning long-term dependencies in recurrent nets. The authors argue that most of the action should be in the imaginary part of the eigenvalues of the Jacobian J=F' of the new_state = old_state + epsilon F(old_state, input) incremental type of recurrence, while the real part should be slightly negative. If they were 0, the discrete time updates would still not be stable, so slightly negative (which leads to exponential loss of information) leads to stability while making it possible for the information decay to be pretty slow. They also propose a gated variant which sometimes works better.\\n\\nThis is similar to earlier work based on orthogonal or unitary Jacobians of new_state = H(old_state, input) updates, since the Jacobian of H(old_state, input) = old_state + epsilon F(old_state, input) is I + epsilon F'. In this light, it is not clear why the proposed architecture would be better than the partially orthogonal / unitary variants previously proposed. My general concern with this type of architecture is that it can store information in 'cycles' (like in fig 1g, 1h), but this is a pretty strong constraint. For example, in the experiments, the authors apparently did not vary the length of the sequences (which would break the trick of using periodic attractors to store information). In practical applications this is very important. Also, all of the experiments are classification tasks with few categories (10), i.e., requiring only storing 4 bits of information. Memorization tasks that require storing many more bits, and with randomly varying sequence lengths, would better test the abilities of the proposed architecture.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
S1GkToR5tm | Discriminator Rejection Sampling | [
"Samaneh Azadi",
"Catherine Olsson",
"Trevor Darrell",
"Ian Goodfellow",
"Augustus Odena"
] | We propose a rejection sampling scheme using the discriminator of a GAN to
approximately correct errors in the GAN generator distribution. We show that
under quite strict assumptions, this will allow us to recover the data distribution
exactly. We then examine where those strict assumptions break down and design a
practical algorithm—called Discriminator Rejection Sampling (DRS)—that can be
used on real data-sets. Finally, we demonstrate the efficacy of DRS on a mixture of
Gaussians and on the state of the art SAGAN model. On ImageNet, we train an
improved baseline that increases the best published Inception Score from 52.52 to
62.36 and reduces the Frechet Inception Distance from 18.65 to 14.79. We then use
DRS to further improve on this baseline, improving the Inception Score to 76.08
and the FID to 13.75. | [
"GANs",
"rejection sampling"
] | https://openreview.net/pdf?id=S1GkToR5tm | https://openreview.net/forum?id=S1GkToR5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJekX6cVV4",
"BygDHYUtJN",
"BkeHrKU1kE",
"r1lSsOUk1E",
"BkgY088kJV",
"Ske86fcY0Q",
"BJgZuI6m0m",
"SyxH1nd7R7",
"r1e5iSqf6X",
"r1gYHA1-a7",
"SkeXZRk-Tm",
"SylUYa1bpQ",
"SklRqc0yTQ",
"rJgNPOjOnX"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1549212951166,
1544280383448,
1543625021000,
1543624861314,
1543624400916,
1543246526438,
1542866537380,
1542847452701,
1541739938189,
1541631553034,
1541631482798,
1541631358495,
1541560981662,
1541089372383
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper767/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/Authors"
],
[
"ICLR.cc/2019/Conference/Paper767/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper767/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"Enjoyed reading this paper. Even implemented it for my own GAN use case (on a different dataset than the ones used in this paper) and confirm it works well!\\n\\nP.S. There is an earlier related work on Variational Rejection Sampling [1], which uses rejection sampling for improving samples from the variational posterior in variational autoencoder models, using ideas similar to this paper. The difference with this work is that rejection sampling is performed in the latent space, whereas this paper focusses on the observed space. So depending on your application, it might be beneficial to use either approach!\\n\\n[1] Variational Rejection Sampling \\nAditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon\\nAISTATS 2018\\nhttps://arxiv.org/abs/1804.01712\", \"title\": \"Nice and effective trick!\"}",
"{\"metareview\": \"The paper proposes a discriminator dependent rejection sampling scheme for improving the quality of samples from a trained GAN. The paper is clearly written, presents an interesting idea and the authors extended and improved the experimental analyses as suggested by the reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Improving GANs by rejection sampling\"}",
"{\"title\": \"Re: simpler rejection scheme\", \"comment\": \"Please see this comment: https://openreview.net/forum?id=S1GkToR5tm&noteId=SyxH1nd7R7 or the updated PDF for experimental results on (what we think is) the simpler rejection scheme you mention.\\n\\nPlease also let us know if there's anything else you think we can do to improve the paper quality.\"}",
"{\"title\": \"Re: comparisons w/ heuristic rejection schemes\", \"comment\": \"Thanks very much for the review, please see this comment: https://openreview.net/forum?id=S1GkToR5tm&noteId=SyxH1nd7R7 for some ablation experiments and comparisons with heuristic rejection schemes.\\n\\nLet us know if there's anything else you think we can do to improve the work.\"}",
"{\"title\": \"Thanks - here's our take on [1]\", \"comment\": \"Thanks for bringing [1] to our attention; we hadn't seen it.\\nWe'll first summarize our understanding of the algorithm from [1] (which we'll call IR for 'Importance Resampling')\\nand then we'll discuss differences.\\n\\nIR somehow computes importance weights for a set of samples using the Discriminator/Critic from a trained GAN.\", \"a_single_sample_is_drawn_as_follows\": \"N samples from the trained GAN are prepared and importance weights are computed.\\nA single one of the samples is then 'accepted' using a categorical distribution over N categories parameterized by the importance weights.\\n\\nThe differences (between IR and DRS and between our scientific evaluation and theirs) are:\\n\\n1. [1] don't continue to train D to approximate D^*.\\nWe theoretically motivate the importance of this, and we also show (in the new experiments we ran for the rebuttal) that this is important empirically.\\nThis difference may explain the small improvement given by IR (see below).\\n\\n2. [1] sample one image at a time given a set of N candidates instead of the probabilistic sampling as in DRS.\\nThat is, their acceptance ratio is controlled by N.\\nI don't think that this procedure will recover p_data given finite N?\\nIt's hard to say for sure without knowing more detail about how they are getting the importance weights.\\n\\n3. We add the 'gamma trick', which you already noted is crucial to making the algorithm work in practice.\\nImagine that the weights of n-1 samples are tiny (e.g. 
e-10) and the weight of one sample is close to 1.\\nNormalizing all of the samples by \\\\sum{w_i} does not make any difference in the weights and thus this importance re-sampling would not do much.\\nThe 'gamma trick' changes the acceptance probabilities such that they cover the whole range of 0 to 1 scores.\\nThis effect was also illustrated in Figure 2-A.\\nThis results in a more efficient sampling scheme when acceptance probabilities for most of the samples are very small,\\nwhich happened in our ImageNet experiment (purple histogram of Figure 2-A).\\n\\n4. [1] don't really provide evidence that IR yields quantitative improvement.\\nIn the supplementary material, they show a single run on which the Inception score is changed from 7.28 to 7.42, an improvement of less than 2%.\\nOur work shows that DRS yields improvements of (61.44 / 52.34 ~ 17%) and (76.08 / 62.36 ~ 22%) respectively on the baseline and improved\\nversions of SAGAN[2] we used for experiments.\\nApart from [3] (a concurrent submission to ICLR), these results are the best achieved in the literature.\\nWe think it's reasonable to expect that DRS could improve the results from [3] as well.\\n\\n5. 
[1] seem to compare IR to a weak baseline in the experiment from the Supplementary Material.\\nThis experiment is (presumably) conducted on the unsupervised CIFAR-10 task.\\n7.42 is not only far from the state of the art at the time [1] was written (this is important because it gives evidence about whether IR can be 'stacked'\\nwith other improvements), but it's less than the reported performance of the main method from [1], which is given as 7.47 +/- 0.10.\\nThis is strange, because it suggests that the baseline for this experiment was not trained as well as the model in the main text (its performance of 7.28 is nearly\\n2 standard deviations worse).\\nFootnote 1 in the main text says 'We used a less well-trained model and picked our samples based on the importance weights to highlight the difference.',\\nbut it's unclear if this was also intentionally done in the supplementary material.\\n\\n6. [1] don't compute the FID of the accepted samples, so there is no way to know if diversity has been sacrificed for sample quality.\\nWe compute the FID and show that it has improved after DRS.\\n\\n7. [1] don't provide any theoretical analysis of IR.\\n\\n8. [1] don't include any illustrative toy experiments that suggest why resampling might work.\\nWe propose and give support (using the mixture of gaussians experiment) for the hypothesis that it's easier for the\\ndiscriminator to tell that certain regions of X are 'bad' than it is for the Generator to avoid spitting out samples in that region.\", \"ps\": \"We don't mean to be overly negative about [1].\\nWe understand that IR was not the primary contribution of that work.\\nWe just wish to emphasize the scope of the difference between the fraction of that work focusing on IR and our work.\", \"pps\": \"We saw this message after the deadline to modify the PDF.\\nWe will of course add this discussion to the final copy of the PDF when the time comes.\\n\\n[1] Chi-square generative adversarial network. 
In ICML, 2018.\\n[2] Self-Attention GAN\\n[3] Large Scale GAN Training for High Fidelity Natural Image Synthesis\"}",
"{\"title\": \"Comments\", \"comment\": \"Thanks for the interesting applications, which addressed my main concern.\\n\\nAlso, I recently found that [1] has also mentioned a similar resampling idea, so a discussion of the relation should be added to the manuscript to make the difference clear.\\n\\n[1] C. Tao, L. Chen, R. Henao, J. Feng, and L. Carin. Chi-square generative adversarial network. In ICML, 2018.\"}",
"{\"title\": \"The paper has been updated to reference these results\", \"comment\": \"We have also added plots corresponding to the above values\"}",
"{\"title\": \"We have run the requested comparisons\", \"comment\": \"Reviewers 1 and 2 both mentioned that they would like to see comparisons to certain baselines.\\nWe have now performed such comparisons.\\nWe are working on adding them to the PDF, but I will discuss the results here in the meantime.\", \"we_evaluated_4_different_rejection_sampling_schemes_on_the_mixture_of_gaussians_dataset\": \"(1) Always reject samples falling below a hard threshold and DO NOT train the Discriminator to 'convergence'.\\n\\n(2) Always reject samples falling below a hard threshold and train the Discriminator to convergence.\\n\\n(3) Use probabilistic sampling as in eq 8 and DO NOT train the Discriminator to convergence.\\n\\n(4) Our original DRS algorithm, in which we use probabilistic sampling and train the Discriminator to convergence.\\n\\nIn (1) and (2), we were careful to set the hard threshold so that the actual acceptance rate was the same as in (3) and (4).\", \"broadly_speaking\": \"4 performs best\\n3 performs OK but yields less 'good samples' than 4.\\n2 yields the same number of 'good samples' as 3, but completely fails to sample from 5 of the 25 modes.\\n1 actually yields the most 'good samples' for the modes it hits, but it only hits 4 modes!\\n\\nThese results show that\\na) continuing to train D so that it can approximate D^* (which we have already motivated theoretically) is helpful in practice. \\nb) performing sampling as in eq 8 (which we also motivated theoretically) is helpful in practice. 
\\n\\nBelow we provide, for each method, the number of samples within 1, 2, 3 and 4 std deviations and the number of modes hit.\\nFor reference, we also compute these statistics for the ground truth distribution and the unfiltered samples from the GAN.\\n\\nWe would have liked to perform the same analysis on SAGAN, but we currently don't have access to resources that would\\nallow us to do this before the response deadline.\\n\\nDRS ABLATION STUDY\\nGROUND TRUTH\", \"centroid_coverage\": \"25\", \"within_1_std\": \"0.35277582572\", \"within_2_std\": \"0.657589599438\", \"within_3_std\": \"0.817463106114\", \"within_4_std\": \"0.897487702038\"}",
"{\"title\": \"Good paper!\", \"review\": \"This paper proposes a rejection sampling algorithm for sampling from the GAN generator. Authors establish a very clear connection between the optimal GAN discriminator and the rejection sampling acceptance probability. Then they explain very clearly that in practice the connection is not exact, and propose a practical algorithm.\\n\\nExperimental results suggest that the proposed algorithm helps the increase the accuracy of the generator, measured in terms of inception score and Frechet inception distance. \\n\\nIt would be interesting though to see if the proposed algorithm buys anything over a trivial rejection scheme such as looking at the discriminator values and rejecting the samples if they fall below a certain threshold. This being said, I do understand that the proposed practical acceptance ratio in equation (8) is 'close' to the theoretically justified acceptance ratio. Since in practice the learnt discriminator is not exactly the ideal discriminator D*(x), I think it is super okay to add a constant and optimize it on a validation set. (Equation (7) is off anyways since in practice the things (e.g. the discriminator) are not ideal). But again, I do think it would make the paper much stronger to compare equation (8) with some other heuristic based rejection schemes.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"FYI\", \"comment\": \"We have written individual replies to Reviews 2 and 3 (these are the only reviews at present).\\n\\nWe have also update the PDF to include a new figure (fig 6) on the effect of gamma. \\n\\nWe are working on making more updates to the draft for purposes of clarity.\"}",
"{\"title\": \"Thanks for the review!\", \"comment\": \"We thank the reviewer for his/her time and feedback. We appreciate the kind words relating to the clarity and comprehensiveness of our submission, and hope to address any remaining concerns the reviewer has here.\", \"other_applications\": \"(a) Suppose we\\u2019re designing molecules for drug discovery purposes using a generative model. \\nAt some point, we will have to physically test the molecules that we have designed, which could be costly.\\n If the discriminator can throw out some obviously unrealistic molecule designs, this will save us money and time.\\n(b) For text generation applications, a nonsensical generated sentence in a dialog system could be rejected by the discriminator, reducing the frequency of embarrassing mistakes. \\n(c) In RL applications, if we are predicting future states with a generative model, we could use this technique to throw out silly predictions, reducing the risk of taking a silly action predicated on those predictions. \\n(d) More generally, you could use DRS on models that are not GANs.\\n\\nADDRESSING D* ISSUE\\n\\nYou\\u2019re right about this - we will change the wording. We don\\u2019t do anything to *fix* the problem that we can\\u2019t actually compute D*, we just show that you don\\u2019t need to precisely recover D* to get good results. The first paragraph on page 5 speculates on why this might be so, and figures 4 and 5 provide evidence for this speculation.\", \"regarding_gamma\": \"We agree that gamma is an important hyperparameter, because it modulates the acceptance rate. \\nWe have already made the figure you propose and have updated the PDF to include it. It is now figure 6. \\nPlease let us know if there are other experiments that you think would\\nimprove the quality of the work.\"}",
"{\"title\": \"Thanks for the review!\", \"comment\": \"Thanks very much for the review.\\nWe think that there have been two misunderstandings here, one about the Gaussian Mixture experiment and one about the purpose of the quantity F_hat(x).\\nThese are our fault; we should have made the paper more clear and we are modifying the draft to do so.\\nIn the meantime, we will address both issues here. We use > for quotes.\", \"gaussian_mixture_experiment\": \"> - GAN setting: 10K examples are generated and reported in figure 3?\\nThis much is true.\\n\\n\\n> - DRS setting: 10K examples are generated, and submitted to algorithm in figure 1. For each batch, a line search sets gamma so that 95% of the examples are accepted. Thus only 9.5K are reported in figure 3.\\nThis part is not true.\\nYou probably got confused by the line 'We generate 10,000 samples from the generator with and without DRS.' which we agree is unclear. \\n\\nFirst, we generate as many samples as needed to yield 10K acceptances, so both plots have 10k dots on them.\\n\\nSecond, there is no line search.\\nEach example is given an acceptance probability p that is generated from substituting F_hat from equation 8 for F in equation 6.\\nThen, a pseudo-random number in [0,1] is compared with p to determine acceptance. \\nThus, for any given batch, the number of examples accepted is non-deterministic.\\nWe think that this point also relates to the misunderstanding regarding the purpose of F_hat.\\n\\nThird, gamma is subtracted from F.\\nSo setting gamma equal to the 95th %-ile value of F means that an example where F(x) is at the 95th %-ile will have a 50% chance of being accepted, because\\n1 / (1 + e^(-F_hat(x))) = 1 / (1 + e^0) = 1 / 2 in this case. \\nThe result is that around 23% of samples drawn from the generator made it into the final DRS plot, which means we had to draw a little less than 50k samples from the generator. 
\\n\\n\\n> If this is my understanding, then the comparison in Figure 3 in unfair, as DRS is allowed to pick and choose.\\nWe're unsure what you mean here.\\nIt's true in some sense that DRS is allowed to pick and choose, but from our perspective this is part of the definition of rejection sampling?\\nThe generator can't figure out how to stop yielding bad samples, but the discriminator can tell which samples are bad, so we can\\nthrow those out and get a distribution closer to the ground truth distribution at the cost of having to generate extra samples from the generator.\", \"purpose_of_f_hat\": \"> Let's jump to equation (8): compared to a simple use of the discriminator for rejection, it adds the term under the log\\nWe don't think this is correct - the log already exists and we just add the gamma and epsilon terms.\\nThe discussion after eq 5 shows that the acceptance probability p(x) is exp(D_tilde^*(x) - D_tilde^*(x^*)).\\nThe tildes are important, because they mean that we are operating not on the sigmoid output of D but on the logit that is passed to the sigmoid output.\\nThen we ask what F(x) would have to be s.t. 1 / (1 + e^(-F(x))) = p(x).\\nThis results in equation 7, *which already has the log term*.\\nThe only difference between F_hat and F is that we introduce the epsilon for numerical stability and the gamma to modulate the acceptance probability.\\n\\n> First order Taylor expansion of...\\nWhat you say here is true, but we are not thresholding. \\nWe think this is the root of the misunderstanding.\\nWe don't consider the hard thresholding algorithm here because it might deterministically reject certain samples for which D^* is low,\\nwhich means that we would never be able to actually draw samples from p_d, even in the idealized setting of section 3.1\\n\\nPlease let us know if this response answers all of your questions. \\nWe are happy to expand.\"}",
"{\"title\": \"Very well written paper, with excellent results, but experiments may be unfair, and a much simpler rejection scheme may work equally well.\", \"review\": \"his paper assumes that, in a GAN, the generator is not perfect and some information is left in the discriminator, so that it can be used to 'reject' some of the 'fake' examples produced by the generator.\\n\\nThe introduction, problem statement and justification for rejection sampling are excellent, with a level of clarity that makes it understandable by non expert readers, and a wittiness that makes the paper fun to read. I assume this work is novel: the reviewer is more an expert in rejection than in GANs, and is aware how few publications rely on rejection.\\n\\nHowever, the authors fail to compare their algorithm to a much simpler rejection scheme, and a revised version should discuss this issue.\\nLet's jump to equation (8): compared to a simple use of the dicriminator for rejection, it adds the term under the log.\\nThe basic rejection equation would read F(x) = D*(x) - gamma and one would adjust the threshold gamma to obtain the desired operating point. I am wondering why no comparison is provided with basic rejection? \\n\\nLet me try to understand the Gaussian mixture experiment, as the description is ambiguous:\\n- GAN setting: 10K examples are generated and reported in figure 3?\\n- DRS setting: 10K examples are generated, and submitted to algorithm in figure 1. For each batch, a line search sets gamma so that 95% of the examples are accepted. 
Thus only 9.5K are reported in figure 3.\\n- What about basic rejection using F(x) = D*(x) - gamma: how does it compare to DRS at the same 95% accept?\\n\\nIf this is my understanding, then the comparison in Figure 3 is unfair, as DRS is allowed to pick and choose.\\nFor completeness, basic rejection should also be added.\\n\\nGoing back to Eq.(8), one realizes that the difference between DRS rejection and basic rejection may be negligible.\\nA first-order Taylor expansion of log(1-x), which would apply to the case where the rejection probability is small, yields:\\nF(x) = (D*(x) - D*_M) + exp(D*(x) - D*_M) \\n\\nx + exp(x) is monotonic, so thresholding over it is the same as thresholding over x: back to basic rejection!\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A post-processing method to filter \\u2018good\\u2019 generated samples for GANs\", \"review\": \"This paper proposed a post-processing rejection sampling scheme for GANs, named Discriminator Rejection Sampling (DRS), to help filter \\u2018good\\u2019 samples from GANs\\u2019 generator. More specifically, after training GANs\\u2019 generator and discriminator are fixed; GANs\\u2019 discriminator is further exploited to design a rejection sampler, which is used to reject the \\u2018bad\\u2019 samples generated from the fixed generator; accordingly, the accepted generated samples have good quality (better IS and FID results). Experiments of SAGAN model on GMM toys and ImageNet dataset show that DRS helps further increases the IS and reduces the FID.\\n\\nThe paper is easy to follow, and the experimental results are convincing. However, I am curious about the follow questions.\\n\\n(1)\\tBesides helping generate better samples, could you list several other applications where the proposed technique is useful? \\n\\n(2)\\tIn the last paragraph of Page 4, I don\\u2019t think the presented Discriminator Rejection Sampling \\u201caddresses\\u201d the issues in Sec 3.2, especially the first paragraph of Page 5.\\n\\n(3)\\tThe hyperparameter gamma in Eq. (8) is of vital importance for the proposed DRS. Actually, it is believed the key to determining whether DRS works or not. Detailed analysis/experiments about hyperparameter gamma are considered missing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
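The probabilistic acceptance rule debated in the thread above (the F of eq. (7)/(8) with an epsilon for numerical stability, gamma set at the 95th percentile of F, and acceptance via sigmoid rather than a hard cut) can be sketched in a few lines. This is a hypothetical plain-Python illustration, not the authors' code: `logit` stands in for the pre-sigmoid discriminator output D~(x), `logit_max` for the estimated maximum D~(x*), and the toy Gaussian "logits" are invented for the example.

```python
import math
import random

def drs_f(logit, logit_max, eps=1e-6):
    """F(x) before the gamma shift, following the eq. (7)/(8) form above:
    F = D~(x) - D~_M - log(1 - exp(D~(x) - D~_M - eps))."""
    d = logit - logit_max
    return d - math.log(1.0 - math.exp(d - eps))

def drs_accept(logit, logit_max, gamma, rng):
    """Accept with probability sigmoid(F - gamma); no hard threshold, so
    even low-scoring samples keep a nonzero chance of acceptance."""
    p = 1.0 / (1.0 + math.exp(-(drs_f(logit, logit_max) - gamma)))
    return rng.random() < p

# Toy run: gamma at the 95th percentile of F, so a sample exactly at that
# percentile is accepted with probability sigmoid(0) = 1/2.
rng = random.Random(0)
logits = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
m = max(logits)
f_sorted = sorted(drs_f(x, m) for x in logits)
gamma = f_sorted[int(0.95 * len(f_sorted))]
kept = [x for x in logits if drs_accept(x, m, gamma, rng)]
```

As the thread notes, this accepts well under half of the draws, so extra generator samples must be drawn to reach a fixed number of acceptances.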
|
r1My6sR9tX | Unsupervised Learning via Meta-Learning | [
"Kyle Hsu",
"Sergey Levine",
"Chelsea Finn"
] | A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics. Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data. To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks. Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods. | [
"unsupervised learning",
"meta-learning"
] | https://openreview.net/pdf?id=r1My6sR9tX | https://openreview.net/forum?id=r1My6sR9tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJe59a_XeE",
"Bye9Tru00X",
"HyeT4Mp5Rm",
"BkemmQw9Am",
"HJlNixAFAX",
"rkgGnSmOam",
"HJlTH6DZT7",
"Skxii3P-a7",
"rkxrw3v-6X",
"rkels4tJp7",
"SklSFCJyaQ",
"B1lYZ6Ht37",
"HJldF4T8hQ"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544945042293,
1543566785885,
1543324213460,
1543299867053,
1543262363916,
1542104490469,
1541664068973,
1541663906658,
1541663837154,
1541538967782,
1541500540818,
1541131520776,
1540965503782
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper766/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"ICLR.cc/2019/Conference/Paper766/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"ICLR.cc/2019/Conference/Paper766/Authors"
],
[
"ICLR.cc/2019/Conference/Paper766/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper766/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper766/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Reviewers largely agree that the paper proposes a novel and interesting idea for unsupervised learning through meta learning and the empirical evaluation does a convincing job in demonstrating its effectiveness. There were some concerns on clarity/readability of the paper which seem to have been addressed by the authors. I recommend acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting idea with thorough empirical evaluation\"}",
"{\"title\": \"Thank you for your interest.\", \"comment\": \"Code for producing part of the results has been released, but for anonymity reasons, we will not link to it here. We will update the code and add a link to the code in the paper and here after the review process is complete.\"}",
"{\"comment\": \"Unsupervised meta-learning seems interesting and a new setting for few-shot learning in this area.\\n\\nAlso, this might be an interesting direction to follow.\\nWill the code be released after the decision?\", \"title\": \"Interesting work and a new few-shot setting\"}",
"{\"title\": \"Thank you for your review. We appreciate your thorough and accurate summary as well as your feedback.\", \"comment\": \"We have addressed your comments on presentation by specifying U(P) as the uniform distribution and revising the explanation in \\u201ctask generation for meta-learning\\u201d.\\n\\nWe agree that the entire pipeline consists of several hyperparameters, which we chose and fixed based on prior work and heuristics (Section 4.1, Appendix E). We found it to be straightforward to select these parameter values, suggesting that the algorithm is not particularly sensitive to their values.\"}",
"{\"title\": \"nice and simple idea with well carried and thorough empirical experiments\", \"review\": [\"summary\", \"The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. SoTA meta-learning frameworks (MAML and ProtoNet) typically require rather large labeled datasets and hand-specified task distributions to define a sequence of tasks on which the algorithms are trained on. This paper proposes to unsupervised generate the sequence of tasks using multiple partitions as pseudo labels via k-means and other clustering variants on the embedding space. Empirical experiments show the benefit of the meta-learning on the M-way K-shot image classification tasks. Also, \\u201csampling a partition from U(P)\\u201d on page 4, the U(P) notation seems not defined.\", \"Evaluation\", \"The writing and presentation of the paper are in general well carried, except some part seems a little unclear, taking me quite a while to understand. For example, in the \\u201ctask generation for meta-learning\\u201d paragraph on page 3, the definition of task-specific labels (l_n) is puzzling to me at first glance.\", \"The proposed task construction in an unsupervised manner for the meta-learning framework is indeed simple and novel.\", \"The empirical experiments are thorough and well-conducted with good justifications. The benefit of unsupervised meta-learning compared to simply supervised learning on the few-shot downstream tasks is shown in Table 1 and 2; Different embedding techniques have also been studied; the results of Oracle upper bound are also presented; task construction ablation is also shown.\", \"Unsupervised meta-learning consists of multiple components such as learning embedding space, clustering methods, and various choices within the meta-learning frameworks. 
Together, these introduce a lot of hyper-parameters, and the choices can seem somewhat heuristic.\", \"Conclusion\", \"In general, I like this paper, especially the empirical analysis section. Therefore, I vote for accepting this paper.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Paper updated to address reviewer feedback\", \"comment\": [\"We have updated the paper with the following changes to address reviewer comments:\", \"combined sections 2.1 and 2.3, and sections 2.2 and 2.4 (R2)\", \"reduced redundancy in the exposition (R2)\", \"added more mathematical details to section 2 (R2)\", \"added comparison to clustering on pixels (R2)\", \"added further discussion of limitations of our method in the discussion (R2)\", \"provided more motivation and justification for our approach in section 2.2 (R1, R2)\", \"improved the clarity of the problem statement and its motivation in sections 1 and 2.1 (R1, R2)\", \"emphasized throughout the text that the downstream tasks we evaluate on at meta-test time are standard benchmark few-shot learning tasks (R1, R2)\", \"added a brief discussion on sampling clusters in section 2.2 (R3)\", \"added a set of experiments based on Prototypical Networks (R3)\", \"We would appreciate it if the reviewers could take a look at our changes and additional results, and let us know if they would like to either revise their rating of the paper, or request additional changes that would alleviate their concerns. Thank you!\"]}",
"{\"title\": \"Thank you for your insightful comments and feedback!\", \"comment\": \"\\u201cAlthough only MAML was considered as the meta-learning algorithm, it would have been nice to consider one or more candidates to show that the proposed framework is generalizable. Still, I think the experiment is persuasive enough to expect that the algorithm would working well at practice.\\u201d\\nTo address your suggestion, we have updated the paper to add results (in Tables 1, 2, and 3) obtained with Prototypical Networks [1] as the meta-learner instead of MAML. We find that the improvement of CACTUs over the comparison methods still generally holds, with a few exceptions. We hypothesize the exceptions are due to a dependence of ProtoNets performance on matching train shot with test shot, i.e. on providing the meta-learner with tasks that have supervision commensurate to that expected in held-out tasks. We elaborate on this in the updated paper (\\u201cBenefit of Meta-Learning\\u201d in Section 4.2).\\n\\n\\u201cAlthough the problem of interest is non-trivial and important, the proposed algorithm can be seen as just a naive combination of clustering and meta-learning. It would have been great to see some clustering algorithm that was specifically designed for this type of problem.\\u201d\\nThe reviewer is correct in that some more sophisticated clustering methods may be better-suited for our method. We found that this simple procedure (with hyperparameter k fixed) worked surprisingly well across datasets and task structures, and did not see a need to make the method more complex. \\n\\n\\u201cEspecially, the proposed CACTUs algorithm relies on sampling without replacement from the clustered dataset in order to enforce \\\"balance\\\" of the labels among the generated task. 
This might be leading to suboptimal results since the popularity of each cluster (i.e., how much it represents the whole dataset) is not considered.\\u201d\\nIf we view k-means as the hard limit of a mixture of Gaussians and decompose the joint embedding-cluster distribution p(z,c)=p(c)p(z|c), one way the reviewer\\u2019s proposal can be realized is sampling from p(c). However, as we mention in Section 5, the datasets we consider (Omniglot and miniImageNet) are evenly balanced amongst classes, and we fear that comparing between sampling from U(c) and p(c) on these datasets may be misleading, as in general datasets can be heavily imbalanced. We leave a thorough evaluation of the question of how to better sample clusters for tasks to future work, but there are a couple of hints we can already think about. First, consider a toy imbalanced dataset for which, after clustering, there is one heavily populated cluster and four small ones. Because we have no guarantees about the meta-test distribution, it is \\u201csafer\\u201d for the meta-learner to learn to distinguish between all clusters equally well than to have the popular cluster dominate the meta-training task distribution. Second, prior work [2] also considers a similar question (\\u201cTrivial parametrization\\u201d, page 6), and concludes that uniform sampling over clusters is more suitable.\\n\\n\\u201cCACTUs seems to be relying on having random scaling of the k-means algorithm in order to induce diversity on the set of partitions being generated. I am a bit skeptical about the effectiveness of such a method for diversity. If this holds, it would be interesting to see the visualization of such a concept.\\u201d\", \"we_found_that_this_diversity_was_helpful_but_not_critical_for_our_method_to_perform_well\": \"compare the P=1 and P={50,100} entries in the tables of Appendix F. 
Other mechanisms for encouraging task diversity would be an interesting direction for future work, and we welcome any suggestions on this front!\\n\\n\\u201cWould there be a trivial generalization of the algorithm to semi-supervised learning?\\u201d\\nFor the scenario in which some labeled data, not necessarily from the same classes as in tasks from meta-test time, is available during meta-training, there are indeed some obvious extensions. One can: i) have some of the tasks be generated only from the labeled data and others from CACTUs, ii) encourage the calculation of partitions to respect the labeled data, and/or iii) use the labeled data as meta-validation to do early stopping and hyperparameter tuning. We leave this for future work.\\n\\nReferences\\n[1] Snell et al. NIPS 2017, https://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning\\n[2] Caron et al. ECCV 2018, https://arxiv.org/abs/1807.05520\"}",
"{\"title\": \"[1/2] Thank you for your review. Can you elaborate on your feedback?\", \"comment\": \"Thank you for your time in reviewing our work! We would like to improve the paper based on your feedback, and would benefit from a few clarifications on your part.\\n\\n\\u201cThe motivation is not clear. The proposed method artificially generates a number of classification tasks. But how to use such classifiers for artificially generated labels in real-world applications is not motivated. It is better to give a representative application, to which the proposed method fits.\\u201d\\nOur evaluation and results are on real-world image classification tasks first proposed by prior work (Omniglot: [1], miniImageNet: [2], CelebA: [3]) and used by virtually all few-shot learning works in the last few years [1, 2, 3, 4, 5, 6, 7, 8, 9]. The test tasks are not artificially generated, but are real few-shot image classification tasks. We give the general use-case of our method in the last sentence of Section 2.1. We have clarified these points in our revision. Do you still find a lack of representative application? If so, do you have suggestions for better evaluation tasks?\\n\\n\\u201cThe detail of the proposed method is not mathematically presented\\u201d, \\u201c \\u2026 more mathematical details can be included instead.\\u201d\\nWe have added the optimization objective of k-means to the paper. Are there any other parts of the method that require more formalism?\\n\\n\\u201cThe readability of this paper is not high as it is redundant or unclear at several points.\\nFor example, Sections 2.1, 2.3 and Sections 2.2, 2.4 can be integrated, respectively, and more mathematical details can be included instead.\\u201d\\nWe have implemented your suggested re-organization to reduce redundancy. Which portions of the text, specifically, remain redundant or unclear?\\n\\nReferences\\n[1] Santoro et al. 
ICML 2016, http://proceedings.mlr.press/v48/santoro16.pdf\\n[2] Ravi & Larochelle ICLR 2017, https://openreview.net/pdf?id=rJY0-Kcll\\n[3] Finn et al. NIPS 2018, https://arxiv.org/abs/1806.02817\\n[4] Vinyals et al. NIPS 2016, https://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning\\n[5] Munkhdalai et al. ICML 2017, https://arxiv.org/abs/1703.00837\\n[6] Finn et al. ICML 2017, https://arxiv.org/abs/1703.03400\\n[7] Snell et al. NIPS 2017, https://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning\\n[8] Oreshkin et al. NIPS 2018, https://arxiv.org/abs/1805.10123\\n[9] Yoon et al. NIPS 2018, https://arxiv.org/abs/1806.03836\"}",
"{\"title\": \"[2/2] Additional responses to the reviewer\\u2019s points.\", \"comment\": \"\\u201cAlthough the paper discusses using unsupervised learning for meta-learning, only k-means is considered in the proposed method.\\u201d\", \"we_do_consider_multiple_types_of_unsupervised_learning\": \"this is described in \\u201cDifferent embedding spaces\\u201d and \\u201cTask construction\\u201d in Section 4. For the embeddings, we consider and evaluate four unsupervised learning methods/objectives covering discriminative clustering, generative modeling, interpolation, and information maximization. For constructing tasks from embeddings, we consider and evaluate random sampling and hyperplane slicing in addition to k-means.\\n\\n\\u201cwhy is the first embedding step required? Clustering can be directly performed on the give dataset D = {x_i}.\\u201d\\nGiven that the x_i in our experiments are images, we believe it is clear from intuition that clustering on x_i would not work well: distance metrics in pixel-space do not correspond well to semantic meaning. We will add this as a comparison in the paper. \\n\\n\\u201cThe proposed method includes several hyper-parameters. But how to set them in practice it not clear.\\u201d\\nThe hyperparameters associated with the embedding learning stage can be tuned on the unlabeled meta-validation split. For clustering, we fix the number of clusters k across all dataset/embedding/task difficulty combinations presented in the main text. We demonstrate that the number of partitions P is unimportant for our method: P=1 and P=50/100 (for miniImagenet and CelebA / Omniglot) both perform well (Section 4.2). There is ample justification of the hyperparameters used for the task construction: choose N, the number of classes in each task, by upper-bounding the number of classes expected to be seen in a downstream task, and choose K to be 1 (Section 4.1). 
As motivated in the first paragraph of Section 4.1, all other hyperparameters were selected based on prior work.\\n\\n>> \\u201cperformance is not theoretically analyzed.\\u201d\\nWe found that, given the generality of the problem statement, it was difficult to make headway on theoretical analysis. We therefore opted to prioritize a solid experimental evaluation of the proposed method. Historically, theoretical analysis has not been a requirement for high-quality contributions in this community. There are numerous examples of high-quality, impactful papers devoid of theoretical analysis or guarantees presented at ICLR in recent years [1, 2, 3, 4, 5].\\n\\nReferences\\n[1] Zoph et al. ICLR 2017, https://openreview.net/forum?id=r1Ue8Hcxg\\n[2] Karras et al. ICLR 2018, https://openreview.net/forum?id=Hk99zCeAb\\n[3] Jaderberg et al. ICLR 2017, https://openreview.net/forum?id=SJ6yPD5xg\\n[4] Lazaridou et al. ICLR 2017, https://openreview.net/forum?id=Hk8N3Sclg\\n[5] Ravi & Larochelle ICLR 2017, https://openreview.net/pdf?id=rJY0-Kcll\"}",
"{\"title\": \"Thank you for the feedback. Can you elaborate on your suggestions?\", \"comment\": \"Thank you for your comments. Our evaluation tests on few-shot Omniglot, miniImageNet, and CelebA classification datasets, which are a real-world few-shot image classification task proposed by [1,2,3] respectively, and evaluated in virtually all few-shot classification papers since 2016: [1,2,3,4,5,6,7,8,9]. We can of course evaluate our method on other problems as well, but the current tasks are real-world image datasets and problems that have been studied extensively in the literature, for which our method achieves excellent results. Are there particular additional datasets that the reviewer would prefer a comparison to? Or anything else we can do to address the concern about the motivation?\\n\\nWe would be happy to revise the problem statement and writing as per the reviewer's suggestions, though we would appreciate more specific pointers about what in particular is difficult to follow. The problem statement is quite simple: we aim to propose an algorithm whereby meta-learning can be used to acquire an efficient few-shot learning procedure without any hand-specified labels during meta-training. This problem is important for two reasons: (1) meta-learning currently relies on large labeled datasets, and in practice, the burden of obtaining such labeled datasets is a major obstacle to widespread use of meta-learning for few-shot classification, and (2) state-of-the-art unsupervised learning methods often neglect downstream use-cases, such as few-shot classification, leaving substantial room for improvement. Our work proposed a way to begin addressing these challenges, and compares extensively to four prior papers [10,11,12,13] and several ablations. Beyond updating the problem statement, are there important comparisons that we missed?\\n\\n[1] Santoro et al. 
ICML 2016, http://proceedings.mlr.press/v48/santoro16.pdf\\n[2] Ravi & Larochelle ICLR 2017, https://openreview.net/pdf?id=rJY0-Kcll\\n[3] Finn et al. NIPS 2018, https://arxiv.org/abs/1806.02817\\n[4] Vinyals et al. NIPS 2016, https://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning\\n[5] Munkhdalai et al. ICML 2017, https://arxiv.org/abs/1703.00837\\n[6] Finn et al. ICML 2017, https://arxiv.org/abs/1703.03400\\n[7] Snell et al. NIPS 2017, https://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning\\n[8] Oreshkin et al. NIPS 2018, https://arxiv.org/abs/1805.10123\\n[9] Yoon et al. NIPS 2018, https://arxiv.org/abs/1806.03836\\n[10] Donahue et al. ICLR 2017, https://arxiv.org/abs/1605.09782\\n[11] Caron et al. ECCV 2018, https://arxiv.org/abs/1807.05520\\n[12] Berthelot et al. arXiv 2018, https://arxiv.org/pdf/1807.07543\\n[13] Chen et al. NIPS 2016, https://arxiv.org/abs/1606.03657\"}",
"{\"title\": \"Interesting approach but still not finished\", \"review\": \"The paper proposes to employ metalearning techniques for unsupervised tasks. The authors construct tasks in an automatic way from unlabeled data and run meta-learning over the constructed tasks.\\n\\nAlthough the paper presents a novel approach and the experiments included in the work show promising results, in my opinion, the paper is still not mature. There are some importants problems:\\n* The motivation of the paper is weak. The authors include the problem statement as well as the definitions used in the paper without knowing what is the goal of the proposed algorithm. A clear example of a real problem where the proposed framework could be applied is necessary to motivate the work.\\n* The paper is difficult to read and follow. The paper is composed by a set of parts without many links. This makes difficult to read the paper to not very experienced readers. A running example could be useful to increase the readability of the work. In my opinion, the paper contains too much material for the length of the conference. In fact, some important information has been moved to the appendices. \\n*Experimental section is specially hard to follow. The authors want to solve too many questions in a short space. Comparisons with other related papers should be included.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Great paper tackling important problem with nice experiments\", \"review\": \"In this paper, the task of performing meta-learning based on the unsupervised dataset is considered. The high-level idea is to generate 'pseudo-labels' via clustering of the given dataset using existing unsupervised learning techniques. Then the meta-learning algorithm is trained to easily discriminate between such labels. This paper seems to be tackling an important problem that has not been addressed yet to my knowledge. While the proposed method/contribution is quite simple, it possesses great potential for future applications and deeper exploration. The empirical results look strong and tried to address important aspects of the algorithm. The writing was clear and easy to follow. I especially liked how the authors tried to exploit possible pitfalls of their experimental design.\", \"minor_comments_and_questions\": [\"Although the problem of interest is non-trivial and important, the proposed algorithm can be seen as just a naive combination of clustering and meta-learning. It would have been great to see some clustering algorithm that was specifically designed for this type of problem. Especially, the proposed CACTUs algorithm relies on sampling without replacement from the clustered dataset in order to enforce \\\"balance\\\" of the labels among the generated task. This might be leading to suboptimal results since the popularity of each cluster (i.e., how much it represents the whole dataset) is not considered.\", \"CACTUs seems to be relying on having random scaling of the k-means algorithm in order to induce diversity on the set of partitions being generated. I am a bit skeptical about the effectiveness of such a method for diversity. 
If this holds, it would be interesting to see the visualization of such a concept.\", \"Although only MAML was considered as the meta-learning algorithm, it would have been nice to consider one or more additional candidates to show that the proposed framework is generalizable. Still, I think the experiment is persuasive enough to expect that the algorithm would work well in practice.\", \"Would there be a trivial generalization of the algorithm to semi-supervised learning?\", \"-------\", \"I am satisfied with the authors' response and the changes they made to the text. I still think the paper brings significant contributions to the area, by showing that even generating the pseudo-tasks via an unsupervised clustering method allows meta-learning to happen.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach but motivation is not clear.\", \"review\": \"This paper proposes to construct multiple classification tasks from unsupervised data.\", \"quality\": \"The detail of the proposed method is not mathematically presented and its performance is not theoretically analyzed.\\nAlthough the proposed method is empirically shown to be superior to other approaches, the motivation is not clearly presented.\\nHence the overall quality of this paper is not high.\", \"clarity\": \"The readability of this paper is not high as it is redundant or unclear at several points.\\nFor example, Sections 2.1, 2.3 and Sections 2.2, 2.4 can be integrated, respectively, and more mathematical details can be included instead.\", \"originality\": \"The proposal of constructing meta-learning based on unsupervised learning seems to be original.\", \"significance\": [\"The motivation is not clear. The proposed method artificially generates a number of classification tasks. But how to use such classifiers for artificially generated labels in real-world applications is not motivated.\", \"It is better to give a representative application, to which the proposed method fits.\", \"There is no theoretical analysis on the proposed method.\", \"For example, why is the first embedding step required? Clustering can be directly performed on the give dataset D = {x_i}.\", \"Although the paper discusses using unsupervised learning for meta-learning, only k-means is considered in the proposed method.\", \"There are a number of types of unsupervised learning, including other clustering algorithms and other tasks such as outlier detection, hence analyzing them is also interesting.\", \"The proposed method includes several hyper-parameters. 
But how to set them in practice is not clear.\"], \"pros\": [\"An interesting approach to meta-learning is presented.\"], \"cons\": [\"Motivation is not clear.\", \"There is no theoretical analysis.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
r1lyTjAqYX | Recurrent Experience Replay in Distributed Reinforcement Learning | [
"Steven Kapturowski",
"Georg Ostrovski",
"John Quan",
"Remi Munos",
"Will Dabney"
] | Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, quadruples the previous state of the art on Atari-57, and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games. | [
"RNN",
"LSTM",
"experience replay",
"distributed training",
"reinforcement learning"
] | https://openreview.net/pdf?id=r1lyTjAqYX | https://openreview.net/forum?id=r1lyTjAqYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkeeQlUelV",
"S1x50y_aRm",
"S1xbJJBj0Q",
"ryx6VXJoCm",
"r1e7BZpFAX",
"BJxpTUOFRm",
"S1gMv8pH07",
"BJebb8iB07",
"ryesUjTWAX",
"SkxrqYaZAm",
"H1eZZKTZAQ",
"SkxyvPTZ0X",
"Ske9lOYp27",
"r1lgURl52X",
"rkep84aYhQ",
"SJenv9e497",
"rylmsYeZ9m",
"Hkx94FeZ9X",
"HJgGyvfxqm",
"BJeijL3J5m",
"SJeCT3qk5Q",
"ryxcW6xRYm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment"
],
"note_created": [
1544736792501,
1543499730107,
1543356121023,
1543332661287,
1543258426662,
1543239364594,
1542997593898,
1542989305169,
1542736723448,
1542736268912,
1542736121383,
1542735703205,
1541408753566,
1541176904255,
1541162068863,
1538685539668,
1538488730928,
1538488625877,
1538430682009,
1538406050552,
1538399429753,
1538292994435
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper765/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper765/AnonReviewer1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper765/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a new distributed DQN algorithm that combines recurrent neural networks with distributed prioritized replay memory. The authors systematically compare three types of initialization strategies for training the recurrent models. The thorough investigation is cited as a valuable contribution by all reviewers, with reviewer 1 noting that the study would be of interest to \\\"anyone using recurrent networks on RL tasks\\\". Empirical results on Atari and DMLab are impressive.\\n\\nThe reviewers noted several weaknesses in their original reviews. These included issues of clarity, a need for more detailed ablation studies, and need to more carefully document the empirical setup. A further question was raised on whether the empirical results could be complemented with theoretical or conceptual insights.\\n\\nThe authors carefully addressed all concerns raised during the reviewing and rebuttal period. They took exceptional care to clarify their writing, document experiment details, and ran a large set of additional experiments as suggested by the reviewers. The AC feels that the review period for the paper was particularly productive and would like to thank the reviewers and authors.\\n\\nThe reviewers and AC agree that the paper makes a significant contribution to the field and should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Valuable insights on training reinforcement learning with recurrent neural networks at scale\"}",
"{\"title\": \"Re: All Reviewers\", \"comment\": \"Absolutely, thanks!\"}",
"{\"title\": \"Re: All Reviewers\", \"comment\": \"Thanks, very interesting! Could you please just add a short description of R2D2+ in Table 1's caption, for the final version?\"}",
"{\"title\": \"All Reviewers\", \"comment\": \"We thank all reviewers once again for the careful reading of the paper and the helpful comments.\\n\\nWe have updated our ablations, now including two life-loss ablations, and provided complete details on the feed-forward ablation including human-normalized scores and sample efficiency data.\\n\\nFinally, we have updated the paper with our own re-run of the IMPALA agent on DMLab with the new action-set and longer training time, for a fairer comparison with R2D2. To explore the potential of our agent further, we also added a version of R2D2 more closely matching the Deep IMPALA architecture (deep ResNet + asymmetric reward clipping). Both our re-run of IMPALA and R2D2 achieve new SOTA scores on DMLab-30. We intend to report all DMLab-30 scores at 10B environment frames, but have restricted to 5B frames for this revision as these runs have not all completed at the time of the revision deadline.\"}",
"{\"title\": \"RE: A thorough investigation...\", \"comment\": \"Thanks for the clarifications. Sorry, I had misunderstood your intent regarding ablations. I do realize that a full ablation would be quite time-consuming and costly, personally I'm fine without them -- though of course if you're willing to add more results I won't object ;)\\n\\nThere's one small thing I think you should be able to do easily, which would be to add \\\"R2D2, Feed-Forward\\\" to Fig. 8.\"}",
"{\"title\": \"Reviewer 3 additional comments\", \"comment\": \"We thank the authors for the provided additional details. After reading their responses and other reviewers comments, I upgrade my rating to 7.\"}",
"{\"title\": \"RE: A thorough investigation...\", \"comment\": \"Thank you very much for the repeated careful reading of the paper, and thank you for kindly improving your score!\\n\\nFollowing your suggestion, we have added full Atari-57 results including the respective median human-normalized score for the feed-forward ablation (Table 1). For the other ablations (discount, life loss signal, value function rescaling vs clipped rewards) we considered a full Atari-57 ablation to be excessively costly in terms of computational resources and were not intending to provide those (beyond the already included 5 games x 3 seeds each). We could run those for the final version of the paper, with a single seed each, if you consider this data to be especially valuable, but would otherwise opt to use the chosen 5 games as representative. \\n\\nYour observation about the lower sample efficiency compared to Ape-X, this is indeed a great remark, and we have added a paragraph in the appendix to address this. Because of the very different batch characteristics of our sequence-based replay (64x80 instead of 512x1), our learner runs at approximately 1/4 of the updates-per-second compared to Ape-X, which results in a smaller \\u2018replay ratio\\u2019 (expected number of times an experienced observation is being replayed): approximately 0.8 (R2D2) vs 1.3 (Ape-X). We believe this mostly explains the initial difference in sample efficiency that you pointed out. \\n\\nAs suggested, we included a footnote mentioning the results we obtained in personal communication with the IMPALA-PopArt paper authors. Regarding the performance difference you are pointing out, we note that we are only comparing to the IMPALA, not the PopArt-IMPALA reported in that publication (solid blue curve in their Fig. 
4), as this seemed to be the most apples-to-apples comparison.\\n\\nWe have just uploaded a new revision with the above changes - a final revision including the life-loss ablation will be uploaded before the deadline.\"}",
"{\"title\": \"RE: A thorough investigation...\", \"comment\": \"Thanks for the reply and significant additional effort in the revision. I'm going to update my score to 8 to reflect this -- I'd actually be willing to increase it further if full experimental results were available already.\\n\\nRegarding experimental results, I very much appreciate the (soon-to-be-)full training curves on all 57 Atari games, but please also add the aggregated human-normalized median score (having it added to Fig. 8 for all ablation experiments would be ideal). By the way, in Fig. 8, having a lower sample efficiency than Rainbow / Reactor is indeed expected since these have much fewer actors (and thus collect data much slower), however the difference with Ape-X is a bit more surprising. Is it because the RNN is slower to train? Or maybe the higher discount factor? (another reason to add ablations to this plot!)\\n\\n\\\"We obtained the IMPALA results data from the authors of the cited paper\\\": in that case please mention it in the paper, since some of these results look worse than the published ones (I'm referring to previous comment \\\"Looking at the current arxiv version of their paper [https://arxiv.org/pdf/1809.04474.pdf], their Fig. 4 shows it goes above 70% in mean capped score, while your Table 1 reports only 61.5%\\\" ==> this may deserve being validated with the authors)\\n\\nI didn't have time to carefully re-read the revised paper, but I noticed a small typo in Fig. 1(a) where t+1 should be t+m.\\n\\nI also really hope you manage to open source (at least part of) your code in the near future.\\n\\nThanks!\"}",
"{\"title\": \"RE: The proposed RL agent...\", \"comment\": \"Thank you for raising these concerns. We have attempted to address them in this revision.\\n\\n\\u201cThe authors do not provide enough details about some \\\"informal\\\" experiments...\\u201d\\n\\nWe have now significantly revised our LSTM training analysis to include a more detailed study that shows representation drift measured by both parameter lag (number of updates since experience was generated) and the q-value discrepancy, and for the same runs the mean episodic return, some of which is contained in the appendix. Additionally, we now have results as we vary burn-in from zero to 20 and up to 40 steps. We think this improves the section quite a bit, but we are still looking at edits to the paper to improve clarity further.\\n\\n\\u201cBeyond this point, the paper is generally hard to follow and reorganizing some sections (e.g., sec. 2.3 should appear after sec. 3 as it contains a lot of technical details)...\\u201d\\n\\nWe have moved one of these sections to the appendix and tried to improve the flow of the paper. Please let us know if this makes for a clearer read.\\n\\n\\u201cHausknecht & Stone (2015) have proposed two training strategies (zero-state and Replaying whole episode trajectories see sec. 3 page 3). The authors should clarify\\u2026\\u201d\\n\\nHausknecht and Stone (2015) argued that the two strategies performed similarly in their experiments. We agree that a more thorough investigation of whole-trajectory-based training would be valuable, with special attention to sample correlation, variance and optimization settings. However, this seems to exceed the scope of the paper.\\n\\n\\u201cThe authors present results (mainly, fig. 2 and fig. 3) suggesting that the proposed R2D2 agent outperform the agents Ape-X and IMPALA, where R2D2 is trained using the aforementioned stored-state with burn-in strategy. 
It is not clear\\u2026\\u201d\\n\\nIn all R2D2 experiments outside of the initial analysis section (where we specifically study these methods) we used the stored-state with 40-step burn-in method. We have attempted to make this more explicit in the revision. Ape-X does not use an RNN, and IMPALA is perhaps most like the \\\"entire episode trajectories\\\" approach in Hausknecht and Stone, but due to using the mostly on-policy actor-critic is hard to compare directly. To avoid any confusion, the reported Ape-X and IMPALA results are not our own reruns of the algorithms, but taken from their respective publications or private communication with the authors.\\n\\n\\u201cThe authors compare the different strategies only in terms of their proposed Q-value discrepancy metric. It could be interesting to consider other metrics in order to evaluate the ability of the methods on common aspects.\\u201d\\n\\nAs mentioned above, we have attempted to significantly improve this by including more information.\"}",
"{\"title\": \"RE: Recurrent NNs in distributed RL...\", \"comment\": \"Thank you for your comments and suggestions.\\n\\n\\u201c... only major comments are that I\\u2019m a bit skeptical about the lack of a more thorough (theoretical) analysis supporting their empirical findings (what gives me food for thought is that LSTM helps that much on even fully observable games such as Ms. Pacman);\\u201d\\n\\nWe do not have theoretical contributions to add in our rebuttal, but would like to offer some additional resources for understanding the empirical findings. In addition to the more thorough reporting of results now included, as well as now including 3 seeds in all R2D2-based experiments, we have uploaded videos of the agent\\u2019s learned policy on a handful of Atari games. What we observe in these videos is that R2D2 has learned to leverage memory in Atari in unexpected ways. That is, Atari *does* directly benefit from long-term memory in several specific cases. For example, in MsPacman the agent learns to precisely time the ghosts\\u2019 vulnerability in order to obtain well timed multi-ghost-meals, yielding much higher scores.\\n\\nPlease note these videos are uploaded anonymously to youtube and marked unlisted, which should prevent us from inferring any geographic information about reviewers who view them.\\n\\nMsPacman\", \"r2d2\": \"https://youtu.be/UUn_vXj89Ps\", \"r2d2_feed_forward\": \"https://youtu.be/UUn_vXj89Ps\\n\\n\\n\\u201cand the usual caveats regarding evaluation\\u2026\\u201d\\n\\nTo this end we have run additional ablations on our architectural choices and are in the process of rerunning IMPALA on the same hardware and training regime as we used for R2D2. We have also attempted to clarify our exact evaluation regime by pointing out the 30-minute episode timeout on Atari, the fact that our results are final scores (not max-over-training as have been reported for e.g. Ape-X), and other evaluation details.\"}",
"{\"title\": \"RE: A thorough investigation...\", \"comment\": \"Thank you for your comments and in particular your concerns around the use of importance weighting. We took your concerns to heart and have (as we discuss below) included it and rerun the experiments.\\n\\n\\u201cThe fact that the same network architecture and hyper-parameters also work pretty well on DMLab is encouraging w.r.t. the generality of the method.\\u201d\\n\\nWe want to thank the reviewer for making note of this aspect. It is something we consider particularly noteworthy considering common problems with robustness in deep RL.\\n\\n\\u201c\\u2026 a couple of important concerns though. The first one is that a few potentially important changes were made to the \\u201ctraditional\\u201d settings typically used on Atari, which makes it difficult to perform a fair comparison to previously published results.\\u201d\\n\\nThis is a very reasonable concern and we have run a more thorough set of ablations on R2D2 which have now been included in the latest revision. These are not 100% completed yet, but are far enough along to give a clear picture. Specifically, we are taking your recommendations and comparing R2D2 with (1) Feed-forward only (already included, but now also done over all 57 Atari games), (2) Reward clipping but no value function rescaling, (3) Smaller discount (gamma = 0.99), and (4) end-episode on life-loss enabled. Only the last one is not included in the current revision, but will be included before revisions close.\\n\\n\\u201cThe second important issue I see is that the authors do not seem to plan to share their code to reproduce their results. Given how time consuming and costly it is to run such experiments, and all potentially tricky implementation details (especially when dealing with recurrent networks), making this code available would be tremendously helpful to the research community (particularly since this paper claims a new SOTA on Atari)... 
I strongly hope the authors will consider it.\\u201d\\n\\nDistributed training, in particular, is an area where publicly available open source code goes a long way towards reproducibility and progress in the field. Although we are not able to immediately release the code, we believe that we will be able to make the source available in the future.\\n\\n\\n\\u201c1. \\u201cWe also found no benefit from using the importance weighting that has been typically applied with prioritized replay\\u201d: this is potentially surprising since this could be \\u201cwrong\\u201d, mathematically speaking. Do you think this is because of the lack of stochasticity in the environments? (I know Atari is deterministic, but I am not sure about DMLab)\\u201d\\n\\nThank you for pointing this out! This does make sense and we agree that in principle the lack of importance weighting when using prioritized replay is not well supported. We have now included it in the algorithm and rerun almost all of our experiments (ablations are still in progress). We have not found it necessary to retune hyper-parameters to support this change, and in fact found that re-introducing importance weighting did stabilise training on some of the DMLab levels and slightly improved overall performance.\\n\\n\\u201c2. Fig. 3 (left) shows R2D2 struggling on some DMLab tasks. Do you have any idea why?\\u201d\\n\\nWe do not use the asymmetric reward clipping of IMPALA and believe that this clipping is most helpful in some of the language tasks. We are in the process of running a very small test in which we add the asymmetric clipping to verify, but do not plan to add it to the algorithm itself in the interest of generality.\\n\\nAdditionally, one of the larger benefits of IMPALA PBT is the use of population-based training, and we suspect this is another reason for IMPALA occasionally out-performing R2D2.\\n\\n\\u201c3. 
In Table 1, where do the IMPALA (PBT) numbers on DMLab come from?\\u201d\\n\\nWe obtained the IMPALA results data from the authors of the cited paper: \\u201cMulti-task Deep Reinforcement Learning with PopArt\\u201d. However, after further discussion we believe the best approach would be to rerun IMPALA on our same hardware and training regime. Until this completes we will use the provided data, but again, we hope to replace this before the final revision.\\n\\n\\u201cAnd finally a few minor comments / suggestions:\\u201d\\n\\nThank you for these comments and suggestions; we have made the corresponding edits to clarify things and fix mistakes. We should also mention that while doing this we fixed a bug that limited Atari training episode times to 14 minutes (50K frames) instead of 30 minutes (108K frames), which slightly improves some of our Atari results.\"}",
"{\"title\": \"All Reviewers\", \"comment\": \"We first want to thank all the reviewers and commenters for their close reading and constructive feedback. We have revised or expanded most of our experiments and attempted to clarify the text in various locations. Additionally, as requested we are including (in the appendix) a figure comparing the sample efficiency of R2D2 with Rainbow, Ape-X, and Reactor on Atari. As detailed in our individual responses we have extended the ablations and rerun our experiments to address some concerns. This resulted in slightly improved performance, which is now averaged over three seeds.\\n\\nWe intend to add further ablation results (on life-loss signal) and our own rerun of IMPALA with matched action-set before the revision period ends.\"}",
"{\"title\": \"Rrecurrent NNs in distributted RL settings as a clear improvement of the feed-forward NN variations in partially observed environments\", \"review\": \"This paper investigates the use of recurrent NNs in distributted RL settings as a clear improvement of the feed-forward NN variations in partially observed environments. The authors present \\\"R2DR\\\" algorithm as a A+B approach from previous works (actually, R2D2 is an Ape-X-like agent using LTSM), as well as an empirical study of a number of ways for training RNN from replay in terms of the effects of parameter lag (and potential alleviating actions) and sample-afficiency. The results presented show impressive performance in Atari-57 and DMLab-30 benchmarks.\\n\\nIn summary, this is a very nice paper in which the authors attack a challenging task and empirically confirm that RNN agents generalise far better when scaling up through parallelisation and distributed training allows them to benefit from huge experience. The results obtained in ALE and DMLab improves significantly upon the SOTA works, showing that the trend-line in those benchmarks seem to have been broken. \\n\\nFurthermore, the paper presents their approach/analyses in a well-structured manner and sufficient clarity to retrace the essential contribution. The background and results are well-contextualised with relevant related work. \\n\\nMy only major comments are that I\\u2019m a bit skeptical about the lack of a more thorough (theoretical) analysis supporting their empirical findings (what gives me food for thought is that LSTM helps that much on even fully observable games such as Ms. Pacman); and the usual caveats regarding evaluation: evaluation conditions aren't well standardized so the different systems (Ape-X, IMPALA, Reactor, Rainbow, AC3 Gorilla, C51, etc.) aren't all comparable. 
These sorts of papers would benefit from a more formal/comprehensive evaluation by means of an explicit enumeration of all the dimensions relevant for their analysis: the data, the knowledge, the software, the hardware, manipulation, computation and, of course, performance, etc. However, only some of them are (partially) provided.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The proposed RL agent leads to interesting results but the technical details need to be clarified\", \"review\": \"Summary:\\nLeveraging on recent advances on distributed training of RL agents, the paper proposes the analysis of RNN-based RL agents with experience replay (i.e., integrating the time dependencies through RNN). Precisely, the authors empirically compare a state-of-the-art training strategy (called zero start state) with three proposed training strategies (namely; zero-state with burn-in, stored-state and stored-state with burn-in). By comparing these different strategies through a proposed metric (Q-value discrepancy), the authors conclude on the effectiveness of the stored-state with burn-in strategy which they consider for the training of their proposed Recurrent Replay Distributed DQN (R2D2) agent. \\n\\nThe proposed analysis is well-motivated and has lead to significant results w.r.t. the state-of-the-art performances of RL agents.\", \"major_concerns\": [\"My major concerns are three-fold:\", \"The authors do not provide enough details about some \\\"informal\\\" experiments which are sometimes important to convince the reader about the relevance of the suggested insights (e.g., line 3 page 5). Beyond this point, the paper is generally hard to follow and reorganizing some sections (e.g., sec. 2.3 should appear after sec. 3 as it contains a lot of technical details) would certainly make the reading of the paper easier.\", \"Hausknecht & Stone (2015) have proposed two training strategies (zero-state and Replaying whole episode trajectories see sec. 3 page 3). The authors should clarify why they did not considered the other states in their study.\", \"The authors present results (mainly, fig. 2 and fig. 3) suggesting that the proposed R2D2 agent outperform the agents Ape-X and IMPALA, where R2D2 is trained using the aforementioned stored-state with burn-in strategy. 
It is not clear which training strategies were adopted for the compared state-of-the-art agents (Ape-X and IMPALA). The authors should clarify this point more precisely.\"], \"minor_concerns\": [\"The authors compare the different strategies only in terms of their proposed Q-value discrepancy metric. It could be interesting to consider other metrics in order to evaluate the ability of the methods on common aspects.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A thorough investigation of using recurrent networks with experience replay, with impressive results on Atari\", \"review\": \"In this submission, the authors investigate using recurrent networks in distributed DQN with prioritized experience replay on the Atari and DMLab benchmarks. They experiment with several strategies to initialize the recurrent state when processing a sub-sequence sampled from the replay buffer: the best one consists in re-using the initial state computed when the sequence was originally played (even if it may now be outdated) but not doing any network update during the first k steps of the sequence (\\u201cburn-in\\u201d period). Using this scheme with LSTM units on top of traditional convolutional layers, along with a discount factor gamma = 0.997, leads to a significant improvement on Atari over the previous state-of-the-art, and competitive performance on DMLab.\\n\\nThe proposed technique (dubbed R2D2) is not particularly original (it is essentially \\u201cjust\\u201d using RNNs in Ape-X), but experiments are thorough, investigating several important aspects related to recurrence and memory to validate the approach. These findings are definitely quite relevant to anyone using recurrent networks on RL tasks. The results on Atari are particularly impressive and should be of high interest to researchers working on this benchmark. The fact that the same network architecture and hyper-parameters also work pretty well on DMLab is encouraging w.r.t. the generality of the method.\\n\\nI do have a couple of important concerns though. The first one is that a few potentially important changes were made to the \\u201ctraditional\\u201d settings typically used on Atari, which makes it difficult to perform a fair comparison to previously published results. 
Using gamma = 0.997 could by itself provide a significant boost, as hinted by results from \\u201cMeta-Gradient Reinforcement Learning\\u201d (where increasing gamma improved results significantly compared to the usual 0.99). Other potentially impactful changes are the absence of reward clipping (replaced with a rescaling scheme) and episodes not ending with life loss: I am not sure whether these make the task easier or harder, but they certainly change it to some extent (the \\u201cdespite this\\u201d above 5.1 suggests it would be harder, but this is not shown empirically). Fortunately, this concern is partially alleviated by Section 6 that shows feedforward networks do not perform as well as recurrent ones, but this is only verified on 5 games: a full benchmark comparison would have been more reassuring (as well as running R2D2 with more \\u201cstandard\\u201d Atari settings, even if it would mean using different hyper-parameters on DMLab).\\n\\nThe second important issue I see is that the authors do not seem to plan to share their code to reproduce their results. Given how time consuming and costly it is to run such experiments, and all potentially tricky implementation details (especially when dealing with recurrent networks), making this code available would be tremendously helpful to the research community (particularly since this paper claims a new SOTA on Atari). I am not giving too much weight to this issue in my review score since (unfortunately) the ICLR reviewer guidelines do not explicitly mention code sharing as a criterion, but I strongly hope the authors will consider it.\\n\\nBesides the above, I have a few additional small questions:\\n1. \\u201cWe also found no benefit from using the importance weighting that has been typically applied with prioritized replay\\u201d: this is potentially surprising since this could be \\u201cwrong\\u201d, mathematically speaking. Do you think this is because of the lack of stochasticity in the environments? 
(I know Atari is deterministic, but I am not sure about DMLab)\\n2. Fig. 3 (left) shows R2D2 struggling on some DMLab tasks. Do you have any idea why? The caption of Table 3 in the Appendix suggests the absence of specific reward clipping may be an issue for some tasks, but have you tried adding it back? I also wonder if maybe training a unique network per task may make DMLab harder, since IMPALA has shown some transfer learning occurring between DMLab tasks? (although the comparison might be to the \\u201cdeep experts\\u201d version of IMPALA \\u2014 this is not clear in Fig. 3 \\u2014 in which case this last question would be irrelevant)\\n3. In Table 1, where do the IMPALA (PBT) numbers on DMLab come from? Looking at the current arxiv version of their paper, their Fig. 4 shows it goes above 70% in mean capped score, while your Table 1 reports only 61.5%. I also can\\u2019t find a median score being reported on DMLab in their paper, did you try to compute it from their Fig. 9? And why don\\u2019t you report their results on Atari?\\n4. Table 4\\u2019s caption mentions \\u201c30 no-op starts\\u201d but you actually used the standard \\u201crandom starts\\u201d setting, right? (not a fixed number of 30 no-ops)\\n\\nAnd finally a few minor comments / suggestions:\\n- In the equation at bottom of p. 2, it seems like theta and theta- (the target network) have been accidentally swapped (at least compared to the traditional double DQN formula)\\n- At top of p. 3 I guess \\\\bar{delta}_i is the mean of the delta_i\\u2019s, but then the index i should be removed\\n- In Fig. 1 (left) please clarify which training phase these stats are computed on (whole training? beginning / middle / end?)\\n- p. 
4, \\u201cthe true stored recurrent states at each step\\u201d: \\u201ctrue\\u201d is a bit misleading here as it can be interpreted as \\u201cthe states one would obtain by re-processing the whole episode from scratch with the current network\\u201d => I would suggest to remove it, or to change it (e.g. \\u201cpreviously\\u201d). By the way, I think it would have been interesting to also compare to these states recomputed \\u201cfrom scratch\\u201d, since they are the actual ground truth.\\n- I think you should mention in Table 1\\u2019s caption that the PBT IMPALA is a single network trained to solve all tasks\\n- Typo at bottom of p. 7, \\u201cIndeed, Table 1 that even...\\u201d\", \"update\": \"score updated to 8 (from 7) following discussion below\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Thanks for the clarification on frames + no-op/human starts! I look forward to the Rainbow comparison, as well. P.S. I assume you meant median=434.1, mean=1659.6 :)\", \"title\": \"Thanks for clarifying\"}",
"{\"title\": \"Re: Parameter notation\", \"comment\": \"Thank you for your interest in our paper, and thank you for catching this bug, we will fix this in a future revision.\"}",
"{\"title\": \"Our contributions\", \"comment\": \"Thank you for your interest in our paper!\\n\\nIn fact, our work started out as an investigation of a novel algorithmic technique. \\nWe developed an Ape-X-like agent enhanced with an LSTM as a platform for this research. \\n\\nThe empirical improvement over previous agents was sufficiently surprising that we considered it worth further study, isolated from confounding factors from unrelated techniques. Towards the aim of understanding R2D2's empirical performance, we contributed a study of the effects of parameter lag on representation drift and recurrent state staleness, potential mitigation strategies, and the memory-dependence in a learned recurrent policy.\\n\\nThis paper focuses on presenting and analyzing effective ways of training a recurrent memory from replay, resulting in a more universal agent achieving SOTA results across multiple environment suites with a single set of parameters. While more analysis remains to be done to fully understand the role of memory in RL, we believe the above contributions provide valuable insights to the RL community.\"}",
"{\"comment\": \"I think you have a mixup in the target/online parameter notation at the bottom of page 2. The argmax should be computed using the online parameters and the target value computed using the target parameters.\", \"title\": \"Parameter notation mixup\"}",
"{\"title\": \"Re: Questions on evaluation conditions and sample efficiency\", \"comment\": \"1. Thank you for catching this, we inadvertently included the human-start values for Ape-X instead of the no-op starts evaluation results. All mean/median values throughout the paper are referring to 30 no-op evaluation conditions. The correct values for Ape-X should be mean=434.1%, median=1695.6%. The other values in the table correctly refer to 30 no-op results for the other agents.\\n\\n2. On Atari, the R2D2 agent consumes approximately 5.7B environment frames per day (with 256 actors, each running at approximately 260 fps), i.e. approximately 17B for the reported 3-day version. On DMLab, the frame rate is around half of the Atari frame rate, ~2.65B env frames per day, i.e. ~10.6B frames for the entire 4-day run whose results we report. Regarding a comparison with Rainbow in terms of environment frames, we agree this is informative and will include such a comparison when revisions open up.\"}",
"{\"comment\": \"The paper proposes to replace the feed-forward NN with a recurrent version in the distributed RL setting, solving a few technical issues (such as how to properly initialize the RNN), and shows improved performance in the Atari-57 and DMLab-30 domains.\\n\\nWhile the experimental results are impressive, the contribution of the paper is a rather straightforward application of the RNN. I'd consider it only a minor improvement over the state-of-the-art and would like to see either\\na) a more thorough analysis of why it improves performance in almost-observable MDPs, or\\nb) a more fundamental additional contribution\\nto consider it for publication.\", \"title\": \"Minor contribution\"}",
"{\"comment\": \"Congratulations on these very impressive results! I have a few comments and questions on evaluation conditions and sample efficiency:\\n\\n1. Regarding no-op vs. human starts: Table 1 seems to use median scores from Ape-X in the human start condition (358.1%, vs. the 358% listed in Table 1 of the Ape-X paper). But only no-op scores are listed in the Appendix of this paper, and it also seems plausible that you are referring to the 20 hr no-op score of Ape-X. Could you clarify what evaluation condition you're using in this paper, and which condition you are comparing against in Ape-X?\\n\\n2.Could you clarify how many frames are used per game by R2D2? A naive conversion from Ape-X to R2D2, using Table 1 from the Ape-X paper, accounting for the reduced wall time (3 vs 5 days) and number of actors (360 vs. 256) seems to suggest roughly 10 billion frames per game, but I'm not sure if I'm missing something that would affect these calculations. Also, it would be interesting to know just how severe the remaining sample efficiency gap mentioned in the conclusion is - median scores for Rainbow vs. R2D2 with comparable frames might be illustrative, for example.\", \"title\": \"Questions on evaluation conditions and sample efficiency\"}"
]
} |
|
rJlk6iRqKX | Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach | [
"Minhao Cheng",
"Thong Le",
"Pin-Yu Chen",
"Huan Zhang",
"JinFeng Yi",
"Cho-Jui Hsieh"
] | We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions. This is a very challenging problem since the direct extension of state-of-the-art white-box attacks (e.g., C&W or PGD) to the hard-label black-box setting will require minimizing a non-continuous step function, which is combinatorial and cannot be solved by a gradient-based optimizer. The only two current approaches are based on random walk on the boundary (Brendel et al., 2017) and random trials to evaluate the loss function (Ilyas et al., 2018), which require lots of queries and lack convergence guarantees.
We propose a novel way to formulate the hard-label black-box attack as a real-valued optimization problem which is usually continuous and can be solved by any zeroth order optimization algorithm. For example, using the Randomized Gradient-Free method (Nesterov & Spokoiny, 2017), we are able to bound the number of iterations needed for our algorithm to achieve stationary points under mild assumptions. We demonstrate that our proposed method outperforms the previous stochastic approaches to attacking convolutional neural networks on MNIST, CIFAR, and ImageNet datasets. More interestingly, we show that the proposed algorithm can also be used to attack other discrete and non-continuous machine learning models, such as Gradient Boosting Decision Trees (GBDT). | [
"Adversarial example",
"Hard-label",
"Black-box attack",
"Query-efficient"
] | https://openreview.net/pdf?id=rJlk6iRqKX | https://openreview.net/forum?id=rJlk6iRqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xZLc5HgV",
"rkltT1X9pX",
"BkebK1X967",
"Skx3G1mqpQ",
"HJgnVJuGaQ",
"Hyxsb3Ha2m",
"SJefbO63hQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545083464674,
1542234049105,
1542233977081,
1542233876201,
1541730100337,
1541393410811,
1541359610270
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper763/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper763/Authors"
],
[
"ICLR.cc/2019/Conference/Paper763/Authors"
],
[
"ICLR.cc/2019/Conference/Paper763/Authors"
],
[
"ICLR.cc/2019/Conference/Paper763/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper763/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper763/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers liked the clarity of the material and agreed the experimental study is convincing. Accept.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, accept\"}",
"{\"title\": \"Thanks for the review and we have answered your questions as follows\", \"comment\": \"1. Without additional assumptions, we couldn\\u2019t prove g(theta) is continuous for general deep neural networks. It\\u2019s true that the g(theta) may not be continuous; for example, we think it might be possible to construct some counter-examples using ReLU activation. However, although the assumption may not hold for DNN or GBDT globally, our algorithm still performs well in practice. Moreover, we have illustrated that the decision boundaries of DNN and GBDT are mostly smooth (Fig2,3) in some examples. What we can assure is that if $g(\\\\cdot)$ has Lipschitz continuous gradient, our algorithm has such a theoretical guarantee. Moreover, based on the same analysis in section 7 of [21], the condition can be relaxed to Lipschitz continuous. This is indeed a sufficient but not necessary condition.\\n\\n2. We use random directions instead of any format of attack to generate adversarial examples. More specifically, we generate i.i.d. random directions \\\\theta_1, \\u2026 \\\\theta_n from Gaussian, and for each of them check whether it\\u2019s successful or not (successful if $f(x_i+\\\\theta)\\\\neq y_i$). We have added more details in the revised paper.\\n\\n3. We have provided the tree models description in 4.1.3.\\n\\nWe don\\u2019t really know any tree-based model that can achieve similar performance with CNN on ImageNet, but GBDT is still useful for many real-world data science applications (e.g., it\\u2019s the most common model for click-through rate predictions). We would like to stress that it\\u2019s not our focus to discuss whether GBDT is useful or not. The aim of attacking GBDT is to prove that our algorithm can also be used to attack other discrete and non-continuous machine learning models, which couldn\\u2019t be done by current gradient-based attack methods. \\n\\n4. 
About the approximation of the L-inf norm: yes, we could directly apply opt-attack to the L-inf norm without any modification. However, we find it harder to optimize in practice because of the additional max term. With the approximation, the function g is smoother than before, which leads to faster convergence.\"}",
"{\"title\": \"Thanks for the positive review and we have some clarifications\", \"comment\": \"Thanks for the positive reviews. We agree that white-box setting could be a better way to evaluate the model\\u2019s robustness. However, if an attacker wants to attack a system in the real world, it\\u2019s usually in the black-box setting. In practice, commercial systems like Google Cloud vision only output the top-1 or top-k predictions to users, which is the same with our hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions. We agree that 20k queries are still too much and how to reduce the number of queries is still an open and challenging problem.\"}",
"{\"title\": \"Thanks for the great suggestions and we have revised our paper as follows:\", \"comment\": \"Thanks for the positive reviews and valuable suggestions.\\n\\n1. We have modified equation (4-5) to min_{\\\\lambda>0} {\\\\lambda} s.t. f(x_0+\\\\lambda \\\\theta/||\\\\theta||) != y_0 as you suggested.\\n\\n2. Section 3.1: Yes, you are right. We have followed your suggestion by changing \\u201cfine-grained\\u201d to \\u201ccoarse-grained\\u201d in the revision. \\n\\n3. Algorithm 2: \\nA. We have included all implementation details above section 4 following your suggestion in revision.\\nB. We have added a new table to show how the performance varies with the number of sampled directions per step u_t in Appendix 6.2.\\n\\n4. The performance in CIFAR10: During the experiment, we found that CIFAR is more sensitive to the initial direction in our method. Although we find a relatively small distortion direction at first, the method sometimes converges to a worse point than Boundary-attack. It could be solved by selecting several directions as initialization and do Algorithm 2 several times. \\n\\n5. The big O notation in proof: We have followed your suggestion to replace $\\\\epsilon\\\\simO()$ to $\\\\epsilon=O()$ and delete the big O notation with \\\\beta.\"}",
"{\"title\": \"Nice idea with solid experiments\", \"review\": \"In this paper the authors propose optimizing for adversarial examples against black box models by considering minimizing the distance to the decision boundary. They show that because this gives real valued feedback, the optimizer is able to find closer adversarial examples with fewer queries. This would be heavily dependent on the model structure (with more complex decision boundaries likely being harder to optimize) but they show empirically in 4 models that this method works well.\\n\\nI am not convinced that the black box model setting is the most relevant (and 20k queries is still a fair bit), but this is important research nonetheless. I generally found the writing to be clear and the idea to be elegant; I think readers will value this paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper\", \"review\": \"This paper proposed a reformulation of the objective function to solve the hard-label black-box attack problem. The idea is interesting and the proposed method seems to be capable of finding adversarial examples with smaller distortions and fewer queries compared with other hard-label attack algorithms.\\n\\nThis paper is well-written and clear.\\n\\n==============================================================================================\\nQuestions\\n\\nA. Can it be proved that g(theta) is continuous? Also, the theoretical analysis assumes Lipschitz-smoothness and thus obtains the complexity in the number of queries. Does this assumption truly hold for g(theta) when f is a neural network classifier? If so, how to obtain the Lipschitz constant of g that is used in the analysis in sec 6.3? \\n\\nB. What is the random distortion in Table 1? What initialization technique is used for the query direction in the experiments? \\n\\nC. The GBDT result on the MNIST dataset is interesting. The authors should provide a description of the tree models in 4.1.3. However, on a larger dataset, say ImageNet, is the performance of the tree models truly comparable to that of neural networks? If the test accuracy is low, then it seems less meaningful to compare the adversarial distortion with that of ImageNet neural network classifiers. Please explain. \\n\\nD. For sec 3.2, it is not clear why the approximation is needed. The gradient of g with respect to theta is computed using equation (7) and theta is already given (from sampling); thus the Linf norm of theta is a constant. Why do we need the approximation? Given that, will there be any problems in the L2 norm case?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"well-written paper with good empirical results\", \"review\": \"This paper addresses black-box classifier attacks in the \\u201chard-label\\u201d setting, meaning that the only information the attacker has access to is single top-1 label predictions. Relative to even the standard black-box setting where the attacker has access to the per-class logits or probabilities, this setting is difficult as it makes the optimization landscape non-smooth. The proposed approach reformulates the optimization problem such that the outer-loop optimizes the direction using approximate gradients, and the inner-loop estimates the distance to the nearest attack in a given direction. The results show that the proposed approach successfully finds both untargeted and targeted adversarial examples for classifiers of various image datasets (including ImageNet), usually with substantially better query-efficiency and better final results (lower distance and/or higher success rate) than competing methods.\\n\\n=====================================\", \"pros\": \"Very well-written and readable paper with good background and context for those (like me) who don\\u2019t closely follow the literature on adversarial attacks. Figs. 1-3 are nice visual aids for understanding the problem and optimization landscape.\\n\\nNovel formulation and approach that appears to be well-motivated from the literature on randomized gradient-free search methods. Novel theoretical analysis in Appendix that generalizes prior work to approximations (although, see notes below).\\n\\nGood empirical results showing that the method is capable of query-efficiently finding attacks of classifiers on real-world datasets including ImageNet. Also shows that the model needn\\u2019t be differentiable to be subject to such attacks by demonstrating the approach on a decision-tree based classifier. 
Appears to compare to and usually outperform appropriate baselines from prior work (though I\\u2019m not very familiar with the literature here).\\n\\n=====================================\\n\\nCons/questions/suggestions/nitpicks:\", \"eq_4_5\": \"Though I understood the intention, I think the equations are incorrect as written: argmin_{\\\\lambda} { F(\\\\lambda) } of a binary-valued function F would produce the set of all \\\\lambdas that make F=0, rather than the smallest \\\\lambda that makes F=1. I think it should be something like:\\n\\nmin_{\\\\lambda>0} {\\\\lambda}\\ns.t. f(x_0+\\\\lambda \\\\theta/||\\\\theta||) != y_0\\n\\nSec 3.1: why call the first search \\u201cfine-grained\\u201d? Isn\\u2019t the binary search more fine-grained? I\\u2019d suggest changing it to \\u201ccoarse-grained\\u201d unless I\\u2019m misunderstanding something.\", \"algorithm_2\": \"it could be interesting to show how performance varies with number of sampled directions per step u_t.\", \"sec\": \"4.1.2: why might your algorithm perform worse than boundary-attack on targeted attacks for CIFAR classifiers? Would like to have seen at least a hypothesis on this.\\n\\nSec 6.3 Theorem 1: I think the theorem statement is a bit imprecise. There is an abuse of big-O notation here -- O(f(n)) is a set, not a quantity, so statements such as \\\\epsilon ~ O(...) and \\\\beta <= O(...) and \\u201cat most O(...)\\u201d are not well-defined (though common in informal settings) and the latter two are redundant given the meaning of O as an upper bound. 
The original theorem from [Nesterov & Spokoiny 2017] that this Theorem 1 would generalize doesn\\u2019t rely on big-O notation -- I think following the same conventions here might improve the theorem and proof.\\n\\n=====================================\\n\\nOverall, this is a good paper with nice exposition, addressing a challenging but practically useful problem setting and proposing a novel and well-motivated approach with strong empirical results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkeJ6iR9Km | Variational Sparse Coding | [
"Francesco Tonolini",
"Bjorn Sand Jensen",
"Roderick Murray-Smith"
] | Variational auto-encoders (VAEs) offer a tractable approach when performing approximate inference in otherwise intractable generative models. However, standard VAEs often produce latent codes that are dispersed and lack interpretability, thus making the resulting representations unsuitable for auxiliary tasks (e.g. classification) and human interpretation. We address these issues by merging ideas from variational auto-encoders and sparse coding, and propose to explicitly model sparsity in the latent space of a VAE with a Spike and Slab prior distribution. We derive the evidence lower bound using a discrete mixture recognition function, thereby making approximate posterior inference as computationally efficient as in the standard VAE case. With the new approach, we are able to infer truly sparse representations with generally intractable non-linear probabilistic models. We show that these sparse representations are advantageous over standard VAE representations on two benchmark classification tasks (MNIST and Fashion-MNIST) by demonstrating improved classification accuracy and significantly increased robustness to the number of latent dimensions. Furthermore, we demonstrate qualitatively that the sparse elements capture subjectively understandable sources of variation. | [
"Variational Auto-Encoders",
"Sparse Coding",
"Variational Inference"
] | https://openreview.net/pdf?id=SkeJ6iR9Km | https://openreview.net/forum?id=SkeJ6iR9Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgykebvBN",
"Bklk8fNZfN",
"HJl5uwVDeV",
"rJesxiCnam",
"Skx0DAtK6m",
"HJeE9pYK6Q",
"Syxnw6tFTQ",
"rkxTh3YFpX",
"rkeBK2FKaQ",
"rkl2hXp627",
"SJlf7Bfanm",
"Syedv6PB3X",
"B1lX0JeT57",
"HylDUcPiq7",
"S1lEYOzrq7"
],
"note_type": [
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"comment"
],
"note_created": [
1550417878888,
1546891847159,
1545189234360,
1542413042619,
1542196838239,
1542196619983,
1542196580000,
1542196405494,
1542196349088,
1541424052105,
1541379354508,
1540877663530,
1539272650694,
1539172942792,
1538758779637
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"~Alfredo_De_la_Fuente1"
],
[
"ICLR.cc/2019/Conference/Paper762/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"ICLR.cc/2019/Conference/Paper762/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper762/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper762/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper762/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"A clear and complete reproducibility report\", \"comment\": \"We thank the authors of the reproducibility study for their efforts to reproduce our results and test our findings. The report is thorough, clear and complete. We are glad to see that our main conclusions were confirmed by this study and appreciate the sensible suggestions, which we are very much taking into account to extend and improve our work.\", \"regarding_some_of_the_minor_missing_information_that_were_pointed_out_in_the_report\": [\"The batch size used in the experiment reported in the paper was 50 for the MNIST and Fashion-MNIST experiments and 100 for the CelebA experiments. We are happy to see that this choice was not critical, as the reproducibility study was carried out using batches of 32 samples instead.\", \"The multiplicative weights were initialised as normal random variables with variance 1, while additive weights for each layer were initialised as zeros.\", \"We thank the authors of the study for elaborating on the resource cost of the technique, which we did not investigate in our work. We carried out our experiments with a Titan X GPU and the running times seem to correspond to our experience.\", \"Regarding the suggestions made, these are all relevant and we agree with them to a large extent. More specifically:\", \"As suggested, in our future extension of the work we will ensure that every considered model stably reaches a local minimum.\", \"If we again demonstrate our methods solely with images, we will improve the decoding architectures to more specifically adapt to image statistics, using convolutional layers and conditional pixel structures, such as those used in PixelVAEs. Alternatively, we will broaden the scope of our work to make it more general, testing also with other data, such as sound, speech, pose estimation or text.\", \"We will definitely test the method with a feature disentanglement benchmark dataset.
We believe the main advantage of VSC to be the unsupervised discovery of features, therefore a quantitative evaluation of this capability is indeed very relevant.\"]}",
"{\"comment\": \"As part of the ICLR 2019 Reproducibility Challenge, we worked to reproduce the results reported in this paper (Variational Sparse Coding). Given no available code for the project, we implemented the variational autoencoder architecture described in the paper from scratch. We validate the experimental results and further propose improvements and future research directions that may contribute to the machine learning community. A link to the full report, as well as the repository with code, can be found at the end of this message.\\n\\nThe authors' main motivation for this work lies in developing a model able to learn sparse representations that are informative (for further classification tasks) and interpretable (by exploring the latent sparse space). In this line of thought, they propose an improvement over the Variational AutoEncoder model, explicitly modeling sparsity in the latent space with a Spike and Slab prior distribution and drawing ideas from sparse coding theory.\\n\\nOverall, the paper describes the Variational Sparse Coding (VSC) model implementation in enough detail; only some additional details on the optimization hyperparameters and initialization mechanisms would have been ideal. However, given enough training epochs, the VSC model we implemented was able to converge and produce the desired results in the different tasks described in the paper. \\n\\nFurther development on testing the model and comparing it against other sparse models on well-known benchmarks is critical. In order to assess how interpretable the learned latent features are, we could draw ideas from disentangled representations, measuring the effect of sparsity on a disentanglement metric against benchmark models such as \\u03b2-VAE or Factor-VAE. Without a proper benchmark, we are not able to understand how interpretable the learned representations are. Although for small latent dimensions visual inspection is enough, there must be a metric for comparison.
\\n\\nFinally, we suggest using convolutional architectures to obtain less blurry images and richer sparse representations. In addition, by applying the model to the Disentanglement testing Sprites dataset, we may observe and measure the interpretability of the learned latent sparse representations. \\n\\nWe conclude that the model hypothesis was validated through corroborating results in different experimental setups with our model implementation. Therefore, the research paper is reproducible.\", \"full_report\": \"https://drive.google.com/open?id=1sEmiD2_dOwTJVydIsiZ1bsxdtbaBQT_Z\", \"code\": \"https://github.com/Alfo5123/Variational-Sparse-Coding\", \"title\": \"Reproducibility study of Variational Sparse Coding paper\"}",
"{\"metareview\": \"The paper develops and investigates the use of a spike-and-slab prior and approximate posterior for a VAE. It uses a continuous relaxation for the discrete binary component in the reconstruction term of the ELBO, and an analytic expression for the KL term between the spike-and-slab prior and approximate posterior. Experiments on MNIST, Fashion-MNIST and CelebA convincingly show that the approach works to learn sparse representations with improved interpretability that also yield more robust classification.\\n\\nAll reviewers agreed that this approach to sparsity in VAEs is well motivated and sound, that the paper is well written and clear, and the experiments interesting.\\nOne reviewer noted that the accuracy on MNIST remains really poor, so the approach does not cure VAEs' tendency to yield subpar representations for classification (although this was not the goal of this research).\\n\\nThe reviewers and the AC however all judged that it currently constitutes too limited a contribution because a) the approach is a straightforward application of vanilla VAEs with a different prior/posterior, and is thus rather incremental, and b) the scope of the paper is rather limited, in particular as it does not sufficiently discuss and does not empirically compare with other (VAE-related) approaches from the literature that were developed for sparse latent representations.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Clean straight-forward derivation of VAE with spike-and-slab prior. Well-written but incremental and too limited in scope.\"}",
"{\"title\": \"Revision\", \"comment\": [\"We have now uploaded a revised version of our paper. We have made the following changes:\", \"*Additional Related Work Section* We have added a subsection (2.3) covering related work on discrete VAEs and sparsity in VAEs.\", \"*KL Divergence Derivation* We have included a derivation of the analytic KL divergence term we present and use in our ELBO in section 3.1.\", \"*More detailed discussion on interpretation* We have extended the discussion in section 4.3 to make clearer the intuition behind the expected improved interpretation of VSC.\", \"*Supplementary on Sampling* We have added a supplementary section (E5) showing the difference in ancestral sampling between VAEs and VSC and the ability of VSC to perform conditional sampling.\", \"We will apply some further modifications in response to other remarks, but would invite the reviewers to comment on these main updates and the current version of the paper.\"]}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"1 \\u2013 *Sampling from the Prior* Sampling straight from the prior is not expected to produce as good quality samples as a standard VAE, since data samples are not well represented by just any combination of sparse features. However, sampling from a PDF which has the Spike distribution of a conditional posterior (encoding from one observation) and the Slab distribution of the prior not only produces good synthetic samples, but does so conditioned on the features present in the particular encoded observation. Through the conditional activation of only certain variables, VSC defines a sort of \\u201csub-generative model\\u201d for a given observation that models only the continuous sources of variation identified in the specific object and similar ones. For example, consider a VSC trained on fashion-MNIST; if we sample from the prior Gaussian, but only along the dimensions activated by the encoding of a t-shirt, we have a sub-generative model for t-shirts. We partially discuss this in section 4.3, although approaching it from a modification-of-encodings perspective rather than a sampling one. We realise the connection is not clear and we will add a discussion and experimental results either in the main body or appendix to clarify this important aspect.\\n\\n2 \\u2013 *Increasing the KL* Indeed, representations found with VAEs do suffer from this known problem of ignoring latent representations. VSC does in part counteract this effect due to the discretisation from the spike variables, but is similarly affected by it. In our experiments we do not aim to obtain the best representations or classification accuracy achievable with our model, but rather compare to the standard VAE in order to highlight the difference between sparse and normally regularised latent vectors and the advantage in robustness when increasing the number of latent dimensions. 
The overall representation quality, and consequently classification performance, can be improved at the expense of the ELBO value by increasing the coefficient of KL regularisation, as in beta-VAEs. By doing so, we get classification accuracies for MNIST above 90% for 5,000 labelled examples. We will add an experimental section in either the main body or the appendix where we compare this beta-VAE strategy for VAEs and VSCs and discuss how the VSC advantage varies as the beta coefficient is changed.\"}",
"{\"title\": \"Reply to Reviewer 2 Minor Comments\", \"comment\": [\"We hereafter address what we believe to be the most relevant minor comments:\", \"We thank the reviewer for the corrections on the definitions and references in sparse coding and VAEs. We will revise the relevant sections in the paper accordingly.\", \"Given a very large number of iterations and a very small step size, the standard VAE does approximately prune extra dimensions when trained with a large latent space, still with some limited overfitting. However, within a limited iteration budget (of 20,000 iterations in our example), larger latent spaces fail to converge to a high enough value of the ELBO. The reason for the drop in classification performance is similar; with an unlimited computational budget and many labelled examples, the performance of VAEs is expected to only increase or stay stable as the latent space dimensionality increases, but with limited iterations and available labels, overfitting and difficulty of convergence largely hinder performance.\", \"We feel that comparing to the beta-VAE or info-VAE may be interesting, but we do not aim to compare representation performance with these methods, as they explore the theme of interpretation in an orthogonal direction. For instance, it is perfectly plausible to build a beta-VSC by varying the sparse prior term as is done with other priors in the beta-VAE.\", \"The paper \\u201cSparse Coding Variational Auto-Encoders\\u201d is indeed related to our work and we thank the reviewer for pointing it out. The scope is however different; a variational autoencoder approach is used to obtain better inference in linear sparse coding, using heavy-tailed PDFs as sparsity-promoting priors. In our work, we aim at modelling sparse non-linear features of observations with the Spike and Slab prior.\"]}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"*Contextualisation* We thank the reviewer for pointing out the related work. We agree that better contextualisation is needed to appreciate the contribution. We will therefore modify the introduction and related work sections to incorporate papers relevant to existing VAE methods and different latent space priors.\\n\\n*Novelty* We do not claim novelty of the re-parametrisation trick for binary variables alone and we will cite the appropriate work as advised. However, we are unable to find in the literature referenced by reviewer 2 (and in general) the derivation of an analytic form for the general discrete mixture-Spike and Slab KL divergence (reported in section 3.1 and derived in appendix B of our paper).\\n\\nAs we may be missing the relevant sections of the cited literature, we kindly ask reviewer 2 if he/she could refer to the specific pages or equations that detail an analytic form for the discrete mixture-Spike and Slab KL divergence we present in our paper?\\n\\nIn the mentioned works, we observe the following:\\n\\n-\\tIn \\u201cDiscrete Variational Auto-encoders\\u201d by Rolfe, the KL divergence term of the ELBO for a recognition function that models dependencies between continuous and discrete variables is estimated and derived stochastically, as detailed in appendix F. In our work, we directly derive an exact analytic discrete mixture-Spike and Slab KL divergence that induces sparse regularisation and does not require stochastic sampling to be estimated.\\n\\n-\\tIn Yarin Gal\\u2019s thesis, the approximate posterior distribution q is the product of an approximation to an optimal posterior component obtained by moment matching and the prior itself (see p.124). The KL divergence between such an approximate posterior and a Spike and Slab prior is then reported in appendix C. 
Because the approximate posterior q contains the prior p, this KL divergence is different and arguably simpler to compute analytically than the one we present in our paper; the prior simplifies inside the logarithm, leaving the cross-entropy between the approximate posterior (which contains a Spike and Slab) and the moment-matched Gaussian (see p.159). In our work we derive a general discrete mixture-Spike and Slab KL divergence that works for any discrete-continuous mixture distribution recognition function.\\n\\n-\\tIn \\u201cWeight Uncertainty in Neural Networks\\u201d by Blundell et al. (if this is the paper reviewer 2 is referring to) the proposed prior is a scale mixture of two Gaussians which resembles the Spike and Slab distribution (section 3.3), and the KL divergence is computed stochastically along with the rest of the ELBO (equation 2). While in this work the KL divergence is estimated by sampling from a general posterior q, we derive an exact analytic form for the KL divergence between a discrete mixture recognition function and a Spike and Slab prior.\\n\\n*Comparison with other VAE models* Experimental comparison with other priors presented in previous work would indeed be interesting. However, we point out that in our evaluation we aim to study the effect of sparsity in the latent space of a VAE and show the characteristics of sparse representations, rather than demonstrate a new method that performs better than previous ones in some settings. The comparison is drawn with respect to the standard VAE to clearly show how sparse latent representations differ from normally regularised ones and give the reader a clear intuition of what effects may be expected when inducing sparsity in the latent space, and where it might be useful to do so in other models.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"We thank the reviewer for the comments and we are glad to find that some main points and advantages of our proposed method were recognised: inducing sparsity in the latent space of a VAE in order to find non-linear sparse codes that can constitute useful inputs in semi-supervised learning and allow for interpretable control in the generation of data. The novelty concern is addressed in the general reply, while below we address each individual concern:\\n\\n*Need for Cross-Validation* When finding useful latent representations for controlled generation and classification tasks, there is no need to cross-validate the sparsity parameter of the prior. In our experiments we set this parameter to a sufficiently low value (0.01) such that the regularisation term of the ELBO essentially induces the latent variables to always be zero, and the reconstruction term induces only the variables needed to reconstruct samples to be active. This effect occurs for any sufficiently low value of the prior sparsity parameter. This is shown in Fig. 11 in the appendix, where for values of alpha lower than 0.1 the classification accuracy is steadily high. \\n\\n*Advantage with More Labels* The advantage is more pronounced at a lower number of available labels and is especially useful in semi-supervised settings. However, the advantage is still present at higher regimes of labelled data; in Fig. 11, the blue line is the classification performance as a function of latent prior sparsity alpha for 20,000 labelled examples (1/3 of the examples used to train the VSC). At alpha=1, approximately corresponding to a standard VAE, the classification accuracy is ~81% for MNIST and ~72% for Fashion-MNIST, while it is ~88% and ~80% at alpha<0.1. 
We will clarify this point in the revised version of the paper.\\n\\n*Interpretation* The discussion on this aspect of sparse latent spaces is particularly interesting and we hope to initiate a conversation on it as well as study it formally in future work. \\nWe do not explicitly induce interpretation. However, sparsity in the latent space does result in a higher expectation of interpretability in large latent spaces, provided that the sources of variation in the observed data can be considered sparse (many possible features are present in the ensemble, but only small subsets of them are present in each individual example). \\nConsider a VAE with a large latent space dimensionality. The model will cluster distinct objects in different regions of the latent space, and controlled generation is possible by interpolating between the regions of an aggregate posterior. However, given the encoding for one single example, the direction in which to move to modify interpretable aspects of the generation is difficult to find; there are many normally distributed latent variables, and interpretable changes may or may not be caused by altering any combination of these. Of course, it is possible to improve the expected interpretability of altering elements by lowering the dimensionality of the latent space, but this also reduces the capacity of the model and hinders the ability to model data that may present a large number of features in its aggregate.\\nVSC aims at modelling data which presents few features in individual examples, but many in the data aggregate. When encoding a single example, the vector we obtain has only a small subset of active features, and we can expect these few dimensions to control the continuous variables that represent relevant sources of variation for this example and similar objects, while ignoring others by setting them to zero. 
In such a way the sub-space of smoothly variable features relevant to each encoded example is defined by the encoding itself. At the same time, the model retains the capacity to describe complicated data ensembles by being able to use different sparse elements for different examples.\\nWe realise this may not be very clear in the current version of the paper and we will make such theme a central point in the discussion of section 4.3.\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their thoughtful comments. We notice that the reviewers are mainly concerned with the novelty of our approach and the resulting algorithms. Our response to this criticism is as follows:\\n\\n*Novel and Non-Trivial Analytic Contribution*: The derived ELBO is elegant and the resulting implementation intuitive; however, it is rigorously derived in a way that is not at all obvious, trivial or known in the literature (to the best of our knowledge; see individual replies for details). We directly derive an analytic expression for a discrete mixture-Spike and Slab KL divergence (reported in section 3.1 and derived in appendix B) which results in closed-form variational sparse inference in a continuous space and gives rise to a distinctly different and simpler algorithm than previous approaches. We will include a concise version of the KL derivation of appendix B in the main body of our revised paper.\\n \\n*Intuitive and Generalisable Approach*: Formulating the problem in analogy to the original VAE is intentional, for clarity and focus of scope. We present a general formulation of sparse inference with VAEs that can be a powerful tool to obtain sparse representations and is extendable in different directions. The presented ELBO can be incorporated in more elaborate models that aim to infer sparsity; thus it is not a stand-alone model to solve a specific problem.\\n\\nWe will revise our paper to clarify these key points and relate the contribution more explicitly to previous work (e.g. that outlined by R2) which has similar overall goals but approaches the problem in distinctly different ways.\\n\\nWe will post replies to each individual reviewer. We aim to present an updated version of our paper by this Friday, 16th November, but would invite the reviewers to comment on our feedback and current plans as soon as possible.\"}",
"{\"title\": \"Potentially Interesting\", \"review\": \"In this work the authors propose to replace the Gaussian distribution over latent variables in standard variational autoencoders (VAEs) with a sparsity-inducing spike-and-slab distribution. While taking the slab to be Gaussian, the authors approximate the spike with a scaled sigmoid function, which is then reparameterized through a uniform random variable. The authors derive an extension of the VAE lower bound to accommodate KL penalty terms associated with spikes. The variational lower bound (VLB) is optimized stochastically using SGD (with the KL divergence computed in closed form). Results on benchmarks show that, compared to a standard VAE, the proposed method achieves a better VLB for higher numbers of latent dimensions. Classification results on latent embeddings show that the proposed method achieves stable classification accuracy with an increasing number of latent dimensions. Lastly, the authors visualize sampled data to hint that different latent dimensions may encode interpretable properties of the input data.\", \"originality_and_significance\": \"In my opinion, the approach taken in this work does not constitute a major methodological advancement; the VLB the authors derive is a relatively straightforward extension of the VAE's lower bound.\", \"pros\": \"The paper is well-written and easy to follow. \\nThe idea of having a sparse prior in the latent space is indeed relevant. \\nThe approximation and reparameterization of the spike variable is functionally appealing. \\nPotentially useful for semi-supervised learning or conditional generative modeling.\", \"concerns\": \"The authors show various empirical results to highlight the performance of their approach, but I am still not sure where it is best to use sparse embeddings induced by the proposed approach vs. those of a standard VAE (or other of its sparse variants, e.g., rectified Gaussian priors by Tim Salimans). 
For instance, in all experiments the VAE seems to be competitive or better for a low-dimensional latent space, so one may ask: why is it necessary to go to a higher number of latent variables? In a VAE setup, one can simply tune the number of latent dimensions through cross-validation, as one would probably need to do to tune the prior sparsity parameter in the proposed method. \\n\\nI am also wondering if the disparity between the VAE and the proposed method w.r.t. classification performance for an increasing number of latent dimensions vanishes as more labeled data is used for training? Fig. 11 in the appendix seems to indicate that. \\n\\nLastly, I am not sure how we can expect to always converge to interpretable encodings, since there is nothing explicit in the objective function to encourage interpretable solutions. Perhaps samples such as those shown in the paper can also be generated by modulating VAE embeddings?\\n\\nMaybe the proposed approach offers potential for tasks such as semi-supervised learning or conditional generative modeling, but the current set of empirical results does not allow one to draw any conclusions there.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Straightforward extension of VAEs to sparse priors\", \"review\": \"This paper proposes an extension of VAEs with sparse priors and posteriors to learn sparse interpretable representations. Training is made tractable by computing the analytic KL for spike and slab distributions, and using a continuous relaxation for the spike variable. The technique is evaluated on MNIST, Fashion MNIST, and CelebA where it learns sparse representations with reasonable log-likelihood compared to Gaussian priors/posteriors, but improved classification accuracy and interpretability of the representation.\\n\\nWhile this paper is clear and well written, the novelty of the approach is limited. In particular, this is a straightforward application of vanilla VAEs with a different prior/posterior. The authors missed a bunch of related work and their main theoretical contributions are known in the literature (KL for spike and slab distributions, effective continuous relaxations for Bernoulli variables). The experiments are interesting but the authors should compare to more baselines with alternative priors (e.g. stick breaking VAEs, VampPrior, epitomic VAEs, discrete VAEs).\\n\\nStrengths\\n+ Well written, clear, and self-contained paper. Figures are nice and polished.\\n+ Thorough experiments studying the effect of sparsity on the representation\\n\\nWeaknesses\\n- No discussion/comparison to other VAE approaches that incorporate sparsity into the latents: Epitomic VAEs (2017), discrete VAEs with binary or categorical latents are sparse (see: Discrete VAEs, Concrete/Gumbel-Softmax, VQ-VAE, output-interpretable VAEs), stick breaking VAEs, structured VAEs for the Beta-Bernoulli process (Singh, Ling, et al., 2017). Missing citation to foundational work on sparse coding from Olshausen and Field (1996). 
\\n- Lack of novelty: The analytic KL term for spike and slab priors has been derived before in Discrete VAEs (Rolfe, 2017) and in work on weight uncertainty (Yarin Gal's thesis, Blundell et al. 2016). Continuous relaxations like the one used for the spike variable have been presented in earlier work (Concrete distribution, Gumbel-Softmax, Discrete VAEs).\", \"minor_comments\": [\"Eq. 1, shape for B should be MxJ\", \"Cite Rezende & Mohamed for VAEs along w/ Kingma & Welling\", \"Definition of VAE is overly restrictive. Typically a VAE is the combination of variational inference with an amortized inference network (and optionally reparameterization gradients). Saying that VAE implies a Gaussian prior and Gaussian posterior is far too restrictive.\", \"VLB is a non-standard acronym, use ELBO for evidence lower bound\", \"I'm surprised that VAEs perform so poorly as latent dim increases. I'd expect it to just prune latent dimensions. Do you have an explanation for why performance drops for VAEs? Are they overfitting?\", \"VAEs with Gaussian p(x|z) are typically harder to train and more sensitive to hyperparameters than Bernoulli p(x|z). Could you repeat your experiments using the more common binarized MNIST so that numbers are comparable to prior work?\", \"If the goal is to learn representations with high information, then beta-VAEs or InfoVAEs should be compared (see analysis in Alemi et al., 2017). The number of dimensions may matter less for classification than the rate of the VAE. To analyze this further, you could plot the rate KL(q(z|x) || p(z)) vs. the classification accuracy for all your models.\", \"Fig 4: consider adding plots of continuous interpolation of the latent dimension (as in beta-VAE, TC-VAE, etc.)\", \"Would be interested to see how much class information is stored in the value vs. the pattern of non-zeroes in the latent representation (as done in Understanding Locally Competitive Networks from Srivastava et al. 
2014).\", \"Not at all expected as this came out after your submission, but would be nice to compare to a similar recent paper: https://www.biorxiv.org/content/early/2018/08/23/399246\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interpretable VAE with sparse coding\", \"review\": \"This paper presents variational sparse coding (VSC). VSC combines the variational autoencoder (VAE) with sparse coding by putting a sparsity-inducing prior -- the spike and slab prior -- on the latent code z. In doing so, VSC is capable of producing sparse latent codes, utilizing the latent representation more efficiently regardless of the total dimension of the latent code, while offering better interpretability. To perform tractable inference, a recognition model with the same mixture structure as the spike and slab prior is used to produce the approximate posterior. Experimental results on both MNIST and Fashion-MNIST show that even though VSC performs somewhat worse than the VAE in terms of ELBO, the representation it learns is more robust with respect to the total latent dimension in a downstream classification task. Additionally, the authors show that VSC provides better interpretability by interpolating the latent code and find that some dimensions correspond to certain characteristics of the data.\\n\\nOverall, the paper is clearly written and easy to follow. VSC is reasonably motivated and the idea behind it is quite straightforward. On the technical side, the paper is relatively incremental -- all of the building blocks for performing tractable inference are standard: since the posterior is intractable for nonlinear sparse coding, a recognition network is used; the prior is spike and slab, so the recognition network outputs parameters in a similar mixture structure with both a spike and a slab component; to apply the reparametrization trick to the non-differentiable latent code, a continuous relaxation, similar to the one used in the Concrete distribution/Gumbel trick, is applied to approximate the step selection function with a controllable \\\"temperature\\\" parameter. Overall, the novelty is not the strong suit of the paper. 
I do like the idea of VSC and its ability to learn interpretable latent features for complex non-linear models, though. I have two major comments regarding the execution of the experiments that I hope the authors could address:\\n\\n1. It is understandable that VSC is not able to achieve the same level of ELBO as the VAE, as is quite common in models which trade off performance for interpretability. However, one attractive property of the VAE is its ability to produce relatively realistic samples right from the prior, since its latent space is fairly smooth. It is not clear to me if VSC has the same property -- my guess is probably not, judging from the interpolation results currently presented in the paper. It would be interesting if the authors could comment on this and maybe include some examples to illustrate it.\\n\\n2. As is known in some recent literature, e.g. Alemi et al., Fixing a Broken ELBO (2018), a VAE can easily be trained to simply ignore the latent representation, hence producing terrible performance on a downstream classification task. I don't know exactly how the data is processed, but on MNIST, an accuracy of less than 90% is quite bad (I can get >90% with PCA + logistic regression). I wonder if the authors have explored the idea of learning better representations by including a scalar in front of the KL term -- or if VSC is more robust to this problem of ignoring the latent code.\", \"minor_comments\": \"\", \"a_potential_relevant_reference\": \"Ainsworth et al., Interpretable VAEs for nonlinear group factor analysis (2018).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Difference in Scope\", \"comment\": \"If you are referring to the paper from 2012 titled \\u201cLarge-Scale Feature Learning With Spike-and-Slab Sparse Coding\\u201d, we would like to point out that our work is quite different in scope. In this work from 2012, the authors propose an efficient Spike and Slab variational inference method for linear sparse coding models; a Bayesian parallel to traditional sparse coding, if you will. The aim there is to induce regularisation in the recovery of sparse codes, in turn improving the reliability of feature extraction in images when classifying with a low number of labelled examples available. To this end, in their classification evaluation, they divide the images into small patches (6x6, I believe) and use these to learn the dictionary of sparse features.\\n\\nThe aim of our work is to perform sparse variational inference with arbitrarily complicated non-linear mappings. By modelling non-linear sparse features we aim to obtain interpretable and useful latent representations while at the same time retaining the reconstruction/synthesis capability of generative models, rather than just extracting features. To make this tractable, we use the framework of VAEs (introduced in 2013, as mentioned by the commenter below). 
The inference we perform is significantly more computationally difficult than the feature extraction presented in the aforementioned previous work, mainly for two reasons:\\n\\n1)\\tThe model we use is non-linear, using neural networks in the mappings between latent and observation spaces, making variational inference far less tractable (hence the VAE approximate inference architecture).\\n\\n2)\\tWe don\\u2019t make any image-specific assumptions about the objects we model and use the entire raw images (MNIST and fashion-MNIST are 28x28=784 and our CelebA dataset is 32x32x3=3072, as opposed to the 6x6x3=108 pre-processed colour patches modelled in the previous paper).\\n\\nThe combination of these two aspects allows us to isolate a few global and non-linear sparse features, such as facial traits and clothing characteristics, rather than large dictionaries of linear sparse features over image patches, containing lines and curves such as those shown in the paper from 2012.\\n\\nTo adapt our model specifically for classification of varied natural images (such as CIFAR) and benchmark against the strategy employed in the linear Spike and Slab inference paper, one could use convolutional encoding and decoding neural networks with pooling regions of appropriate size and some pre-processing. Though this may certainly be an interesting investigation, it is beyond the scope of the work we present here; in our evaluation we are interested in examining the effect of sparsity in the latent space on the performance of general non-linear representation models (in particular VAEs, and we use fully connected layers for generality) and not in specifically improving the accuracy or computational efficiency of image classification.\"}",
"{\"comment\": \"It would be very helpful to know, which works the previous commenter is exactly referring to. After all, the original paper on the VAE by Kingma and Welling was published only in 2013.\", \"title\": \"Citation\"}",
"{\"comment\": \"Previous work on variational inference for spike and slab sparse coding was evaluated on datasets such as CIFAR-10 and STL-10: datasets consisting of color photographs. That was 6+ years ago, when GPUs were significantly slower. If the proposed method actually works and scales, it should be possible to easily outperform papers from 2012 using modern hardware on the same benchmarks they used.\", \"title\": \"Should benchmark against prior work\"}"
]
} |
|
Hy4R2oRqKQ | Canonical Correlation Analysis with Implicit Distributions | [
"Yaxin Shi",
"Donna Xu",
"Yuangang Pan",
"Ivor Tsang"
] | Canonical Correlation Analysis (CCA) is a ubiquitous technique that shows promising performance in multi-view learning problems. Due to the conjugacy of the prior and the likelihood, probabilistic CCA (PCCA) presents the posterior with an analytic solution, which provides a probabilistic interpretation for classic linear CCA. As multi-view data are usually complex in practice, nonlinear mappings are adopted to capture nonlinear dependency among the views. However, the interpretation provided in PCCA cannot be generalized to this nonlinear setting, as the distribution assumptions on the prior and the likelihood make it restrictive to capture nonlinear dependency. To overcome this bottleneck, in this paper, we provide a novel perspective on CCA based on implicit distributions. Specifically, we present minimum Conditional Mutual Information (CMI) as a new criterion to capture nonlinear dependency for the multi-view learning problem. To eliminate the explicit distribution requirement in the direct estimation of CMI, we derive an objective whose minimization implicitly leads to the proposed criterion. Based on this objective, we present an implicit probabilistic formulation for CCA, named Implicit CCA (ICCA), which provides a flexible framework for designing CCA extensions with implicit distributions. As an instantiation, we present adversarial CCA (ACCA), a nonlinear CCA variant which benefits from the consistent encoding achieved by adversarial learning. Quantitative correlation analysis and superior performance on the cross-view generation task demonstrate the superiority of the proposed ACCA. | [
"Canonical Correlation Analysis",
"implicit probabilistic model",
"cross-view structure output prediction"
] | https://openreview.net/pdf?id=Hy4R2oRqKQ | https://openreview.net/forum?id=Hy4R2oRqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1e10J2xl4",
"BJlmc3EtyE",
"Bke-nsVt14",
"rkeP0gxSyV",
"rklmIENEJ4",
"rJe19Og4kE",
"rJgpPvg41V",
"BkeMHiIZ1N",
"BJg5zj0eJN",
"H1enRDXtRQ",
"r1lKaUmtAX",
"Byex_m7F0Q",
"HkgJh-QKAX",
"HygXIrnhn7",
"HklJ3TmqnX",
"Sye9zUNbsm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544761286654,
1544273035049,
1544272809411,
1543991503350,
1543943243369,
1543927942558,
1543927653453,
1543756601830,
1543723793792,
1543219155769,
1543218881162,
1543218024210,
1543217574886,
1541354827216,
1541189030636,
1539552786500
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper761/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper761/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/Authors"
],
[
"ICLR.cc/2019/Conference/Paper761/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper761/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper761/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This manuscript proposes an implicit generative modeling approach for the non-linear CCA problem. One contribution is the proposal of Conditional Mutual Information (CMI) as a criterion to capture nonlinear dependency, resulting in an objective that can be solved using implicit distributions. The work seems to be well motivated and of interest to the community.\\n\\nThe reviewers and AC opinions were mixed, and the rebuttal did not completely address the concerns. In particular, a reviewer pointed out an issue with a derivation in the paper, and the issue was not satisfactorily resolved by the authors. Some additional reading suggests that the misunderstanding may be partially due to incomplete notation and other issues with clarity of writing.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}",
"{\"title\": \"Part 2: explanation on the motivation of our work.\", \"comment\": \"Second, the motivation of our work can be explained as follows.\\n\\nBased on an in-depth discussion on the restrictive assumptions on the probabilistic interpretation of linear CCA (PCCA), we aim to re-decide the criteria to generalize the probabilistic understanding to complex nonlinear CCA models and relax the assumptions.\", \"the_cmi_criteria\": \"We first analyze that minimum CMI is a reasonable criteria that can overcome the limitations of PCCA. Then, we derive an equality (Eq.6) that presents the connection between the expectation of marginal log-likelihood, the CMI and the expected reconstruction error. Based on this equation, we derive a surrogate objective (Eq.7) that can implicitly achieve the proposed minimum CMI criteria. With this objective, the explicit data distribution assumptions are avoided.\", \"the_icca_framework\": \"As the $p(z|x, y)$ in Eq.7 is unknown for practical problems, approximate inference methods can be adopted to solve the optimization problem. 
\\nSpecifically, if the variational inference method is adopted, the ELBO derivation is as follows:\\n**********\\n\\\\begin{eqnarray}\\n\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})}\\\\log p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y}) \\n&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [\\\\log p(\\\\mathbf{x},\\\\mathbf{y})\\\\int_{\\\\mathbf{z}}q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}) d\\\\mathbf{z}] \\\\\\\\ \\n&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [ \\\\int_{\\\\mathbf{z}}q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})\\\\log [\\\\frac{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})}{{p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})}} \\\\cdot \\\\frac{{p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y}|\\\\mathbf{z})}}{ p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x}|\\\\mathbf{z}) p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{y}|\\\\mathbf{z})} \\\\cdot p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x}|\\\\mathbf{z}) \\\\cdot p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{y}|\\\\mathbf{z}) \\\\cdot \\\\frac{{p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z})}}{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})}] d\\\\mathbf{z}] \\\\\\\\ \\n&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} D_{KL}(q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})\\\\parallel p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})) \\\\\\\\ \\n& & + \\\\ \\\\textcolor{blue}{\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [\\\\mathbb{E}_{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})} \\\\textcolor{red}{\\\\log p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y}|\\\\mathbf{z})} - D_{KL}{(q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})\\\\parallel p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}))}] \\\\Rightarrow 
\\\\mathcal{L}_{1}(\\\\mathbf{x},\\\\mathbf{y};\\\\mathbf{\\\\theta},\\\\mathbf{\\\\phi})} \\\\\\\\ \\n&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} D_{KL}(q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})\\\\parallel p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})) \\\\\\\\ \\n& & + \\\\iint \\\\ [\\\\int_{\\\\mathbf{z}}p(\\\\mathbf{x},\\\\mathbf{y}){q_{\\\\phi}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})} \\\\log {\\\\frac{p(\\\\mathbf{x},\\\\mathbf{y}|\\\\mathbf{z})}{p(\\\\mathbf{x}|\\\\mathbf{z})p(\\\\mathbf{y}|\\\\mathbf{z})}} d\\\\mathbf{z}] d\\\\mathbf{x}d\\\\mathbf{y}, \\\\\\\\ \\n& &+ \\\\ \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [\\\\mathbb{E}_{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})} [\\\\log p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x}|\\\\mathbf{z}) + \\\\log {p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{y}|\\\\mathbf{z})}] - D_{KL}{(q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z})\\\\parallel p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}))}]\\n\\\\end{eqnarray}\\n**********\\nFor the last equation, we can see that, given the KL-divergence constraint between $q(z)$ and $p(z|x, y)$, the second RHS term can be regarded as an approximation of the CMI. Consequently, by adopting variational inference for the approximation, our objective (substituting $p(z|x,y)$ with $q(z)$) can implicitly achieve the minimum CMI criterion proposed for CCA. 
\\n\\nFurthermore, considering the difference between the last equation and $ \\\\mathcal{L}_{1}(\\\\mathbf{x},\\\\mathbf{y};\\\\mathbf{\\\\theta},\\\\mathbf{\\\\phi}) $, it is verified that with the adopted CMI criterion, the conditional independence assumption is avoided.\", \"the_acca_instantiation\": \"Considering the misaligned-encoding limitation of VCCA, we design ACCA with an adversarial learning technique based on Eq.11.\\nSpecifically, apart from the first two terms that are naturally required by the objective of CCA, we further introduce $q(z|x, y)$ to provide holistic information about the joint data distribution $p(x, y)$. Then, considering the difficulty of modeling these approximations individually, we constrain all three encodings, $q(z|x)$, $q(z|y)$ and $q(z|x, y)$, to be similar to the prior $p(z)$. In this way, the approximation of these encodings is implicitly satisfied.\\n\\nBased on the aforementioned analysis, the proposed ACCA is formulated with Eq. 12, which conforms to both our formulation of ICCA in Eq.7 and the network design in Figure 2. \\n\\nIn the end, thanks again for all the reviewers\\u2019 valuable comments and constructive suggestions on our work. We will strive to make this work a better one.\"}",
"{\"title\": \"Corrections on Eq.(6) and response to the common concerns about our motivation.\", \"comment\": \"In this post, we first address some misleading typos in our submission, then we give a detailed explanation on the motivation of our work.\\n\\nFirst, we express sincere thanks to the AnonReviewer1 for pointing out the misleading typos on Eq.6 in our current submission. \\n\\nThe correct form of the equation would be\\n$\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p(\\\\mathbf{x},\\\\mathbf{y})} \\\\log {p(\\\\mathbf{x},\\\\mathbf{y})} = I{(\\\\mathbf{X};\\\\mathbf{Y}|\\\\mathbf{Z})} - \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p(\\\\mathbf{x},\\\\mathbf{y})} \\\\mathcal{F}(\\\\mathbf{x},\\\\mathbf{y})$\\nwhich is consistent with our consistent with our derivation in Appendix 7.\\n \\nThe correct explanation in the text above this equation would be \\u201cthe expectation of the marginal log-likelihood\\u201d, which can also be regarded as the joint entropy of X, Y.\"}",
"{\"title\": \"Thank you very much for pointing out the ambiguous descriptions in our submission.\", \"comment\": \"Yes. We admit the typos in the submitted version. The Eq.6 would be consistent with our derivation in Appendix 7.\\n\\nThe correct form of the equation would be \\n$\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p(\\\\mathbf{x},\\\\mathbf{y})} \\\\log {p(\\\\mathbf{x},\\\\mathbf{y})} = I{(\\\\mathbf{X};\\\\mathbf{Y}|\\\\mathbf{Z})} - \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p(\\\\mathbf{x},\\\\mathbf{y})} \\\\mathcal{F}(\\\\mathbf{x},\\\\mathbf{y})$\\n\\nAnd the correct explanation in the text would be \\u201cthe expectation of the marginal log-likelihood\\u201d. \\n\\nConsidering the corrected equation, as supported by the proof in Appendix 7, the equality presents the connection between the expectation of marginal log-likelihood, the CMI criteria, and the expected reconstruction error. The expectation of $\\\\log p(X, Y)$ would be a constant with respect to the generative parameters\\u201d. That is why we claim that \\u201cthe minimization of CMI can be implicitly achieved by optimizing Eq.7 \\u201d. Consequently, Eq.7 is presented as a surrogate objective that implicitly leads to the CMI criteria.\\nThe connection between Eq.7 and CMI can also be supported by our explanation on the ELBO derivation posted in the previous response. \\n\\nConsidering practical tasks, for pairwise data, the expectation would be the average of the marginal log-likelihood on the training set $\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})}\\\\log p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y}) = \\\\frac{1}{N} \\\\log p(\\\\mathbf{X}, \\\\mathbf{Y})$.\\n\\nThis coincides with our practical objective of ACCA. \\n$\\\\min \\\\sum_{i=1}^{N}\\\\frac{1}{N}\\\\mathcal{F}_{\\\\rm ACCA} (\\\\mathbf{x},\\\\mathbf{y})$\\nwhere $N$ denotes the number of data pairs. 
And $\\\\mathcal{F}_{\\\\rm ACCA} (\\\\mathbf{x},\\\\mathbf{y})$ is presented with Eq.12.\"}",
"{\"title\": \"---\", \"comment\": \"In the text above equation (6) you refer to the log-likelihood rather than its expectation. Once you replaced the log-likelihood with its expectation, what kind of statistical estimation/inference technique(s) do you use and why would this imply that *minimization* of CMI is the way to go?\"}",
"{\"title\": \"The constraints of ACCA are adopted based on our formulation, but relaxing the constraint indeed improves the log-likelihood in the test set.\", \"comment\": \"Thanks for your interest and support.\\n\\nYour understanding of our design of ACCA is correct. However, for our ACCA, the constraints are adopted based on our formulation of ICCA ( Eq.7), and it is not \\u201cover constrained\\u201d. \\n\\nSpecifically, for Eq.11, the first three terms are required to be satisfied based on the objective of CCA. However, in practical, these three terms are complicated and difficult to be modeled individually.\\nInspired by the variational inference, which can drive the approximation of $q(z)~p(z)$, We further adopt specially-designed GAN structure, to drive $q_{x}(z)$, $q_{y}(z)$ and $q_{xy}(z)$ to approximate the same prior $p(z)$ simultaneously. In this way, the approximation of $q_{x}(z)$, $ q_{y}(z)$ and $q_{xy}(z)$ required by CCA, is implicitly satisfied.\\n\\nFor additional experiments, the log-likelihood obtained without the constraint of \\\"q(z|x,y) \\\\approx p(z)\\\" for ACCA(GM) is $94.912$ (comparable with that of ACCA(G)). This shows that relaxing the constraint improves the log-likelihood in the test set, which coincides with the statement that \\u201cflexible posteriors would improve the ELBO of generative models.\"}",
"{\"title\": \"Thanks for the constructive comments. We are sorry for the indistinct notations that confused your understanding, but this do not indicates flaw of our motivation.\", \"comment\": \"For the Left Hand Side (LHS) term of Eq.6, we improperly adopt $\\\\log p(\\\\mathbf{X}, \\\\mathbf{Y})$ to indicate the expectation of the marginal likelihood (LHS term of Eq.18). The exact form of Eq. 6 would be\\n\\n$\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p(\\\\mathbf{x},\\\\mathbf{y})} \\\\log {p(\\\\mathbf{x},\\\\mathbf{y})} =\\nI{(\\\\mathbf{X};\\\\mathbf{Y}|\\\\mathbf{Z})} - \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p(\\\\mathbf{x},\\\\mathbf{y})} \\\\mathcal{F}(\\\\mathbf{x},\\\\mathbf{y})$\\n\\nWe admit that this is an indistinct notation that confused your understanding. But this does not influence the soundness of our motivation.\\n\\nFor your second concern, we do not claim that Appendix 7 presents the ELBO. Actually, we simply analogize the derivation of the ELBO in [1], and derive an equality that presents the connection between the expectation of marginal log-likelihood, the CMI and the expected reconstruction error. Based on the derived equality, we obtained a surrogate objective that implicitly leads to the CMI criteria.\\n\\nAs stated in \\u201c ICCA as a framework \\u201d, since the $p(z|x, y)$ is hard to infer for practical problems, approximate inference methods are to be adopted to instantiate practical models. Formulation of the model would compose of two parts:1). our objective in Eq.7 , with $p(z|x,y)$ substituted by $q(z|*)$; 2). constraint for the approximation of $p(z|x, y)$ and $q(z|*)$. 
\\n\\nSpecifically, if variational inference is adopted for the approximation, the connection between our objective and ELBO can be given as follows.\\n\\n**********\\n\\\\begin{eqnarray}\\n\\t\\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})}\\\\log p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y}) \\n\\t&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [\\\\log p(\\\\mathbf{x},\\\\mathbf{y})\\\\int_{\\\\mathbf{z}}q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y}) d\\\\mathbf{z}] \\\\\\\\ \\\\nonumber\\n\\t&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [ \\\\int_{\\\\mathbf{z}}q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})\\\\log [\\\\frac{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})}{{p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})}} \\\\cdot \\\\frac{{p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y}|\\\\mathbf{z})}}{ p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x}|\\\\mathbf{z}) p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{y}|\\\\mathbf{z})} \\\\cdot p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x}|\\\\mathbf{z}) \\\\cdot p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{y}|\\\\mathbf{z}) \\\\cdot \\\\frac{{p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z})}}{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})}] d\\\\mathbf{z}] \\\\\\\\ \\\\nonumber\\n\\t&=& \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} D_{KL}(q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})\\\\parallel p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})) \\\\\\\\ \\\\nonumber\\n\\t& & + \\\\iint \\\\ [\\\\int_{\\\\mathbf{z}}p(\\\\mathbf{x},\\\\mathbf{y}){q_{\\\\phi}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})} \\\\log 
{\\\\frac{p(\\\\mathbf{x},\\\\mathbf{y}|\\\\mathbf{z})}{p(\\\\mathbf{x}|\\\\mathbf{z})p(\\\\mathbf{y}|\\\\mathbf{z})}} d\\\\mathbf{z}] d\\\\mathbf{x}d\\\\mathbf{y}, \\\\\\\\ \\\\nonumber\\n\\t& &+ \\\\ \\\\mathbb{E}_{\\\\mathbf{x},\\\\mathbf{y} \\\\sim p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x},\\\\mathbf{y})} [\\\\mathbb{E}_{q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})} [\\\\log p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{x}|\\\\mathbf{z}) + \\\\log {p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{y}|\\\\mathbf{z})}] - D_{KL}{(q_{\\\\mathbf{\\\\phi}}(\\\\mathbf{z}|\\\\mathbf{x},\\\\mathbf{y})\\\\parallel p_{\\\\mathbf{\\\\theta}}(\\\\mathbf{z}))}]\\n\\\\end{eqnarray}\\n\\n**********\\n\\nWe can see that the first RHS term is the KL-divergence that constrains the approximation of \\n$p(z|x, y)$ by $q(z|*)$. As $D_{KL}\\\\geq 0$, the remaining two terms compose an ELBO of the marginal log-likelihood. The second RHS term can be regarded as an approximation of the CMI, and the third term is our derived objective in Eq.7.\\n\\nWe can see that, if $D_{KL}=0$ is satisfied, the second term would be the CMI, which is guaranteed to be non-negative. This makes our objective an ELBO for the problem. This also indicates that the CMI is the criterion behind this variational formulation. \\n\\nThe derivation also coincides with our formulation of ACCA, in which we incorporate three $q$ functions to approximate $p(z|x, y)$ through a GAN structure. As verified in the \\u201ccorrelation analysis\\u201d section, the proposed method captures stronger nonlinear dependency than the other baselines.\"}",
"{\"title\": \"Why constraining the variational posterior q(z|...) be similar to a simple prior p(z)?\", \"comment\": \"I appreciate the authors' detailed response. I like this work and would recommend acceptance. I still have one question on Equation (11).\\n\\nIn principle, more flexible posteriors would improve the ELBO of generative models. For example, the focus of black-box variational inference is to design flexible posterior distributions through GAN or normalizing flow. However, in Equation (11), the variational posterior q(z|x), q(z|y), q(z|x,y) are constrained to be similar to the prior p(z), which is just a simple distribution. Surprisingly, following the current formulation, ACCA with Gaussian prior (rather than Gaussian mixture or more flexible priors) results in the highest test log-likelihood. Would removing the constraint of \\\"q(z|x,y) \\\\approx p(z)\\\" in Equation (11) improve the log-likelihood in the test set?\"}",
"{\"title\": \"Major Flaw\", \"comment\": \"Hi,\\n\\nHaving read the responses and the paper once more, I don't change my original negative rating. I believe that there is a conceptual flaw which can't be fixed.\\n\\nBasically, I don't see why equation (6) would be accurate. Indeed, the derivations (equations (17)-(19) in Appendix 7) are provided for the *expectation of the likelihood.* Equation (20) doesn't imply equation (6), where some of the expectations disappear for no reason. Note that equation (6) is the main motivation for the overall approach.\\n\\nWhile the authors refer to [1] in their response to my respective question, their derivation in Appendix 7 have nothing to do with the ELBO simply because there is no bound. The procedure they refer to implies a variational approximation to the true posterior in order to lower bound an intractable likelihood. However, the derivations in Appendix 7 have no mention of any variational distribution.\\n\\nBest,\"}",
"{\"title\": \"Response to Reviewer#2\", \"comment\": \"Thanks for your constructive comments and suggestions for improvement of our work.\\n\\nWe have made revisions in the adapted version for a better understanding of our work. \\n1). Section 1 and section 3 are revised for clarity of our work; \\n2). Figure1 is further explained in the caption; \\n\\nFurthermore, our primary objective for MNIST cross-view generation task is to show the multi-view consistency achieved by the proposed ACCA. This VAE generation is consistent with our formulation (section 4.3) and structure design (Figure2) of ACCA. Consequently, the quality of the generated images is not the major consideration in this part. However, we can easily adopt GANs, e. g. VAE- GANs, to improve the quality of the generated images of our model. \\n\\nWe will address your suggestion about the experiments in the future version.\"}",
"{\"title\": \"Response to Reviewer#3\", \"comment\": \"Thanks for your constructive comments and suggestions for improvement of our work. Below, we answer your questions detailedly.\\n\\nA--1). In ACCA_NoCV, we use Gaussian prior- the same prior as Bi-VCCA. Although it does not use the complementary view, it can outperform Bi-VCCA because the adversarial learning scheme can still provide a consistent marginalization for the two inference models, which alleviates the misaligned encoding problem in Bi-VCCA. Specifically, different from the KL divergence which matches the conditional distribution of z with individual points in each view, adversarial learning drives the approximation of the marginalized distributions of z for the two views (as illustrated with Eq. 10 and Eq. 11). As the variable in each view is marginalized out, this approximation is more robust and can achieve consistent encoding for the two views. \\n\\nAs is presented with section 7.2 in the Appendix, Figure 5 obviously shows that $z_{y}$ of Bi-VCCA fails to show good clustering results. The comparison between that of $z_{x}$ indicates that Bi-VCCA suffers from a misalignment encoding for the incorporated two views. While for ACCA_NoCV in Figure 6, the clustering result presents better alignment for the two views compared with that of Bi-VCCA, which indicates that the adopted adversarial learning scheme benefits the consistent encoding for the two views.\\n\\n\\nA--2). In the reported experiments, the prior distributions are all set to be Gaussian Mixture and the parameters of the prior are specified in advance. The intention of this setting is to initially verify that better performance would be obtained in generative CCA models when given the suitable prior distribution (In table 4, higher correlation is captured with non\\u2010Gaussian prior). This verifies that ACCA achieves superiority for handling implicit distributions compared with VCCA. \\n\\nA--3). 
For the same setting on the nHSIC computation, the average negative log-likelihood achieved with each encoding on the test set of Bi-VCCA, ACCA_NoCV, ACCA (G), ACCA (GM) are 112.75, 107.26, 94.41 and 103.10 respectively. Consequently, regarding the log-likelihood, ACCA (G) achieves the best result, while Bi-VCCA achieves the worst. \\n\\nA--4). $q(z|x)$ and $q(z|y)$ are the two principle encodings of CCA. Without them, analysis on the multi-views of CCA, such as correlation analysis and cross-view generation, cannot be conducted.\\n\\nActually, the model with the single encoding of $q(z|x,y)$ can be adopted to learn a common embedding for multi-view data, the variant model is worth to be further studied for multi-view embedding task. \\n\\n\\nA--5). The introduction of adversarial training indeed increases the training time of ACCA (1503s vs 2806s), but the increase is tolerable considering its superiority on the result.\"}",
"{\"title\": \"Response to Reviewer#1 (Part 2)\", \"comment\": \"For additional comments:\\n\\nA--1) Thanks for your suggestion. We have added the reference in the adapted version. \\nThe paper provides an in-depth probabilistic discussion on linear CCA based on Gaussian assumptions, which greatly deepens the understanding CCA. The restrictions of this work (the bullet points in Page 2) inspired us to re-decide the criteria to deepen the understanding of complex nonlinear CCA models and relax the assumptions. \\n\\nA--2). In this paper, nonlinear dependency indicates high-order dependency in the statistical sense. For classic CCA, it adopts linear correlation metric to estimate the linear dependency (second-order statistics) between the variables. However, this metric is insufficient to analyze complex practical data which contains higher-order dependency. This is because linear correlation is only an ideal criteria for CCA with Gaussian assumption on the data distributions, which do not hold in general. Consequently, in this work, to capture nonlinear dependency in CCA, we adopt mutual information as the metric, which is a generalized correlation measurement that can handle nonlinear dependency between two random variables. \\n\\nThe non-linearity of the proposed ACCA stems from our motivation to capture nonlinear dependency. Instantiated within ICCA, ACCA also implicitly achieves the $min I(X;Y|Z)$ criteria. For pointwise mutual information for Gaussian distributed data, the proposed criteria is related to linear correlation with the following equation. $ I{(\\\\mathbf{X};\\\\mathbf{Y}|\\\\mathbf{Z})} = \\\\log \\\\frac{1}{1-r^{2}}$, where $r$ denotes the linear correlation between $q(z|x)$ and $q(z|y)$. \\n\\nHowever, as we deal with multi-view learning with implicit distributions in ACCA, its connection to the linear independence is inconclusive.\\n\\n\\nA--3). 
We use \\u201clinear correlation criterion \\u201d to indicate correlation measurement which can only estimate the linear dependency between the variables. As it is presented in section 3.1 of the revised version, for CCA and PCCA, the criterion is defined as follows. $\\\\rho = \\\\max\\\\limits_{W_{x},W_{y}} \\\\frac{W_{x}^{'}{\\\\Sigma}_{xy}W_{y}} {\\\\sqrt{W_{x}^{'}{\\\\Sigma}_{xx}W_{x}{W_{y}^{'}{\\\\Sigma}_{yy}W_{y}}}}$, where {\\\\Sigma}_{xx} and {\\\\Sigma}_{yy} denote the covariance of X and Y respectively, and {\\\\Sigma}_{xy} denotes the cross-covariance of X and Y. \\n\\nA--4). Thanks for your suggestion. We have made revision on the introduction and section 3.1 for clarity. \\n\\nA--5). Our proposed ACCA do not need any assumptions to ensure existence.\", \"references\": \"[1]. D.P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv:1312.6114, 2013.\\n[2]. Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.\\n[3]. A. Makhzani, J. Shlensand, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv:1511.05644, 2015\"}",
"{\"title\": \"Response to Reviewer#1 (Part 1): Thanks for your valuable comments. But there seems to be some misunderstanding on our work and some important details may also be overlooked. We detailedly answered your concerns with the following response.\", \"comment\": \"Thank you for the constructive comments and suggestions on our work. We answer your concerns on the proposed ACCA in the following aspects:\\n1). Our proposed ICCA studies multi-view learning from the generative perspective. (A1)\\n2). The CMI is a theoretically sound criteria to satisfy the requirement for CCA, based on our intention to capture nonlinear dependency with implicit distributions. (A1, A3)\\n3). The feasibility of ICCA and ACCA can be supported by representative works on Deep Generative Models, e.g. VAEs and AAEs. ( A2, A3, A4, A5)\\n\\nBelow, we answer each of your questions detailedly. \\n\\nA--1). Your concerns on our metric can be answered with two key points of our motivation: (1). We aim to interpret nonlinear CCA models from the generative perspective; (2). We aim to relax the explicit distribution assumptions on the data for these CCA modeling. \\n\\nFrom what we understand here, you interpret the main objective of CCA from the discriminative perspective, which is deviating from our motivations. From the generative perspective, the objective of CCA can be interpreted as learning a compact set of the shared latent variables z that represent a distribution over the observed two-view data x and y (as depicted in Figure1). For this generative model, the latent variable z is to be inferred, instead of \\u201cgiven\\u201d as in your understanding. \\n\\nActually, $I(f(X), g(Y))$ has been adopted as a metric to capture non-linear dependency (CIA in Table 1). However, explicit distributions are required for f(X) and g(Y), which are intractable to be estimated in these complex expressive nonlinear CCA models. 
This restricts the model's ability to capture nonlinear dependency (Page 3, Lines 1-4). \\n\\nConsequently, we present $I(X;Y|Z)$ as the metric, which achieves the following benefits simultaneously: 1) it suits the generative interpretation of the multi-view learning problem; 2) it can capture nonlinear dependency implicitly based on the proposed formulation (to be explained in detail in A2 and A3). \\n\\nNote that, although $I(X;Y|Z)$ is the criterion of the proposed ACCA model, the transformations $f(X)$ and $g(Y)$ are already implicitly implemented in our network (Figure 2). \\n\\nTherefore, the proposed $I(X;Y|Z)$ is a sound criterion for the generative interpretation of nonlinear CCA models.\\n\\n\\nA--2). As we aim to conduct CCA with implicit distributions, following the derivation of the ELBO, we derive a surrogate for the proposed $I(X;Y|Z)$ criterion, to eliminate the explicit distribution requirement (Page 3, Lines 1-4) for its estimation. As described in Section 3.3, we prove that the optimization of Eq. (7) (Eq. (6) in the original version) implicitly leads to $\\\\min I(X;Y|Z)$, through the derivation shown in the appendix. The presented derivation is supported by the ELBO derivation in variational inference [1]. \\n\\nA--3). Although differential entropy can be negative, the conditional mutual information is always non-negative by Jensen's inequality [2]. Consequently, $I(X;Y|Z) = 0$ is the optimal value of the generative CCA problem. Therefore, we do not need to consider the *absolute value* here. \\n\\nA--4). As explained in A2, in our paper, Eq.(6) and Eq.(7) (Eq.(5) and Eq.(6) in the original version) are presented to show the connection between the presented $I(X;Y|Z)$ criterion and the deduced surrogate objective of the proposed ICCA (the proof is presented in the appendix). Practical models are instantiated with different approximate inference methods within the ICCA framework. 
In these instantiations, the model parameters are what we learn (Eq. 4, Eq. 8, and Eq. 9).\\n\\n\\nA--5). As stated in Section 4.1, our proposed problem is challenging in two aspects: 1). we study CCA with implicit distributions; 2). we intend to handle tasks that require high precision of alignment. Existing methods fail in these two cases. \\n\\nAs illustrated in [3], the adversarial training criterion can regularize the aggregated posterior distribution of the latent representation of the autoencoder to arbitrary prior distributions. This kind of approximate inference technique achieves two properties. (1). It allows $q(z|x)$ to act as a deterministic function of x, without explicit assumptions on the posterior distributions. (2). As the technique drives the approximation of the aggregated posterior to the prior, it achieves a compact latent space in which samples generated from any part of the latent space would be meaningful.\\n\\nConsequently, in the proposed ACCA, we adopt the adversarial training criterion on the multi-view encodings and adopt a shared discriminator to drive the approximation of these encodings simultaneously. This design enables ACCA to be superior to VCCA in two aspects. (1). ACCA can handle the CCA problem with implicit distributions. (2). As ACCA drives the approximation of the three aggregated posteriors to the prior distribution (Eq. 11), it overcomes the misaligned encoding problem in Bi-VCCA (Section 4.2).\"}",
"{\"title\": \"Interesting idea but could need more polishing\", \"review\": \"In this paper, the authors attempt to provide a perspective on CCA that is based on implicit distributions. The authors compare and discuss several variants of CCA that have been proposed over the years, ranging from Linear CCA to Deep CCA and autoencoder variants. In order to overcome the prior/likelihood distribution assumptions, the authors propose a CCA view that is based on learning implicit distributions, e.g., by using generative adversarial networks. The authors further motivate their work by comparing with (Bi-)VCCA, claiming that the underlying assumptions lead to inconsistent (or idealistic) constraints. I think the work has merit, and I like the motivation. Nevertheless, I think stronger experiments are required, as well as improvements in the clarity of the writing and stronger support for the motivation. Figure 2 should be better explained in the text. The MNIST experiment is useful, but using GANs usually results in sharper images than, say, a VAE. Also, comparisons with (i) other models besides Bi-VCCA, and (ii) other multi-view real-world data (besides MNIST_LR) would be very useful in terms of communicating the true benefits of this model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"More explanation of the method can improve the paper\", \"review\": \"This paper proposes to improve deep variational canonical correlation analysis (VCCA, Bi-VCCA) by 1) applying adversarial autoencoders (Makhzani et al., ICLR 2016) to model the encoding from multiple data views (X, Y, XY) to the latent representation (Z); and 2) introducing q(z|x,y) to explicitly encode the joint distribution of the two views X, Y. The proposed approach, called adversarial canonical correlation analysis (ACCA), is essentially the application of the adversarial autoencoder to multiple data views. Experiments on benchmark datasets, including the MNIST left-right halved dataset, the noisy MNIST dataset, and the Wisconsin X-ray Microbeam database, show that the proposed ACCA results in higher dependence (measured by normalized HSIC) between the two data views compared to Bi-VCCA.\\n\\nThis paper is well motivated. Since the adversarial autoencoder aims to improve on the VAE, it is natural to make use of the adversarial autoencoder to improve the original VCCA. The advantage of ACCA is well supported by the experimental results.\\n\\nIn ACCA_NoCV, do the authors use a Gaussian prior? If so, could the authors provide more intuition to explain why ACCA_NoCV would outperform Bi-VCCA, which 1) also uses a Gaussian prior and 2) also does not use the complementary view XY? Why would adversarial training improve the result?\\n\\nIn ACCA, does the form of the prior distribution have to be specified in advance, such as a Gaussian or a Gaussian mixture? Are the parameters of the prior learned during training?\\n\\nWhen comparing the performance of different models, besides normalized HSIC, which is a quite recent approach, do the authors compute the log-likelihood on the test set for Bi-VCCA and the different variants of ACCA? Which model achieves the highest test log-likelihood?\\n\\nAccording to equation (6), in principle, only q(z|x,y) is needed to approximate the true posterior distribution p(z|x,y). 
Did the authors try removing the first two terms on the right-hand side of Equation (11), i.e., the expectations w.r.t. q_x(z) and q_y(z), to see how the model performance was affected?\\n\\nDoes adversarial training introduce longer training time compared to Bi-VCCA?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper does not unambiguously describe the proposed model and algorithm. In the present form, the ICCA framework is not an approach to multi-view learning as it does not construct any transformations of the views. I explain this statement in the review below and, overall, cannot recommend this paper for acceptance.\", \"review\": \"I don't quite see how the proposed approach addresses non-linear canonical correlation analysis. In particular:\\n\\n1) The main motivation is the minimization of the conditional mutual information I(X;Y|Z), where X and Y correspond to the two views and Z is latent. First of all, what uncertainty does this expression have when X and Y are observations and Z is given? My understanding is that the main objective of any CCA problem should be to find some transformations, say f(X) and g(Y), with some (to be defined) desirable properties. For example, these would correspond to linear transformations, say Ax and By, for classical CCA. Therefore, should one not be interested in minimizing something like I(f(X);g(Y)|Z)?\\n\\n2) Assuming that the minimization of the conditional mutual information I(X;Y|Z) would be the goal, I don't quite see why the formulation in equation (6) would actually be equivalent (or be some reasonable approximation)? \\n\\n3) It is well known that differential entropy can be negative (e.g., Cover and Thomas, 2006). Why would the conditional mutual information in equation (4) be non-negative? Alternatively, what would negative values of I(X;Y|Z) mean in the CCA context? My understanding is that one should be interested in minimizing I(X;Y|Z), or its variants with transformations, in *absolute value* to ensure some closeness to conditional independence.\\n\\n4) The expressions in equations (5)-(6) are general and hold with no assumptions whatsoever for any random variables X, Y, Z (provided the expectations/integrals exist). It is therefore not clear what the variables of this minimization problem are (parameters? 
but what is the parametric model?)\\n\\n5) Assuming solving (6) is the goal, this formulation, as mentioned by the authors, is actually quite a challenging problem involving latent variables. Some form of explanation of this approach would \\n\\nI cannot quite see how the proposed adversarial version would correct or supplement any of these questions.\", \"other_comments\": \"1) It would be appropriate to cite the probabilistic CCA paper by Bach and Jordan (2005); a better citation for classical CCA would be Hotelling (1936).\\n\\n2) I find the multiple mentions of *non-linear* (in-)dependence confusing. Is this in the statistical sense? And how exactly is this related to CCA? Does it have anything to do with the fact that third and higher-order cumulants are zero only for independent variables unless they are Gaussian? Moreover, does this (in-)dependence have any connection with the non-linearity of the proposed CCA approach?\\n\\n3) What exactly is the *linear correlation criterion* and how does it enter the classical CCA or PCCA formulation (Introduction; bullet point 2)?\\n\\n4) It would be helpful to introduce the original CCA problem emphasizing that each view, X and Y, is a *different* linear transformation of *the same* latent codes z. Moreover, a full description of the models (classical CCA/PCCA) wouldn't take more than one or two paragraphs and would help readers avoid any misunderstanding.\\n\\n5) Are any assumptions necessary to ensure existence?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SJzR2iRcK7 | Multi-class classification without multi-class labels | [
"Yen-Chang Hsu",
"Zhaoyang Lv",
"Joel Schlosser",
"Phillip Odom",
"Zsolt Kira"
] | This work presents a new strategy for multi-class classification that requires no class-specific labels, but instead leverages pairwise similarity between examples, which is a weaker form of annotation. The proposed method, meta classification learning, optimizes a binary classifier for pairwise similarity prediction and through this process learns a multi-class classifier as a submodule. We formulate this approach, present a probabilistic graphical model for it, and derive a surprisingly simple loss function that can be used to learn neural network-based models. We then demonstrate that this same framework generalizes to the supervised, unsupervised cross-task, and semi-supervised settings. Our method is evaluated against state of the art in all three learning paradigms and shows a superior or comparable accuracy, providing evidence that learning multi-class classification without multi-class labels is a viable learning option. | [
"classification",
"unsupervised learning",
"semi-supervised learning",
"problem reduction",
"weak supervision",
"cross-task",
"learning",
"deep learning",
"neural network"
] | https://openreview.net/pdf?id=SJzR2iRcK7 | https://openreview.net/forum?id=SJzR2iRcK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkg73eoVgE",
"HJxzjXAVy4",
"H1esn7VCA7",
"SJe-pjP7C7",
"SyxKXsPQ0X",
"BygCV9vXAm",
"HJg98_4jn7",
"r1g7C-lP3m",
"S1eTrxCosm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545019562775,
1543984026251,
1543549874811,
1542843321376,
1542843169276,
1542842934275,
1541257297552,
1540977098655,
1540247620913
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper760/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper760/Authors"
],
[
"ICLR.cc/2019/Conference/Paper760/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper760/Authors"
],
[
"ICLR.cc/2019/Conference/Paper760/Authors"
],
[
"ICLR.cc/2019/Conference/Paper760/Authors"
],
[
"ICLR.cc/2019/Conference/Paper760/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper760/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper760/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper provides a technique to learn multi-class classifiers without multi-class labels, by modeling the multi-class labels as hidden variables and optimizing the likelihood of the input variables and the binary similarity labels.\\n\\nThe majority of reviewers voted to accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting contribution to multi-class learning\"}",
"{\"title\": \"Good question\", \"comment\": \"Thank you for raising this nice discussion; yes, the cluster assumption is related. In our opinion, the cluster assumption implies separability but additionally assumes that the data distribution has a higher density within a semantic category and a lower density between categories. Since our method is driven by the constraints (either given or learned), this does not necessarily have to be the case as long as there is enough information in the features to separate the categories. It could be interesting future work to include such additional assumptions, for example by adopting a large-margin criterion.\"}",
"{\"title\": \"The authors have clarified basically all my concerns\", \"comment\": \"Only a minor question: isn't \\\"separability of semantic categories\\\" the cluster assumption in semi-supervised learning, also known as the global consistency, one of the three famous consistencies in semi-supervised learning (the other two are local consistency and perturbation consistency)?\"}",
"{\"title\": \"Discussion of potential extensions\", \"comment\": \"We thank the reviewer for raising the discussion of potential extensions and for the kind comment that this work is impressive.\\n\\n[Q1]: \\\"The work is a special case of density estimation problems in Statistics, with a use of conditional independence assumptions to learn the joint distribution of nodes. While the work appears to be impressive, such ideas have typically been used in Statistics and machine learning very widely over the years(Belief Propagation, Topic modeling with anchor words assumptions etc...).\\\"\\n\\nWe appreciate the reviewer's feedback from a high-level view of statistics and machine learning. Our main novelty, the reversed classifier encapsulation scheme, is derived from several concepts in the field. However, the way we combine these concepts is new: it includes a probabilistic graphical model, a non-linear discriminative model (deep neural networks), and parameter learning through a simplified likelihood. The goal of combining these ideas is to relax the requirement of supervision in training a discriminative model. This is a different goal from topic modeling, which is often posed as an unsupervised learning problem. \\n\\n[Q2]: \\\"The hard classification rule in the paper seems to be too restrictive to be of use in practical scenarios, and soft classification would be a useful pragmatic alternative. \\\"\\n\\nRegarding the concern about the hard classification setting, we agree that in certain applications, such as document classification or community detection, assigning a document or a person to only a single class/community could be restrictive. However, there is also a large number of applications where this is not restrictive; e.g., in computer vision, classifying an image or a pixel into a single class is one of the most common scenarios. 
Thus the concern about restrictiveness is application-dependent.\\n\\n[Q3]: \\\"This work could be easily extended to multi-class classifications where each node belongs to multiple classes. It would be interesting to know the authors' thoughts on that.\\\"\\n \\nWhile our work focuses only on the problem of vanilla multi-class classification (hard classification, where each sample can belong to only one class), it is interesting to discuss the possibility of extending it to a multi-label problem, where each sample can belong to multiple classes (labels). This would be interesting future work, but it is not trivial because our problem reduction strategy requires the model to output a categorical distribution (which sums to 1), so that the inner product of two categorical distributions (given two samples) represents the probability of being in the same class. In contrast, allowing a sample to belong to multiple classes means the sum of the model\\u2019s outputs can be as large as C (the number of classes) instead of 1; therefore the multi-label setting is not compatible with our problem reduction strategy as is and would require non-trivial modification.\"}",
"{\"title\": \"Discussion of the assumptions\", \"comment\": \"Thank you for the nice comment. We appreciate your effort in helping improve this work.\\n\\n[Q1]: \\\"First, the authors didn't discuss the underlying assumption of the proposed method except the additional independence assumption. I think there should be more underlying assumptions. \\u2026 does the \\\"cluster assumption\\\" play a role in it?\\\"\\n\\nWhile we agree that we do have some underlying assumptions, they are different from the cluster assumption. For example, we assume separability of semantic categories. This means that when the constraints are given (supervised learning), there is sufficient information (in the features) to separate or group the samples. In the case of no given constraints (unsupervised or semi-supervised learning), there is also sufficient information to estimate the pairwise similarity. However, these are common assumptions that are inherent in discriminative models. We added Appendix D.2 to include the above discussion. \\n\\n[Q2]: \\u201cSecond, there are too many abbreviations without full names, and some of them seem rather important such as KLD and KCL.\\u201d and \\u201cdue to the writing style, it is hard to analyze which part in Section 4 is novel and which part is already known.\\u201d\\n\\nThanks for pointing out the writing issues, such as the use of abbreviations and the claims of novelty. We fixed the abbreviations and added extra lines at the beginning of Section 4 and the related work to clarify the novel parts. We also made an update to add the mentioned ICML paper to the related work. \\n\\n[Q3]: In order to derive Eq. (2), the authors imposed an additional independence assumption: given X_i and X_j, S_{i,j} is independent of all other S_{i',j'}. Hence, Eqs. (2) and (3) approximately hold instead of exactly hold. Some comments should be given on how realistic this assumption is, or equivalently, how close (1) and (3) are. 
\\n\\nFor the supervised learning case (Section 4.1, with results in Section 5.2), where dense ground-truth constraints are available, the global solution of our likelihood is also the solution for the original likelihood. This is because if an instance is misclassified, then it will break some pairwise constraints in both likelihoods and no longer be optimal.\\n\\nOf course, in practice, there could be two issues. First, the optimization methods for more complex models (e.g. stochastic gradient descent) may find local minima. Although it is hard to show theory for the general case, where local optima may be found, our visualization of the loss landscape (see Appendix A) provides some evidence that our method has a landscape that reduces poor local minima compared to prior work (KCL (Hsu et al., 2018)). The second potential issue is when constraints may be noisy. In such cases, for example, if the noise is high and there is a dependency structure to be leveraged, jointly optimizing across many or all constraints with the original likelihood may provide additional performance (at the expense of tractability). In practice, noisy constraints actually occur in our cross-task transfer learning experiments, where our similarity prediction has significant errors (e.g. in the Table 3 ImageNet experiments, the similar-pair precision, similar-pair recall, dissimilar-pair precision, and dissimilar-pair recall are 0.812, 0.655, 0.982, and 0.992, respectively). The strong performance in terms of classification accuracy for the cross-task transfer experiments (Tables 2 and 3) shows that our simplification is robust to noise.\\n\\nOverall, the fact that we have demonstrated our method on five image datasets and three application scenarios (Section 5.2 for supervised learning, 5.3 for unsupervised cross-task transfer learning, and 5.4 for semi-supervised learning) empirically supports that the proposed likelihood can overcome these two issues. 
It would be interesting future work to develop methods that can incorporate constraints jointly, however. We added Appendix D.1 to include the above discussion. \\n\\n[Q4]: \\\"One more minor concern: why P(X) appears in (1) and then disappears in (2) and (3) when Y is marginalized?\\\"\\n \\nThe reason we omit P(X) in equations (2) and (3) is that the Xs are observed leaf nodes, which do not affect the optimization of the likelihood. We have clarified this in the revision.\"}",
"{\"title\": \"Elaboration\", \"comment\": \"Thank you for your insightful comments. We appreciate your acknowledgment of the novelty, and we are glad to elaborate on the two concerns.\\n\\n[Q1]: \\\"Such an assumption seems too simple to be useful in problems with complicated dependence structure \\u2026 a careful discussion (if possible, theoretical) of when such an assumption is viable and when it is an oversimplification is necessary\\\"\\n\\nFor the supervised learning case (Section 4.1 with results in Section 5.2), where dense ground truth constraints are available, the global solution of our likelihood is also the solution for the original likelihood. This is because if an instance is misclassified, then it will break some pair-wise constraints in both likelihoods and no longer be optimal.\\n\\nOf course, in practice, there could be two issues. First, the optimization methods for more complex models (e.g. stochastic gradient descent) may find local minima. Although it is hard to show theory for this in the general case, where local optima may be found, in such cases our visualization of the loss landscape (see Appendix A) provides some evidence that our method has a landscape that reduces poor local minima compared to prior work (KCL (Hsu et al., 2018)). The second potential issue is when constraints may be noisy. In such cases, for example, if the noise is high and there is a dependency structure to be leveraged, jointly optimizing across many or all constraints with the original likelihood may provide additional performance (at the expense of tractability). In practice, noisy constraints actually occur in our cross-task transfer learning experiments where our similarity prediction has significant errors (e.g. in Table 3 ImageNet experiments the similar pair precision, similar pair recall, dissimilar pair precision, and dissimilar pair recall are 0.812, 0.655, 0.982, and 0.992 respectively). 
The strong performance in terms of classification accuracy for the cross-task transfer experiments (Tables 2 and 3) shows that our simplification is robust to noise.\\n\\nOverall, the fact that we have demonstrated our method on five image datasets and three application scenarios (Section 5.2 for supervised learning, 5.3 for unsupervised cross-task transfer learning, and 5.4 for semi-supervised learning) empirically supports that the proposed likelihood can overcome these two issues. It would be interesting future work to develop methods that can incorporate constraints jointly, however.\\n\\nWe added Appendix D.1 to include the above discussion. \\n\\n[Q2]: \\\"Secondly, by using co-occurrence patterns, one throws away identifiability---the (latent) labels are only learnable up to a permutation unless external information is available. This point is not made clear in the paper ...\\\"\\n\\nTo address the concern about the identifiability of clusters, we slightly augment the second paragraph of Section 5.1 with additional references. In summary, for the supervised classification experiments, we use the Hungarian assignment algorithm to assign clusters to labels given the ground-truth information (this is commonly used to evaluate clustering algorithms, e.g. see (Yang et al., 2010)). When labels are not available (e.g. in cross-task transfer learning), we only do this type of assignment for quantitative evaluation purposes. \\n\\nWe again thank the reviewer for the effort to improve this work.\"}",
"{\"title\": \"The paper introduces some novel ideas but lacks elaborate justification.\", \"review\": \"In this paper, the authors revisit the problem of multi-class classification and propose to use pairwise similarities (more accurately, what they use is the co-occurrence pattern of labels) instead of node labels. Thus, having less stringent requirements for supervision, their framework has broader applicability: in supervised and semi-supervised classification and in unsupervised cross-task transfer learning, among others.\", \"pros\": \"The idea of using pairwise similarities to enable a binary classifier to encapsulate a multi-class classifier is neat.\", \"cons\": \"My main gripe is with the conditional independence assumption on pairwise similarities, which the authors use to simplify the likelihood down to a cross-entropy. Such an assumption seems too simple to be useful in problems with a complicated dependence structure. Yes, the authors conduct some experiments to show that their algorithms achieve good performance on some benchmark datasets, but a careful discussion (if possible, theoretical) of when such an assumption is viable and when it is an oversimplification is necessary (analogous assumptions are used in naive Bayes or variational Bayes for simplifying the likelihood, but those are much more flexible, and we know when they are useful and when not).\\n\\nSecondly, by using co-occurrence patterns, one throws away identifiability---the (latent) labels are only learnable up to a permutation unless external information is available. This point is not made clear in the paper, and the authors should describe how they overcome this in their supervised classification experiments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very good paper\", \"review\": \"This paper proposed how to learn multi-class classifiers without multi-class labels. The main idea is shown in Figure 2: to regard the multi-class labels as hidden variables and optimize the likelihood of the input variables and the binary similarity labels. The difference from existing approaches is also illustrated in Figure 1, namely existing methods have binary classifiers inside multi-class classifiers while the proposed method has multi-class classifiers inside binary classifiers. The application of this technique to three general problem settings is discussed, see Figure 3.\", \"clarity\": \"Overall, it is very well written. I just have two concerns.\\n\\nFirst, the authors didn't discuss the underlying assumption of the proposed method except the additional independence assumption. I think there should be more underlying assumptions. For example, by the definition P(S_{i,j}=0 or 1|Y_i,Y_j) and the optimization of L(theta;X,S), does the \\\"cluster assumption\\\" play a role in it? The cluster assumption is popular in unsupervised/semi-supervised learning and metric learning, where the X part of the training data is in the form of pairs or triples. However, there is no such assumption in the original supervised multi-class learning. Without figuring out the underlying assumptions, it is difficult to get why the proposed method works and when it may fail.\\n\\nSecond, there are too many abbreviations without full names, and some of them seem rather important, such as KLD and KCL. I think the full names should be given the first time they appear. This good habit can broaden your audience in the long run.\", \"novelty\": \"As far as I know, the proposed approach is novel. It is clear that Section 3 is original. However, due to the writing style, it is hard to analyze which part in Section 4 is novel and which part is already known. This should be carefully revised in the final version. 
Moreover, there was a paper in ICML 2018 entitled \\\"classification from pairwise similarity and unlabeled data\\\", in which binary classifiers can be trained strictly following ERM without introducing the cluster assumption. The same technique can be used for learning from pairwise dissimilarity and unlabeled data as well as from pairwise similarity and dissimilarity data. I think this paper should be included in Section 2, the related work.\", \"significance\": \"I didn't carefully check all experimental details but the experimental results look quite nice and promising. Given the fact that the technique used in this paper can be applied to many different tasks in machine learning ranging from supervised learning to unsupervised learning, I think this paper should be considered significant.\\n\\nNevertheless, I have a major concern as follows. In order to derive Eq. (2), the authors imposed an additional independence assumption: given X_i and X_j, S_{i,j} is independent of all other S_{i',j'}. Hence, Eqs. (2) and (3) approximately hold instead of exactly hold. Some comments should be given on how realistic this assumption is, or equivalently, how close (1) and (3) are. One more minor concern: why P(X) appears in (1) and then disappears in (2) and (3) when Y is marginalized?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lacks citations to similar works in Statistics, topic modelling ...\", \"review\": \"The work is a special case of density estimation problems in Statistics, with a use of conditional independence assumptions to learn the joint distribution of nodes. While the work appears to be impressive, such ideas have typically been used very widely in Statistics and machine learning over the years (Belief Propagation, topic modeling with anchor-word assumptions, etc.). This work could be easily extended to multi-class classification where each node belongs to multiple classes. It would be interesting to know the authors' thoughts on that. The hard classification rule in the paper seems to be too restrictive to be of use in practical scenarios, and soft classification would be a useful pragmatic alternative.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1fA3oActQ | GraphSeq2Seq: Graph-Sequence-to-Sequence for Neural Machine Translation | [
"Guoshuai Zhao",
"Jun Li",
"Lu Wang",
"Xueming Qian",
"Yun Fu"
] | Sequence-to-Sequence (Seq2Seq) neural models have become popular for text generation problems, e.g. neural machine translation (NMT) (Bahdanau et al.,2014; Britz et al., 2017), text summarization (Nallapati et al., 2017; Wang &Ling, 2016), and image captioning (Venugopalan et al., 2015; Liu et al., 2017). Though sequential modeling has been shown to be effective, the dependency graph among words contains additional semantic information and thus can be utilized for sentence modeling. In this paper, we propose a Graph-Sequence-to-Sequence(GraphSeq2Seq) model to fuse the dependency graph among words into the traditional Seq2Seq framework. For each sample, the sub-graph of each word is encoded to a graph representation, which is then utilized to sequential encoding. At last, a sequence decoder is leveraged for output generation. Since above model fuses different features by contacting them together to encode, we also propose a variant of our model that regards the graph representations as additional annotations in attention mechanism (Bahdanau et al., 2014) by separately encoding different features. Experiments on several translation benchmarks show that our models can outperform existing state-of-the-art methods, demonstrating the effectiveness of the combination of Graph2Seq and Seq2Seq. | [
"Neural Machine Translation",
"Natural Language Generation",
"Graph Embedding",
"LSTM"
] | https://openreview.net/pdf?id=B1fA3oActQ | https://openreview.net/forum?id=B1fA3oActQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1lXfSzegV",
"r1eUJP_5CQ",
"ByeI1Lu5A7",
"SJe-N_3FCQ",
"SylVFvhFCm",
"B1lcy_HX6X",
"H1lvXNF93X",
"SyeOW2Sch7",
"r1egbGCth7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544721674744,
1543304926191,
1543304670391,
1543256104775,
1543255932346,
1541785570435,
1541211166544,
1541196800173,
1541165560085
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper759/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper759/Authors"
],
[
"ICLR.cc/2019/Conference/Paper759/Authors"
],
[
"ICLR.cc/2019/Conference/Paper759/Authors"
],
[
"ICLR.cc/2019/Conference/Paper759/Authors"
],
[
"ICLR.cc/2019/Conference/Paper759/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper759/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper759/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper759/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a new method for graph representation in sequence-to-sequence models and validates its results on several tasks. The overall results are relatively strong.\\n\\nOverall, the reviewers thought this was a reasonable contribution if somewhat incremental. In addition, while the experimental comparison has greatly improved from the original version, there are still a couple of less satisfying points: notably the size of the training data is somewhat small. In addition, as far as I can tell all comparisons with other graph-based baselines actually aren't implemented in the same toolkit with the same hyperparameters, so it's a bit difficult to tell whether the gains are coming from the proposed method itself or from other auxiliary differences.\\n\\nI think this paper is very reasonable, and definitely on the borderline for acceptance, but given the limited number of slots available at ICLR this year I am leaning in favor of the other very good papers in my area.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Interesting method, if somewhat incremental. Experiments are reasonable but variables potentially not controlled.\"}",
"{\"title\": \"Response to Area Chair1\", \"comment\": \"Thank you for your valuable comments and suggestions.\\n\\nQ1. The weak baseline. \\nFor the Seq2Seq baseline, we report the BLEU scores of 23.87 and 22.31, which are from the HarvardNLP work [ref-1]. Because they focus on the beam-search optimization, the non-fine-tuning results in the low BLEU scores. After fine-tuning Seq2Seq, we get the BLEU scores of 28.79 and 26.90 on beam and greedy search respectively. We also compare with the NAACL18 work [ref-2], which is still lower than our GraphSeq2Seq by 0.48. Compared to Seq2Seq, we mainly show that Graph2Seq can be used to improve the performance of Seq2Seq. This also reveals the effectiveness of our GraphSeq2Seq since it is a reasonable combination of Graph2Seq and Seq2Seq. All results are updated in the revision. \\n\\nQ2. Fairness for Graph2Seq. \\nTo fairly compare with Bastings et al. [ref-3] which also fuses graph information for the NMT task, we set the same settings to do experiments. Our GraphSeq2Seq gets 40.2 (BLEU1) and 11.11 (BLEU4) on WMT16 English-Czech dataset, whereas the results of [ref-3] are 38.8 (BLEU1) and 9.6 (BLEU4). That means GraphSeq2Seq increases the BLEU scores by 1.4 (BLEU1) and 1.51 (BLEU4) than Bastings et al. [ref-3]. The experiment details are shown in Section 4.4.\\nMoreover, NIPS\\u201918 [ref-4] also considers Graph Representations in NMT task. By revising our vocab size to 50,000 trying to fairly compare with [ref-4], we get the BLEU scores of 29.47 and 27.90 on beam and greedy search on IWSLT2014 German-English dataset, which are slightly higher by 0.13 and 0.27 than the result reported by [ref-4].\\n\\nQ3. Quantitative analysis of how GraphSeq2Seq outperforms Graph2Seq and Seq2Seq. \\nWe do two experiments to show the quantitative analysis of our GraphSeq2Seq. \\n1) The first experiment is used to show the impact of the sequential encoder in the Graph2Seq [ref-5] based on three datasets. 
To a fair qualitative comparison, we directly use the code of the Graph2Seq [ref-5]. By adding the bidirectional sequence encoder into this code, our GraphSeq2Seq gets the improvements of 8.35 and 9.17 on beam and greedy search on IWSLT2014 German-English dataset, 4.93 and 5.7 on IWSLT2014 English-German dataset, 4.89 and 5.78 on IWSLT2015 English-to-Vietnamese dataset, 4.2 (BLEU1) and 2.89(BLEU4) on WMT2016 English-to-Czech dataset. This significant improvement definitely verifies the effectiveness of the sequential encoder. Furthermore, based on Seq2Seq, adding the sub-graph encoder gets the improvements of 1.87 and 2.16 on beam and greedy search on IWSLT2014 German-English dataset, 4.73 and 4.76 on IWSLT2014 English-German dataset, 3.52 and 2.98 on IWSLT2015 English-to-Vietnamese dataset, 2.0 (BLEU1) and 1.18(BLEU4) on WMT2016 English-to-Czech dataset. This significant improvement definitely verifies the effectiveness of the sub-graph encoder. The results of this experiment are reported in Tables 1, 2, 3, and 4.\\n2) The second experiment is used to verify the quantitative analysis of our GraphSeq2Seq with random graph and sequence noises based on IWSLT2014 German-English dataset. As shown in Table 2, the random noises change from 0% to 75%, where 75% indicates that 75% of the graph and sequence information are noises. 100% is not performed because it is meaningless in real life. Table 2 shows that the BLEU scores go to bad from 29.06 (Greedy) and 30.66 (Beam) to 17.38 and 20.28 when the sequence noise varies from 0% to 75%. For the graph noise, we have a similar observation that the BLEU scores go to bad from 29.06 (Greedy) and 30.66 (Beam) to 24.19 and 26.08 when the graph noise varies from 0% to 75%. It demonstrates that both graph and sequence information are effective in our GraphSeq2Seq, and the performance relies on their qualities. The detailed experiment is shown in Table 5 in the revision.\\n\\n[ref-1] Sam Wiseman and Alexander M. Rush. 
Sequence-to-sequence learning as beam-search optimization. In Proc. EMNLP, pp. 1296\\u20131306, 2016.\\n[ref-2] Wenhu Chen, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li, Ming Zhou: Generative Bridging Network for Neural Sequence Prediction. NAACL-HLT 2018: 1706-1715.\\n[ref-3] Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima\\u2019an. Graph convolutional encoders for syntax-aware neural machine translation. In Proc. EMNLP, pp. 1957\\u20131967, 2017.\\n[ref-4] Minjia Zhang, Xiaodong Liu, Wenhan Wang, Jianfeng Gao, Yuxiong He. Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models, NIPS, 2018.\\n[ref-5] Daniel Gildea, Zhiguo Wang, Yue Zhang, and Linfeng Song. A graph-to-sequence model for amr- to-text generation. In Proc. ACL, pp. 1616\\u20131626, 2018.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your valuable comments and suggestions.\\n\\nQ1. The clarity of section 3.3. \\nIn section 3.3, we mainly present a variant of the encoder part of our GraphSeq2Seq. As shown in Fig.2, after the sub-graph encoder, we get the hidden feature for each node. Then we rebuild the sub-graph for the current node. For the rebuilt sub-graph, its outgoing hidden feature is the input of a Graph_Out Bi-LSTM, while its incoming hidden feature is for a Graph_in Bi-LSTM. For the current node, its node representation is utilized for a Node Sequence Bi-LSTM. \\nThe main difference is that the original model is to encode the concated representation of the sub-graph by using only one Bi-LSTM, while the variant model leverages three Bi-LSTMs to respectively encode the specific representations of the sub-graph, including the incoming feature, the outgoing feature, and the node representation.\\nWe clarify it in the revision and revise Fig. 2 to illustrate it.\\n\\nQ2. State-of-the-arts with BPE. \\nThanks for your suggestion. We compare our model with NMPT and NMPT+LM, which are proposed by ICLR18 paper [ref-4]. Both our model and the compared method utilize the same settings and do not use BPE, so they are comparable. We think it can be acceptable. We also would like to compare with Edunov et al. [ref-1] and Deng et al. [ref-2]. However, they utilized BPE on both the source side and target side, whereas our GraphSeq2Seq cannot apply BPE on the source side. Thus, we do not choose this kind of methods for comparison. We will cite and discuss these works [ref-1,ref-2] in the revision. \\nFortunately, we add a comparison with Bastings et al. [ref-3], which utilizes BPE on only the target side. We get 40.2 (BLEU1) and 11.11 (BLEU4) on WMT16 English-Czech dataset, which improves the BLEU scores by 1.4 (BLEU1) and 1.51 (BLEU4) than Bastings et al. [ref-3]. 
This improvement verifies the effective performance of the proposed method on the top-line NMT baselines. The experiment details are shown in Section 4.4.\\n\\nQ3. How much it depends on the quality of the dependency parse. \\nWith regard to the impact of the dependency parse on performance, we add an experiment to discuss it. We randomly add some noise to the parsing result, and then train our model. We find the BLEU scores go to bad when the parsing result contains more noises. That means the BLEU scores degrade when considering languages with less good dependency parsers. More experiments are shown in the Response of Q3 for AC comments and Table 5 in the revision.\\n \\n[ref-1] Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato. Classical Structured Prediction Losses for Sequence to Sequence Learning. NAACL-HLT 2018: 355-364.\\n[ref-2] Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, Alexander M. Rush: Latent Alignment and Variational Attention. NIPS 2018.\\n[ref-3] Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima\\u2019an. Graph convolutional encoders for syntax-aware neural machine translation. In Proc. EMNLP, pp. 1957\\u20131967, 2017.\\n[ref-4] Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. Towards neural phrase-based machine translation. In International Conference on Learning Representations, 2018.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your friendly and valuable comments.\\n\\nQ1. The number of highway layers. \\nFor Table 4, it shows the performance on the test set and it is training with different numbers of highway layers, i.e., it uses the hold out data set. Thus, we conclude that using 3 highway layers achieves the best performance on IWSLT-2014 German-English dataset.\\n\\nQ2. Poor grammar and missing parentheses. \\nWe carefully proofread our paper and try to improve the grammar for a good readability. We also fix the missing parentheses in Eqns. (6)-(9).\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your kind summarization and valuable comments.\\n\\nQ1. Using BPE in our experiments. \\nThanks for your suggestion again. We admit that the setting is slightly out of the current main stream of NMT research. To enrich our experiments, we accept your suggestion by adopting BPE to compare with the paper of Bastings et al. [ref-1] which also leverages BPE. We get 40.2 (BLEU1) and 11.11 (BLEU4) on WMT16 English-Czech dataset, which improves the BLEU scores by 1.4 (BLEU1) and 1.51 (BLEU4) than Bastings et al. [ref-1]. This improvement verifies the effective performance of the proposed method on the top-line NMT baselines. The experiment details are shown in Section 4.4.\\n\\nQ2. Large training datasets. \\nWe would like to evaluate our model on the large WMT dataset (the WMT2014 English-to-German) but it is still running. After getting the results, we will post the results here and release the full experiments in the final version. For the dataset, the compared methods including ICLR18 [ref-2], NIPS18 [ref-3], EMNLP16 [ref-4], ICML17 [ref-5] only utilize the IWSLT-2014 and 2015 datasets for NMT task. To academic research, we think our experiments on current four datasets from IWSLT-2014 German-to-English, IWSLT-2014 English-to-German, IWSLT-2015 English-to-Vietnamese, and WMT-2016 English-Czech can be acceptable. \\n\\n[ref-1] Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima\\u2019an. Graph convolutional encoders for syntax-aware neural machine translation. In Proc. EMNLP, pp. 1957\\u20131967, 2017.\\n[ref-2] Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. Towards neural phrase-based machine translation. In International Conference on Learning Representations, 2018.\\n[ref-3] Minjia Zhang, Xiaodong Liu, Wenhan Wang, Jianfeng Gao, Yuxiong He. 
Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models, NIPS, 2018.\\n[ref-4] Sam Wiseman and Alexander M. Rush. Sequence-to-sequence learning as beam-search optimization. In Proc. EMNLP, pp. 1296\\u20131306, 2016.\\n[ref-5] Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, and Douglas Eck. Online and linear- time attention by enforcing monotonic alignments. In Proc. ICML, pp. 2837\\u20132846, 2017.\"}",
"{\"title\": \"Clarification about comparison to baselines?\", \"comment\": \"This paper proposes a new method for incorporating graph structures in sequence-to-sequence models. The idea itself seems reasonable, but the comparison to the baseline is concerning.\\n\\nFirst, just looking at the normal comparison on IWSLT2014, the seq2seq and graph2seq baselines only achieve BLEU scores of 23.87 and 22.31 respectively, which is not at all competitive with the state-of-the-art. Just well-tuned sequence-to-sequence models can achieve a score of 29.10 on this dataset (see \\\"Generative Bridging Network for Neural Sequence Prediction\\\", NAACL 2018). I fear that the baselines here are too weak, and the comparison with them too indirect to really tell us anything about the merit of the proposed model.\\n\\nSecond, the graph2seq model used as a baseline was not designed as a method for MT, but rather for logical-form-to-text generation. A method such as that of Bastings et al., which was specifically designed for MT, seems to be a more fair comparison.\\n\\nThird, there is no quantitative analysis or qualitative comparison of why or how the proposed method outperforms the graph2seq baselines. It would be nice to know more about why the proposed methods are helping compared to other reasonable methods.\\n\\nI'd appreciate if the authors could clarify about these concerns in their response.\"}",
"{\"title\": \"Interesting idea\", \"review\": \"[Summary]\\nThis paper proposes a Graph-Sequence-to-Sequence (GraphSeq2Seq) model to fuse the dependency graph among words into the traditional Seq2Seq framework.\\n\\n\\n\\n[clarity]\\nThis paper is basically well written though there are several grammatical errors (I guess the authors can fix them).\\nMotivation and goal are clear.\\n\\n\\n[originality]\\nSeveral previous methods have already tackled to integrate graph structures into seq2seq models.\\nTherefore, from this perspective, this study is incremental rather than innovative.\\nHowever, the core idea of the proposed method, that is, combining the word representation, sub-graph state, incoming and outgoing representations seems to be novel.\\n\\n\\n\\n[significance]\\nThe experimental setting used in this paper is slightly out of the current main stream of NMT research.\\nFor example, the current top-line NMT systems uses subword unit for input and output sentences, but this paper doesn\\u2019t.\\nMoreover, the experiments were performed only on the very small datasets, IWSLT-2014 and 2015, which have at most 153K training parallel sentences.\\nTherefore, it is unclear whether the proposed method has essential effectiveness to improve the performance on the top-line NMT baselines.\\n\\nComparing on the small datasets, the proposed method seems to significantly improve the performance over current best results of NPMT+LM.\\n\\n\\n\\nOverall, I like the idea of utilizing sub-graphs for simplicity and saving the computational cost to encode a structural (grammatical or semantic) information.\\nHowever, I really wonder if this type of technique really works well on the large training datasets...\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Promising results, but use of dependency parser is somewhat concerning\", \"review\": \"This paper proposes a seq2seq model which incorporates dependency parse information from the source side by embedding each word's subgraph (according to a predetermined dependency parse) using the Graph2Seq model of Song et al. (2018); the authors propose two variants for achieving an encoded source-side representation from the subgraph embeddings, involving bidirectional lstms. The authors show that the proposed approach leads to good results on three translation datasets.\\n\\nThe paper is generally written fairly clearly, though I think the clarity of section 3.3 could be improved; it took me several reads to understand the architectural difference between this second variant and the original one. The results presented are also impressive: I don't think the IWSLT de-en results are in fact state of the art (e.g., Edunov et al. (NAACL 2018) and Deng et al. (NIPS 2018) outperform these numbers, though both papers use BPE, whereas I assume the current paper does not), but the results on the other two datasets appear to be.\\n\\nRegarding the approach in general, it would be nice to see how much it depends on the quality of the dependency parse. In particular, while we might expect the en-de and en-vi results to be good because dependency parsers for English are relatively good, how much does performance degrade when considering languages with less good dependency parsers?\", \"pros\": [\"Good results, fairly simple model\"], \"cons\": [\"Somewhat incremental, not clear how much method depends on quality of the dependency parser\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of \\\"GraphSeq2Seq: Graph-Sequence-to-Sequence for Neural Machine Translation\\\"\", \"review\": \"This paper proposes a method for combining the Graph2Seq and Seq2Seq models into a unified model that captures the benefits of both. The paper thoroughly describes in series of experiments that demonstrate that the authors' proposed method outperforms several of the other NMT methods on a few translation tasks.\\n\\nI like the synthesis of methods that the authors' present. It is a logical and practical implementation that seems to provide solid benefits over the existing state of the art. I think that many NMT researchers will find this work interesting.\\n\\nTable 4 begs the question, \\\"How does one choose the number of highway layers?\\\" I presume that the results in that table are from the test data set. Using the hold out data set, which number gives the best value?\\n\\nThe paper's readability suffers from poor grammar in some places. This fact may discourage some readers.\\n\\nThe authors should fix the missing parentheses in Eqns. (6)-(9).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HklAhi09Y7 | Question Generation using a Scratchpad Encoder | [
"Ryan Y Benmalek",
"Madian Khabsa",
"Suma Desu",
"Claire Cardie",
"Michele Banko"
] | In this paper we introduce the Scratchpad Encoder, a novel addition to the sequence to sequence (seq2seq) framework and explore its effectiveness in generating natural language questions from a given logical form. The Scratchpad encoder enables the decoder at each time step to modify all the encoder outputs, thus using the encoder as a "scratchpad" memory to keep track of what has been generated so far and to guide future generation. Experiments on a knowledge based question generation dataset show that our approach generates more fluent and expressive questions according to quantitative metrics and human judgments. | [
"Question Generation",
"Natural Language Generation",
"Scratchpad Encoder",
"Sequence to Sequence"
] | https://openreview.net/pdf?id=HklAhi09Y7 | https://openreview.net/forum?id=HklAhi09Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1lJFxVgeN",
"H1x8HiTh1E",
"BJlxyt63kN",
"BJlwPu6hyE",
"H1euiaMah7",
"rkeUDe_i37",
"HJlWUOjV37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544728694946,
1544506173712,
1544505560098,
1544505438907,
1541381535982,
1541271645688,
1540827208835
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper758/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper758/Authors"
],
[
"ICLR.cc/2019/Conference/Paper758/Authors"
],
[
"ICLR.cc/2019/Conference/Paper758/Authors"
],
[
"ICLR.cc/2019/Conference/Paper758/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper758/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper758/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduces a \\\"scratchpad\\\" extension to seq2seq models whereby the encoder outputs, typically \\\"read-only\\\" during decoding, are editable by the decoder. In practice, this bears quite a lot of similarity\\u2014if not in the general concept, then in the the implementation\\u2014to a variety of models proposed in the NLP community (see reviews for details). As the technical novelty of the paper is quite limited, and there are issues with the clarity both in the technical contribution and in presenting what exactly is the main contribution of the paper, I must concur with the reviewers and recommend rejection.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not enough novelty relative to neural coverage models\"}",
"{\"title\": \"We provide a conceptually simple method delivering state of the art performance by significantly outperforming standard methods for natural language generation.\", \"comment\": \"We are aware of the related work you mention. Please note that unfortunately the \\u201cSemantically Conditioned LSTM\\u2026\\u201d is not directly comparable because, as they state in their paper, \\u201cthe generator is further conditioned on a control vector d, a 1-hot representation of the dialogue act (DA) type and its slot-value pairs\\u201d. Our goal is to work with arbitrarily complex questions that map to correspondingly arbitrarily complex logical forms and not a very restricted set of logical forms that could be represented in a one-hot fashion.\\nPlease do note that we ran 2 sets of human evaluations (Adequacy and Fluency), as is standard in Machine translation in order to deal with the evaluation bias problem you describe - we took this into account when conducting experiments and will make it more clear in a revised version. We also observe significant improvements in both human evaluations, suggesting that the improvement comes from our method and not from evaluation bias.\\nOur dataset only contains a single logical form for each question and vice-versa, making it impossible to evaluate quantitative metrics (bleu, rouge, meteor) in the multi-reference setting you describe. Please also note that metrics like bleu and rouge have been commonly used in a non multi-reference setting by significant work in the natural language processing community. \\nWe thank the reviewer for their comments and will take them into account in a revised version.\"}",
"{\"title\": \"Our contribution is a conceptually simple method achieving state of the art performance\", \"comment\": \"Your interpretation of section 3 is exactly right. Thank you for suggesting additional experiments to better understand the behavior of the scratchpad component. We would like to note that beyond the gains across all evaluated quantitative metrics (bleu, rouge, meteor), our method shows substantial gains on human evaluations. In future work we propose to use our method to generate a large dataset and evaluate its performance.\\nWe don\\u2019t claim to be the first to generate questions from logical form, but the experiments within show that our approach is superior to standard approaches in the literature.\"}",
"{\"title\": \"Our method is simpler and delivers state of the art performance gains while being conceptually interesting\", \"comment\": \"We thank the reviewer for their comments and for noting correctly that our modification is quite effective, particularly regarding the large improvements on human evaluations. Our method is simpler in both conception and implementation than coverage, while requiring less parameters and being twice as likely to be chosen as better by human judges. We agree with the reviewer on the simplicity of our method, which we believe to be an asset. In addition to that, we believe the Scratchpad Encoder is fundamentally interesting as a mirror to the \\u2018attentive read\\u2019 common in seq2seq models. We also appreciate the reviewer taking their time to draw our attention to how to better emphasize the novelty and simplicity of our work.\"}",
"{\"title\": \"Interesting idea but not novel enough\", \"review\": \"Overall:\\nThis paper introduces the Scratchpad Encoder, a novel addition to the sequence to sequence (seq2seq) framework and explore its effectiveness in generating natural language questions from a given logical form. The proposed model enables the decoder at each time step to modify all the encoder outputs, thus using the encoder as a \\u201cscratchpad\\u201d memory to keep track of what has been generated so far and to guide future generation.\", \"quality_and_clarity\": \"-- The paper is well-written and easy to read. \\n-- Consider using a standard fonts for the equations.\", \"originality\": \"\", \"the_idea_of_question_generation\": \"using logical form to generate meaningful questions for argumenting data of QA tasks is really interesting and useful.\\nCompared to several baselines with a fixed encoder, the proposed model allows the decoder to attentively write \\u201cdecoding information\\u201d to the \\u201cencoder\\u201d output. The overall idea and motivation looks very similar to the coverage-enhanced models where the decoder also actively \\u201cwrites\\u201d a message (\\u201ccoverage\\u201d) to the encoder's hidden states.\\nIn the original coverage paper (Tu et.al, 2016), they also proposed a \\u201cneural network based coverage model\\u201d where they used a general neural network output to encode attention history, although this paper works differently where it directly updates the encoder hidden states with an update vector from the decoder. However, the modification is slightly marginal but seems quite effective. 
It is better to explain the major difference and the motivation of updating the hidden states.\\n\\n-------------------\", \"comments\": \"-- In Equation (13), is there an activation function between W1 and W2?\\n-- Based on Table 1, why did not evaluate the proposed model with beam-search?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper tackles the question generation problem from a logical form and proposes an addition called Scratchpad Encoder to the standard seq2seq framework. The new model has been tested on the WebQuestionsSP and the WikiSQL datasets, with both automatic and human evaluation, compared to the baselines with copy and coverage mechanisms.\", \"major_points\": \"Overall, I think this paper is not good enough for an ICLR paper and the presentation is confusing in both its contributions and its technical novelty. I don\\u2019t recommend to accept this paper, at least in the current format.\\n\\nThe paper states two major contributions (the last paragraph of Introduction), one is the new model Scratchpad Encoder, and the other is \\u201cpossible to generate a large high quality (SPARQL query, local form) dataset\\u201d. For the second contribution, there isn\\u2019t any evaluation/justification about the quality of the generated questions and how useful this dataset would be in any KB-QA applications. I believe that this paper is not the first one to study question generation from logical form (cf. Guo et al, 2018 as cited), so it is unclear what is the contribution of this paper in that respect.\\n\\nFor the modeling contribution, although it shows some improvements on the benchmarks and some nice analysis, the paper really doesn\\u2019t explain well the intuition of this \\u201cwrite\\u201d operation/Scratchpad (also the improvement of Scratchpad vs coverage is relatively limited). Is this something tailored to question generation? Why does it expect to improve on the question generation or it can improve any tasks which build on top of seq2seq+att framework (e.g., machine translation, summarization -- if some results can be shown on the most competitive benchmarks, that would be much more convincing)?\\n\\nIn general I find Section 3 pretty difficult to follow. What does \\u201ckeeping notes\\u201d mean? 
It seems that the goal of this model is to keep updating the encoder hidden vectors (h_0, .., h_T) instead of fixing them at the decoder stage. I think it is necessary to make it clearer how s_{post_read} and attn_copy are computed with the updated {h^i_t} and what u^i is expected to encode. \\\\alpha^i_t and u^i are also pretty complex and it would be good to conduct some ablation analysis.\", \"minor_points\": [\"tau Yih et al, 2016 --> Yih et al, 2016\", \"It is unclear why the results on WikiSQL is presented in Appendix. Combining the results on both datasets in the experiments section would be more convincing.\", \"Table 1: Not sure why there is only one model that employs beam search (with beam size = 2) among all the comparisons. It looks strange.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Generating questions is an interesting task, but it is a kind of natural language generation task and the paper does not consider and they have proposed very similar ideas already\", \"review\": \"The paper studies the problem of question generation from sparql queries. The motivation is to generate more training data for knowledge base question answering systems to be trained on. However, this task is an instance of natural language generation: given a meaning representation (quite often a database record), generate the natural language text correspoding to it. And previous work on this topic has proposed very similar ideas to the scratchpad proposed here in order to keep track of what the neural decoder has already generated, here are two of them:\\n- Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems\\nTsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, Steve Young, EMNLP 2015: https://arxiv.org/abs/1508.01745\\n- Globally Coherent Text Generation with Neural Checklist Models\", \"chloe_kiddon_luke_zettlemoyer_yejin_choi\": \"https://aclweb.org/anthology/D16-1032\\nThus the main novelty claim of the paper needs to be hedged appropriately. Also, to demonstrate the superiority of the proposed method an appropriate comparison against previous work is needed.\", \"some_other_points\": [\"How is the linearization of the inout done? It typically matters\", \"Given the small size of the dataset, I would propose experimenting with non-neural approaches as well, which are also quite common in NLG.\", \"On the human evaluation: showing the gold standard reference to the judges introduces bias to the evaluation which is inappropriate as in language generation tasks there are multiple correct answers. 
See this paper for discussion in the context of machine translation: http://www.aclweb.org/anthology/P16-2013\", \"For the automatic evaluation measures there should be multiple references per SPARQL query since this is how BLEU et al are supposed to be used. Also, this would allow comparing the references against each other (filling in the missing number in Table 4) and this would allow an evaluation of the evaluation itself: while perfect scores are unlikely, the human references should be much better than the systems.\", \"In the outputs shown in Table 3, the questions generated by the scratchpad encoder often seem to be too general compared to the gold standard, or incorrect. E.g. \\\"what job did jefferson have\\\" is semantically related to his role in the declaration of independence but rather different. Similarly, being married to someone is not the same as having a baby with someone. While I could imagine human judges preferring them as they are fluent, I think they are wrong as they express a different meaning than the SPARQL query they are supposed to express. What were the guidelines used?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
r1gR2sC9FX | On the Spectral Bias of Neural Networks | [
"Nasim Rahaman",
"Aristide Baratin",
"Devansh Arpit",
"Felix Draxler",
"Min Lin",
"Fred Hamprecht",
"Yoshua Bengio",
"Aaron Courville"
] | Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we show that deep ReLU networks are biased towards low frequency functions, meaning that they cannot have local fluctuations without affecting their global behavior. Intuitively, this property is in line with the observation that over-parameterized networks find simple patterns that generalize across data samples. We also investigate how the shape of the data manifold affects expressivity by showing evidence that learning high frequencies gets easier with increasing manifold complexity, and present a theoretical understanding of this behavior. Finally, we study the robustness of the frequency components with respect to parameter perturbation, to develop the intuition that the parameters must be finely tuned to express high frequency functions. | [
"deep learning theory",
"fourier analysis"
] | https://openreview.net/pdf?id=r1gR2sC9FX | https://openreview.net/forum?id=r1gR2sC9FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xjnEG4eE",
"r1gA_UupkV",
"B1xDY8gQkV",
"Hklxk7ifJN",
"SJeMY2tbkV",
"Hyea0eZW1V",
"rylKTPtqCX",
"ByxMdFfmCQ",
"rJe2K_sW0Q",
"H1xuoxUj67",
"SJlUNA4i6m",
"SyxdkRevp7",
"ByeFBpev6X",
"Skx74sew6X",
"ByeCM8C8a7",
"HygwdiS037",
"SylaVMTh27",
"BkgJFfPt37"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544983730740,
1544550006285,
1543861886623,
1543840472476,
1543769209755,
1543733460864,
1543309248966,
1542822249737,
1542727812220,
1542312096195,
1542307373953,
1542028768135,
1542028608949,
1542028075197,
1542018582448,
1541458798819,
1541358133016,
1541137014956
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/Authors"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper757/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Real Data Consequence of the Spectral Bias\", \"comment\": \"Although we can\\u2019t update our submission anymore: following the reviewers' suggestion, we have run a few experiments on MNIST to demonstrate the effect of spectral bias on real data. It involves evaluating the robustness of neural network training dynamics to noise of various frequencies.\\n\\nWe train the same 6-layer deep 256-unit wide network on MNIST images of classes \\u201c1\\u201d and \\u201c7\\u201d. The ground-truth label is +/- 1 for the respective classes, and the network is trained with full-batch gradient descent with MSE loss and Adam. From the plots found here [1], we make the following observations.\\n\\n(a) Adding a low frequency noise signal to labels degrades the generalization performance (the difference between training and validation losses) to a much larger extent than high-frequency noise of the same amplitude.\\n\\n(b) The network is instantly able to fit the low frequency noise signal, causing the validation performance (w.r.t. clean validation labels without noise) to suffer. The training loss is small, as expected. In particular, observe that the validation performance is quite sensitive to the amplitude of the noise \\u2013 the only way to improve performance is by decreasing the noise amplitude.\\n\\n(c) High-frequency noise is only fit later in the training. This results in a dip in the validation score: this is around when the true (low-frequency) labels are learned. As the training progresses, higher frequencies are fit, which results in increasing validation loss but decreasing training loss. Further, observe that the validation performance around the dip is fairly robust to change in noise amplitude \\u2013 this is expected, since the amplitude of the high frequency noise shouldn\\u2019t affect the learning of the true target this early in the training.\\n\\n[1] https://imgur.com/a/pyMfCiL\"}",
"{\"metareview\": \"This paper considers an interesting hypothesis that ReLU networks are biased towards learning low frequency Fourier components, showing a spectral bias towards low frequency functions. The paper backs the hypothesis with theoretical results computing and bounding the Fourier coefficients of ReLU networks and experiments on synthetic datasets.\\n\\nAll reviewers find the topic to be interesting and important. However they find the results in the paper to be preliminary and not yet ready for publication. \\n\\nOn the theoretical front, the paper characterizes the Fourier coefficients for a given piecewise linear region of a ReLU network. However the bounds on the Fourier coefficients of the entire network in Theorem 1 seem weak, as they depend on the number of pieces (N_f) and the max Lipschitz constant over all pieces (L_f), quantities that can easily be exponentially big. The authors in their response have said that their bound on the Fourier coefficients is tight. If so, then the paper needs to discuss/prove why the quantities N_f and L_f are expected to be small. Such a discussion will help reviewers appreciate the theoretical contributions more.\\n\\nOn the experimental front, the paper does not show the spectral bias of networks trained on any real datasets. Reviewers are sympathetic to the challenge of evaluating the Fourier coefficients of a network trained on real datasets, but the paper does not outline any potential approach to attack this problem. \\n\\nI strongly suggest that the authors address these reviewer concerns before the next submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"ICLR 2019 decision\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"I thank the authors for taking the time and effort to address the issues raised.\\n\\nPersonally, I find the synthetic toy dataset experiments informative, and I agree with the authors that, since other work shows some hints of universality of the spectral bias, there is utility in studying controlled settings. Therefore I am not as concerned with the experimental settings as other reviewers are.\"}",
"{\"title\": \"Comment to All Reviewers\", \"comment\": \"We thank all reviewers for their feedback. Most importantly, in the discussions and revision, we tried to clarify the contribution of Section 2 and its role in the paper. We have revised Appendix D.2.2 to make it more clear that we obtain a closed form expression for the Fourier transform. We reinforced the theoretical argument for the spectral bias by analyzing the bias of the MSE gradient (Eq 11). Following AnonReviewer3's suggestion, we have included a suite of ablation experiments (Appendix D.3), showing the effect of architecture and the Lipschitz constant on the spectral bias. We hope this improved clarity makes the significance of our contributions more apparent.\", \"main_contributions\": \"1. We describe a computation procedure for the Fourier transform and spectrum of deep ReLU networks.\\n\\n2. We show a learning bias of neural networks towards low frequency functions. We believe this might be an important insight towards explaining why neural networks tend to prioritize learning simple patterns that generalize across data samples [Arpit et al. 2017].\\n\\n3. We investigate the subtle interplay between the learnability of large frequencies and the geometry of the data manifold, pointing that a low frequency function in input space can fit large frequencies on highly curved manifolds.\\n\\nWe believe our work provides new insights - both theoretical and empirical - without making any strong assumptions, and in a context (the learning dynamics of deep ReLU networks) where very little is known.\"}",
"{\"title\": \"Thank you for your additions.\", \"comment\": \"Thank you for your additions. We feel they reiterate concerns you had already made clear in the first version of your review, so it could be that we have been talking at cross-purposes.\\n\\n> Meanwhile, Lemma 1, \\\"exact characterization\\\", does not give any sense of how the slopes relate to weights of network. \\n\\nThe relation between slope and weights is given by Eq 3. Details are given in Appendix C. \\n\\n> the slope of each piece is being upper bounded by Lipschitz constant, which will be far off in most regions [...] Improving either issue would need to deal with \\\"cancellations\\\" I mention\\n\\nTo our current understanding, we have no reason to expect any 'cancellation' phenomenon to occur for general architectures. \\n\\nPlease also consider our addition of ablation experiments in Appendix A.3. As we mentioned in a previous comment (https://openreview.net/forum?id=r1gR2sC9FX&noteId=SJlUNA4i6m ), increasing the Lipschitz constant has a significant impact towards whether the network can match a target function in the Fourier domain, implying that e.g. when regressing high frequency functions, the bound can indeed be tight. \\n\\n> I would really like to see experiment showing Fourier coefficients at various stages of training of standard network on standard data [...] Admittedly, this is computationally expensive experiment. \\n\\nIndeed. For example, with (say) 784 dimensional inputs (on MNIST), it requires evaluating the network on a dense 784 dimensional grid. Even if the grid has 100 points per dimension, that amounts to 100^784 forward passes through the network.\"}",
"{\"title\": \"thank you\", \"comment\": \"Thank you for your time and comments.\", \"i_have_updated_my_review_to_detail_two_issues\": [\"Compelling experiments (estimating Fourier coefficients),\", \"Looseness of bounds (dealing with cancellations).\"]}",
"{\"title\": \"Reviewer Reply\", \"comment\": \"Thank you for the revisions and clarifications!\", \"regarding_architectural_choices\": \"I appreciate the authors' inclusion of the ablation experiments! I do think it would have been nice to have more extensive experiments of this kind as a prominent focus of the paper.\", \"regarding_the_effect_of_learning\": \"Thanks, I think analyzing the bias of the MSE gradient is a nice argument.\", \"regarding_the_choice_of_cost_function\": \"While it would have been interesting to have included analysis of cross-entropy in this paper, I think restriction to MSE is still suitable for scope.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for this exchange.\\n\\n > The theorem is only an upper bound so it can easily happen that Fourier spectrum is much different.\\n\\nWe reiterate that the analysis of Section 2 does not merely give a bound, but the closed-form expression for the Fourier transform (Lemma 1 and Eq 27) of deep ReLU networks. We have added Diaz\\u2019 formula for the Fourier transform of polytopes in Eq 27 of Appendix D.2.2 to make this point even more explicit. We obtain the Fourier coefficients as rational functions of k, giving the anisotropic spectral decay spelled out in Theorem 1. \\n\\n> Paper does not provide me with compelling argument that this spectral bias exists; as I said, Theorem 1 is loose\\n\\nThe analysis of Section 2 is agnostic of the learning method; so obviously, a compelling argument that a learning bias exists needs additional justification. However, as already mentioned in our previous reply, the learning bias does not emerge from Theorem 1 independently, and is certainly not contingent on the (tightness of the) bound. To reinforce the theoretical argument about the learning bias, we show in Section 3 (Eq 11) that the gradient of the MSE loss (w.r.t network parameters) inherits the spectral decay rate of the network and hence gradient descent based methods will admit such bias. As for lemma 1 and Theorem 1, they reveal the structure of the spectrum of ReLU networks which in itself is insightful, including the dependence of the bound on the architecture whose tightness we probe experimentally in Appendix A.3.\\n\\n> experiments are on synthetic data.\\n\\nNote that we are not the first to identify that there is a learning bias in deep ReLU networks. Extensive experiments on real data that support the learning bias claim have already been done in Arpit et al (2017). Our contribution is to formalize this observation using the framework of Fourier transform. 
We would also like to clarify the scope of our experiments: they are an integral part of the analysis. Their purpose is not merely to validate theoretical results, but to guide the pursuit of deeper theoretical insights (see discussions after experiments 1, 3 and 4).\\n\\nWe carry out these experiments in a controlled setting to minimize the effect of unknown and potentially misleading confounding factors. As we emphasized in Part 2 of our previous answer: unlike real data, synthetic data affords us such control (e.g. over the shape of the data-manifold, the target function, the sampling distribution, etc). While we do indeed show consequences of our analysis on real data (Cifar10 and MNIST) in Appendix A4 and A5, these experiments are meant to supplement the empirical analysis of the main paper \\u2013 not to replace it.\\n\\nFinally, we note that many seminal papers make extensive use of synthetic experiments (e.g. Poole et al. 2016); in fact, many significant theoretical contributions make strong simplifying assumptions about the model (e.g Soudry et al. 2016) and are not constructed to model realistic datasets (e.g Eldan and Shamir 2015 \\u2013 the worst-case data distribution in their proof corresponds to the indicator of an L2 ball in Fourier space). We believe our work provides new insights - both theoretical and empirical - without making any strong assumptions, and in a context (the learning dynamics of deep ReLU networks) where very little is known.\\n\\n> Consequently, on real data, trend may be completely different, even reversed.\\n\\nBesides the fact that we do show experiments on real data in the appendix, we feel there is no reason to expect such an anomalous behaviour on real data given that we do not make any assumption on the target function and data (except that it is on a bounded domain, which is true for images). 
In fact, in Arpit et al (2017), who conduct all their experiments on real data, it was shown that deep ReLU networks always prioritized learning low complexity functions.\\n\\n> For deep networks, which are not only piecewise affine but (e.g., due to existence of adversarial examples) have highly nonsmooth regions\\n\\nPlease note that we are not claiming that deep networks are all smooth with only low frequency modes. What our analysis shows is that the learning is biased towards smooth functions, in the sense that it prioritizes low frequencies, which are learned first/faster.\\n\\nThe existence of adversarial examples is not incompatible with our claims. In fact, we hope our work in Section 4 inspires future work towards understanding adversarial examples from the perspective of sensitivity analysis under the manifold hypothesis.\\n\\n> I feel prior work I mentioned sets much higher bar in terms of what is possible\\n\\nAs we pointed out in our previous reply, the prior work you mentioned addresses specific problems that are different from ours, and in a different (and often simplified) context. Hence, we do not feel the comparison with our work is fair and relevant to assess our results.\"}",
"{\"title\": \"thanks\", \"comment\": \"Thank you for your comments.\\n\\nThank you for looking over prior work I mentioned. You may crawl Google Scholar from these to find more.\\n\\nUnfortunately, you have not moved me away from my core assessment. Succinctly:\\n\\n1. Paper does not provide me with compelling argument that this spectral bias exists; as I said, Theorem 1 is loose, and experiments are on synthetic data.\", \"explaining_further\": \"1. Based on your paragraph starting with \\\"1. What we do (our novelty)\\\", I feel I have not misrepresented your contribution; namely, (i) is Theorem 1, and (ii) and (iii) are based on synthetic experiments. The theorem is only an upper bound so it can easily happen that Fourier spectrum is much different. Similarly, experiments are stylized synthetic data. Consequently, on real data, trend may be completely different, even reversed. Indeed, Fourier transforms work best on smooth functions. For deep networks, which are not only piecewise affine but (e.g., due to existence of adversarial examples) have highly nonsmooth regions, Fourier transform can be expected to be messy. There could easily be many learning problems that seem well-behaved in every possible way, yet Fourier approach advocated here suggests otherwise. From what is provided in this paper, I simply do not know, and am not compelled to use Fourier coefficients to study implicit bias of deep networks. I feel prior work I mentioned sets much higher bar in terms of what is possible and needed to make a compelling argument (due to this as well, I am not compelled to increase score based purely on technical contribution of Theorem 1).\"}",
"{\"title\": \"Response to Reviewer 4 [1/2]\", \"comment\": \"Thank you for your thorough comments and feedback.\\n\\nOur response is split in two parts.\\n\\nPart [1/2]\\n\\n**Context of the paper and related work**\\n\\nThank you for pointing out the importance of the topic! It is also our view that our work should be understood in the context of the very active lines of research on expressivity and implicit bias in neural networks. \\n\\nTo allow for a fair comparison of our work with the existing literature, we feel it is worth clarifying what is specifically addressed in our paper - and what is not:\\n\\n1. What we do (our novelty): \\n(i) We describe a computation procedure for the Fourier transform and spectrum of deep ReLU networks. To our knowledge, although the analysis is largely inspired by techniques developed by [Diaz et al. 2016] to evaluate the Fourier shape transform of polytopes, this is a new result. (ii) We build upon this result to show a learning bias of neural networks towards low frequency functions. This is motivated by the recent observation made in [Arpit et al. 2017] that neural networks tend to prioritize learning simple patterns that generalize across data samples. (iii) We investigate the subtle interplay between the learnability of large frequencies and the geometry of the data manifold. We believe this is a novel and original insight. \\n\\n2. What we don't do:\\n(i) Our work departs from analysis of approximation bounds. This is not our goal. \\n(ii) Although we believe this is an important and challenging problem, we do not aim at a full characterization of the implicit bias of gradient descent. This requires tackling the learning dynamics of non-linear neural networks, which is, to the best of our knowledge, a largely open research topic -- not (directly) addressed in our paper. \\n(iii) Our goal is not to derive generalization bounds. \\n\\n**On the related work on Implicit Bias:**\\n\\nThank you for the nice references! 
We have included some of them in our latest revision: both in the introduction, to make the context of our work more explicit; and in the Related Work section that has been expanded accordingly. We note however that having strong theorems in the context of largely intractable systems such as neural networks often requires making strong assumptions. For instance, the main theorem of the reference Soudry et al, \\\"The Implicit Bias of Gradient Descent on Separable Data\\\", concerns logistic regression on linearly separable data!\\n\\nWe would also like to point out that the term \\\"implicit bias\\\" is quite generic and can have many interpretations. In our work, we consider learning bias in the ubiquitous class of deep ReLU networks. We believe this is an important first step for future work to build on, given how we expose in a principled manner that a learning bias in the Fourier domain indeed exists for deep ReLU networks.\\n\\n**On the related work on expressivity:**\\n\\nThank you for pointing out the missing references on architecture-dependent approximation bounds; we have included them in our new revision. We feel there may have been a confusion about the purpose of Theorem 1: it should not be understood as an approximation bound. The goal in Section 2 is to compute the Fourier transform; we choose to present the main result of Section 2.2 in the form of an asymptotic bound, which puts the emphasis on the spectral decay. We discuss more the significance of Theorem 1 below. \\n\\nOn [Eldan & Shamir 2015]: \\n\\nThank you for this nice reference! This paper makes elegant use of specific properties of the Fourier transform of 2-layer networks to show a specific depth-separation result. Although the motivation for our paper differs from theirs, their proof indeed gives a nice insight on why 2-layer networks do not approximate high frequency functions well. We have included this in the Related Work section. 
Note however that:\\n(i) The two papers address entirely different questions: our primary goal is to expose the spectral bias of deep ReLU networks during learning, while their goal is to illustrate the role of depth in expressivity, through a worst-case separation analysis between 2 and 3-layer networks.\\n(ii) Furthermore, the techniques developed in their paper are of little help for our goal: (a) they do not actually compute the Fourier coefficients and (b) their argument is tailored towards 2-layer networks only (i.e. those having the form $\\\\sum_i f_i(<v_i,x>)$ for some activation function $f$). \\n\\n[Continued in Part 2]\"}",
"{\"title\": \"Response to Reviewer 4 [2/2]\", \"comment\": \"Part [2/2]\\n\\n**Regarding Tightness of Bound:**\", \"for_some_context\": \"the primary motivation behind Section 2 is to develop a formal framework for understanding the results in Sections 3 and 5 - not to derive approximation error bounds. In a recent revision, we have updated Section 3 in the manuscript to make the link to Theorem 1 more explicit. For example, we now show that the gradient of the MSE loss (w.r.t network parameters) inherits the spectral decay rate of the network function (Eq 11). Consequently, the residual (difference between function and target) at lower frequencies is weighted more strongly than at higher frequencies. Following a suggestion of AnonReviewer3, we also included a suite of qualitative ablation experiments demonstrating the effect of width, depth and max-norm in Appendix A.3 of the new revision. As anticipated from Theorem 1, we find that increasing depth indeed helps towards fitting high frequencies, more so than increasing width. Further, increasing the weight clip also has the same effect.\", \"on_tightness\": \"The Fourier coefficients take the general form of a rational, typically homogeneous function $C(W_\\\\epsilon, \\\\hat k) / k^{d+1}$.\\n\\n(1) The actual inequality originates from the terms C depending on linear mappings operating on weights $W_{\\\\epsilon}$ and general unit vectors $\\\\hat{k}$, as can be seen by recursively expanding the FT of the polytope as in Diaz et al. 2016. We think that the requirement of generality leaves little scope for further cancellations and tighter bounds. \\n\\nIn other words, for a general $k$, the tightness of the bound depends on the weight matrices. Indeed, we provide empirical evidence in Appendix A.3: increasing the weight clip (i.e. 
by relaxing the upper bound on the parameter max-norm, and by proxy on the Lipschitz constant) has a significant impact towards whether the network can match the target function in the Fourier domain. This implies that in this particular setting, the bound must be tight, given that it is preventing the network from learning higher frequencies. \\n\\n(2) Observe that in equation 11, the inequality can only affect all frequencies uniformly. In other words, the scaling behaviour of the fourier coefficients with increasing $k$ remains intact, irrespective of the numerator. Therefore, the down-scaling of the contribution towards the loss gradient of the residual at higher frequencies is not affected by the tightness of the bound. \\n\\n**Regarding Synthetic Data**\\n\\nWhile we understand and appreciate the need for showing consequences of our analysis on real data (in Appendix), our rationale for exclusively using synthetic data in the main text is that it affords us rich control over experimental parameters (e.g. shape of the manifold, frequency of functions defined on manifold). In a sense, it allows us to study the \\\"raw\\\" behaviour of the network, unconfounded by unknown external factors that might depend on the data in uncontrollable ways. \\n\\n**In closing**\\n\\nWe hope our response and the updated revision address your concerns.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thoughtful comments!\\n\\n**On the dependence on architectural choice**\\n\\nFollowing your suggestion, we have included a suite of qualitative ablation experiments demonstrating the effect of width, depth and max-norm in Appendix A.3 of the new revision. As anticipated from Theorem 1, we find that increasing depth indeed helps towards fitting high frequencies, more so than increasing width. Further, increasing the weight clip also has the same effect. \\n\\n**On the theoretical evidence of the learning bias**\\n\\nThank you for this feedback. We completely agree that relying on the increasing spectral norm to explain the spectral bias (i.e. low frequencies learned first) can be restrictive. In the latest revision, we reinforce the theoretical argument by showing that the gradient of the MSE loss (w.r.t network parameters) inherits the spectral decay rate of the network function itself (Eq 11). Consequently, the residual (difference between function and target) at lower frequencies is weighted stronger than at higher frequencies.\", \"regarding_tightness_of_the_bound\": \"note that although we choose to present the main result of Section 2.2 in the form of an asymptotic bound in Theorem 1, our analysis (Lemma 1 together with the procedure described in Diaz et al. 2016 and explained in detail Appendix D.2) actually allows us to get the Fourier components in closed form, for a given set of weight matrices. These components typically decay as fast as $k^{-d-1}$ where d is the input data dimension (around 1000 (!) for small-scale real-world problems (e.g. 784 for MNIST)) leading to larger contribution from lower frequencies relative to the higher ones.\\n\\nOn the number $N_f$ of linear regions: During training, i.e. for a given architecture, we note that Raghu et al. 
2016 provide tight upper-bounds for $N_f$ which depend on the width and depth of the network, along with the input dimensionality. The effect of $N_f$ is therefore somewhat limited (in contrast with $L_f$, which can become arbitrarily large as training progresses).\\n\\n**On the role of the cost function**\\n\\nThank you for bringing up this interesting point! Note the brief discussion of the role of the MSE loss in Section 3 (Eq 10), showing that it induces no structural bias towards any particular frequency component (there is no weight coefficient $w(k)$ before $|f(k) - \\\\lambda(k)|^2$ in Eq. 10). This allows us to eliminate the loss function as a potential confounding factor when empirically demonstrating the spectral bias. The same cannot be said of the cross-entropy loss, which could potentially introduce additional biases and thereby make it difficult to isolate the bias due to the network parameterization from that due to the loss function itself. This is precisely why we used MSE in most of our experiments. We make this clearer in the latest revision. \\n\\nNote however that we did use cross entropy in Experiment 3, which reproduces Experiment 2 in the context of classification. We obtained similar results. \\n\\nWe hope you find that our revision and clarifications address your concerns.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your constructive feedback!\\n\\n**On clarity of the message**\\n\\nThanks for this valuable feedback. We understand your concerns about clarity. To that end, we have reworked parts of our exposition, including the abstract and introduction, in the new revision, to make the message more clear.\\n\\nWe believe our work follows lines of research on the learning biases of neural networks, motivated by a lack of theoretical understanding of the good generalization performance of these models despite their large capacity [Zhang et al. 2017]. We approach this through the lens of Fourier analysis, which we think is an original and interesting angle. Our main contribution is to show a learning bias of neural networks towards low frequency functions. We believe this might be an important insight towards explaining why neural networks tend to prioritize learning simple patterns that generalize across data samples [Arpit et al. 2017]. We also investigate the subtle interplay between the learnability of large frequencies and the geometry of the data manifold, pointing that a low frequency function in input space can fit large frequencies on highly curved manifolds. \\n\\nWe hope this clarifies the picture we want to convey. \\n\\n**On the significance of Theorem 1:**\\n\\nDespite the analysis of Section 2.2 being of interest in its own right as a result on the spectral properties of ReLU networks, Theorem 1 plays a central role in developing a formal understanding of the results in Sections 3 and 5. We have revised Section 3 in the manuscript to make the link to Theorem 1 more explicit. \\n\\nAlthough we choose to present the main result of Section 2.2 in the form of an asymptotic bound in Theorem 1, note that our analysis (Lemma 1 together with the procedure described in Diaz et al. 
2016 and explained in detail in Appendix D.2) actually allows us to get the Fourier components in closed form, for a given set of weight matrices. These components typically decay as fast as $k^{-d-1}$ where d is the input data dimension (around 1000 (!) for small-scale real-world problems (e.g. 784 for MNIST)) leading to larger contribution from lower frequencies relative to the higher ones.\\n\\nWhile the number $N_f$ of linear regions can indeed be large, it affects all frequencies uniformly, i.e. leaves the spectral decay rate intact. Moreover, in most practical settings one usually constrains the Lipschitz constant $L_f$ (which appears together with $N_f$ in the numerator of the bound), e.g. with weight decay, batch norm [cf. Santurkar et al. 2018], gradient penalty, spectral normalization [Miyato et al. 2018], etc.\\n\\n**On experimenting with synthetic data**: \\n\\nWhile we understand and appreciate the need for showing consequences of our analysis on real data, our rationale for exclusively using synthetic data in the main text is that it affords us rich control over experimental parameters (e.g. shape of the manifold, frequency of functions defined on manifold). In a sense, it allows us to study the \\\"raw\\\" behaviour of the network, unconfounded by unknown external factors that might depend on the data in uncontrollable ways. \\n\\n**On the MSE Going to Zero**\\n\\nIn Experiment 1 (Section 3), the mean squared error loss drops reasonably close to zero (typically around 0.05; the revision includes loss curves in the appendix). \\n\\n> GD fits lower frequencies, because it has a hard time fitting things that oscillate frequently?\", \"your_intuition_is_correct\": \"it is indeed true and one of the central themes of the paper that high-frequency components of the target function (i.e. parts of the function that oscillate frequently) are harder to fit. 
If the target function contains extremely large frequencies, the convergence can be extremely slow.\\n\\n**On regression vs. classification results:**\", \"our_report_of_the_results_might_not_have_been_clear_enough\": \"increasing L is better for *both* regression and classification. You can observe in Figure 4 that increasing L (going up a column) yields better classification accuracies.\\n\\n**In closing:** \\n\\nWe hope that our answer and revision make the main message of the paper more apparent. \\n\\n[Santurkar et al. 2018] https://arxiv.org/abs/1805.11604\\n[Miyato et al. 2018] https://arxiv.org/abs/1802.05957\\n[Zhang et al 2017] https://arxiv.org/abs/1611.03530\\n[Arpit et al 2017] https://arxiv.org/abs/1706.05394\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your insightful comments and thought-provoking questions!\\n\\n**On the choice of activation**:\\n\\n> It is an interesting question whether the behaviour characterized by the authors are universal. \\n\\nIndeed! We suspect it is fairly common for the usual activation functions. For instance, Xu et al (2018) (in an independent work made available online shortly after ours) show the same behaviour for deep networks with sigmoid/tanh activation. \\n\\n**On clarity of Section 4**:\\n\\nThanks for this valuable feedback. Section 4 has been streamlined. We hope it's nicer to read now!\\n\\n**On the practical significance of the results**: \\n\\nWe believe that the potent toolbox of Fourier analysis has a lot of unharnessed potential for shedding light on fundamental issues like generalization (e.g. from a sampling theory perspective - under what conditions can the target function be reconstructed from samples?) and adversarial examples (e.g. from the perspective of sensitivity analysis under data-distributions supported on low-dimensional manifolds). We view the spectral bias as an important Fourier domain consequence of the prior of neural net parameterization, and as such, expect it to play an important role in future work. \\n\\n**About your comments/questions**:\\n\\n> Do you understand why and does Fourier spectrum provide insights into layerwise behaviour?\\n\\nWe consider it an exciting open question! We tried to look for a pattern in the layer-wise behaviour of the spectral norms, but did not find anything statistically significant over multiple runs (except that they were consistently increasing, see e.g. this plot: https://imgur.com/a/rCbAW47 ). \\n\\n> how do you find these results also hold for cross-entropy loss?\", \"we_might_not_have_been_clear_enough_on_the_set_up_of_experiment_3\": \"in fact we do use cross-entropy loss there. 
We threshold a sinusoid at 0.5, and train a network on the resulting binary target signal using binary cross-entropy loss (we use Pytorch's BCEWithLogitsLoss, which applies a sigmoid internally). The \\\"categorical\\\" is indeed a typo; it should be \\\"binary\\\". We have rephrased the corresponding lines in the latest revision and hope it's clearer now. We have also fixed the typos you found (thank you!).\", \"in_closing\": \"Thank you for the positive feedback. We hope to have adequately addressed your concerns. \\n\\n[Xu et al 2018] https://arxiv.org/abs/1807.01251\"}",
"{\"title\": \"theoretical and empirical analysis of implicit bias in neural networks via Fourier coefficients.\", \"review\": \"Summary.\\n\\nThis paper has theoretical and empirical contributions on topic of Fourier coefficients of neural networks. First is upper bound on Fourier coefficients in terms of number of affine pieces and Lipschitz constant of network. Second is collection of synthetic data and trained networks whereupon is argued that neural networks focus early effort upon low Fourier coefficients.\\n\\n\\nBrief evaluation.\", \"pros\": [\"This paper attacks important and timely topic: identifying and analyzing implicit bias of neural networks paired with standard training methods.\"], \"cons\": [\"\\\"Implicit bias\\\" hypothesis has been put forth by many authors for many years, and this paper does not provide compelling argument that Fourier coefficients provide good characterization of this bias.\", \"Regarding \\\"many authors for many years\\\", this paper fails to cite and utilize vast body of prior work, as detailed below.\", \"Main theorem here is loose upper bound primarily derived from prior work, and no lower bounds are given. Prior work does assess lower bounds.\", \"Experiments are on synthetic data; prior work on implicit regularization does check real data.\", \"Detailed evaluation.\", \"\\\"Implicit bias\\\" hypothesis appears in many places, for instance in work of Nati Srebro and colleagues (\\\"The Implicit Bias of Gradient Descent on Separable Data\\\" (and follow-ups), \\\"Exploring generalization in deep learning\\\" (and follow-ups), and others); it can also be found in variety of recent generalization papers, for instance again the work of Srebro et al, but also Bartlett et al, Arora et al. E.g., Arora et al do detailed analysis of favorable biases in order to obtain refined generalization bound. 
Consequently I expect this paper to argue to me, with strong theorems and experiments, that Fourier coefficients are a good way to assess implicit bias.\", \"Theorem 1 is proved via bounds and tools on the Fourier spectra of indicators of polytopes due to Diaz et al, and linearity of the Fourier transform. It is only upper bound (indeed one that makes no effort to deal with cancellations and thus become tight). By contrast, the original proofs of depth separation for neural networks (e.g., Eldan and Shamir, or Telgarsky, both 2015), provide lower bounds and metric space separation. Indeed, the work of Eldan&Shamir extensively uses Fourier analysis, and the proof develops a refined understanding of why it is hard for a ReLU network to approximate a Fourier transform of even simple functions: it has to approximate exponentially many tubes in Fourier space, which it can only do with exponentially many pieces. While the present paper aims to cover some material not in Eldan&Shamir --- e.g., the bias with training --- this latter contribution is argued via synthetic data, and overall I feel the present work does not meet the (high) bar set by Eldan&Shamir.\", \"I will also point out that prior work of Barron, his \\\"superposition\\\" paper from 1993, is not cited. That paper presents upper bounds on approximation with neural networks which depends on the Fourier transform. There is also follow-up by Arora et al with \\\"Barron functions\\\".\", \"For experiments, I would really like to see experiment showing Fourier coefficients at various stages of training of standard network on standard data and standard data but with randomized labels (or different complexity in some other way). These Fourier coefficients could also be compared to other \\\"implicit bias\\\" quantities; e.g., various norms and complexity measures. In this way, it would be demonstrated that (a) spectral bias happens in practice, (b) spectral bias is a good way of measuring implicit bias. 
Admittedly, this is computationally expensive experiment.\", \"Regarding my claim that Theorem 1 is \\\"loose upper bound\\\": the slope of each piece is being upper bounded by Lipschitz constant, which will be far off in most regions. Meanwhile, Lemma 1, \\\"exact characterization\\\", does not give any sense of how the slopes relate to weights of network. Improving either issue would need to deal with \\\"cancellations\\\" I mention, and this is where it is hard to get upper and lower bounds to match.\", \"I feel this paper could be made much stronger by carefully using the results of all this prior work; these are not merely citation omissions, but indeed there is good understanding and progress in these papers.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Intriguing topic and analysis, but its impact on understanding of neural nets seems limited\", \"review\": \"Synopsis:\\nThis paper analyzes deep ReLU neural networks based on the Fourier decomposition of their input-output map. They show theoretically that the decomposition is biased towards low frequencies and give some support that low frequency components of a function are learned earlier under gradient descent training.\", \"pros\": \"--Fourier decomposition is an important and (to the best of my knowledge) mostly original angle from which the authors analyze the input-output map governing neural networks. There is some neat mathematical analysis contained here based off of the piecewise-linearity of deep ReLU nets and Fourier decomposition of polytopes in input space.\\n\\n--The setup in the toy experiments of Sec. 4 seems novel & thoughtful; the authors consider a lower-dimensional manifold embedded in a higher dimensional input space, and the Fourier decomposition of the composition of two functions is related to the decomposition of constituents.\", \"cons\": \"--While this paper does a fairly good job establishing that NNs are spectrally biased towards low frequencies, I\\u2019m skeptical of its impact on our understanding of deep neural nets. Specifically, at a qualitative level it doesn\\u2019t seem very surprising: intuitively (as the authors write in Sec. 5), capturing higher frequencies in a function requires more fine tuning of the parameters. At initialization, we don\\u2019t have such fine tuning (e.g. 
weights/biases drawn i.i.d Normal), and upon training it takes a certain amount of optimization time before we obtain greater \\u201cfine tuning.\\u201d At a quantitative level, these results would be more useful if (i) some insight could be gleaned from their dependence on the architectural choices of the network (in particular, depth) or (ii) some insight could be gained from how the spectral bias compares between deep NNs and other models (as is discussed briefly in the appendix -- for instance, kernel machines and K-NN classifiers). The primary dependence in the spectral decay (Theorem 1) seems to be that it (i) decays in a way which depends on the input dimensionality in most directions and (ii) it is highly anisotropic and decays more slowly in specific directions. The depth dependence seems to arise from the constants in the bound in Theorem 1 (see my comment below on the bound). \\n\\n--Relying on the growth of the weight norm to justify the network's bias towards learning lower frequencies earlier in training seems a bit tenuous to me. (I think the stronger evidence for learning lower frequencies comes from the experiments.) In particular, I'm not sure I would use the bound in Theorem 1 to conclude what would happen to actual Fourier components during training, since the bound may be far from being met. For instance, (1) the number of linear regions N_f changes during training -- what effect would this have? Also, (2) what if one were to use orthogonal weight matrices for training? Presumably the network would still train and generalize but the conclusions might be different (e.g. the idea that growth of weight norms is the cause of learning low frequency components earlier).\", \"miscellaneous\": \"--Would appreciate a greater discussion on the role of the cost function (MSE vs cross-entropy) in the analysis or experiments. 
Are the empirical conclusions mostly identical?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting ideas; message unclear\", \"review\": \"The paper considers the Fourier spectrum of functions represented by Deep ReLU networks, as well as the relationship to the training procedure by which the network weights can be learned.\\n\\nIt is well-known (and somewhat obvious) that deep neural networks with rectifier activations represent piecewise linear continuous function. Thus, the function can be written as a sum of the products of indicators of various polytopes (which define the partition of R^d) and the linear function on that polytope. This allows the authors to compute the Fourier transform (cf. Thm. 1) and the magnitude of f(k) decays as k^{-i} where the i can depend on the polytope in some intricate fashion. Despite the remarks at the end of Thm 1, I found the result hard to interpret and relate to the rest of the paper. The appearance of N_f in the numerator (which can be exponentially large in the depth) may well make these bounds meaningless for any networks that are relevant in practice.\\n\\nThe main paper only has experiments on some synthetic data.\", \"sec_3\": \"Does the MSE actually go to 0 in these experiments? Or are you observing that GD fits lower frequencies, because it has a hard time fitting things that oscillate frequently?\", \"sec_4\": \"I would have liked to see a clearer explanation for example of why increasing L is better for regression, but not for classification. As it stands I can't read much from these experiments.\\n\\nOverall, I feel that there might be some interesting ideas in this paper, but the way it's currently written, I found it very hard to get a good \\\"picture\\\" of what the authors want to convey.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Analysis of Spectral Bias of ReLU networks\", \"review\": \"Analysis of Spectral Bias of ReLU networks\\n\\nThe paper uses Fourier analysis to study ReLU network utilizing its continuous piecewise linear structure.\\n\\nMain finding is that these networks are biased towards learning low frequency which authors denote `spectral bias\\u2019. This provides another theoretical perspective of neural networks preferring more smooth functions while being able to fit complicated function. Also shows that in terms of parameters networks representing lower frequency modes are more robust.\", \"pro\": [\"Nice introduction to Fourier analysis providing non-trivial insights of ReLU networks.\", \"Intuitive toy experiments to show spectral bias and its properties\", \"Thorough theoretical analysis and empirical support\"], \"con\": [\"The analysis is clearly for ReLU networks although the title may provide a false impression that it corresponds to general networks with other non-linearities. It is an interesting question whether the behaviour characterized by the authors are universal.\", \"At least for me, Section 4 was not as clearly presented as other section. It takes more effort to parse what experiments were conducted and why such experiments are provided.\", \"Although some experiments on real dataset are provided in the appendix, I personally could not read much intuition of theoretical findings to the networks used in practice. Does the spectral bias suggest better way of training or designing neural networks for example?\", \"Comments/Questions:\", \"In Figure 1, two experiments show different layerwise behaviour, i.e. equal amplitude experiment (a) shows spectral norm evolution for all the layers are almost identical whereas in increasing amplitude experiment (b) shows higher layer change spectral norm more than the lower layer. 
Do you understand why and does Fourier spectrum provide insights into layerwise behaviour?\", \"Experiment 3 seems to perform binary classification using thresholding to the logits. But how do you find these results also hold for cross-entropy loss?\", \"\\u201cThe results confirm the behaviour observed in Experiment 2, but in the case of classification tasks with categorical cross-entropy loss.\\u201d\"], \"nit\": \"p3 ReLu -> ReLU / p5 k \\\\in {50, 100, \\u2026 350, 400} (close bracket) / p5 in Experiment 2 and 3 descriptions the order of Figure appears flipped. Easier to read if the figure appears as the paper reads / p7 Equation 11 [0, 1]^m\\n\\n\\n********* updated review *************\\n\\nBased on the issues raised from other reviewers and rebuttal from authors, I started to share some of the concerns on applicability of Thm 1 in obtaining information on low k Fourier coefficients. Although I empathize author's choice to mainly analyze synthetic data, I think it is critical to show the decays for moderately large k in realistic datasets. It will convince other reviewers of significance of main result of the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SJg6nj09F7 | NEURAL MALWARE CONTROL WITH DEEP REINFORCEMENT LEARNING | [
"Yu Wang",
"Jack W. Stokes",
"Mady Marinescu"
] | Antimalware products are a key component in detecting malware attacks, and their engines typically execute unknown programs in a sandbox prior to running them on the native operating system. Files cannot be scanned indefinitely so the engine employs heuristics to determine when to halt execution. Previous research has investigated analyzing the sequence of system calls generated during this emulation process to predict if an unknown file is malicious, but these models require the emulation to be stopped after executing a fixed number of events from the beginning of the file. Also, these classifiers are not accurate enough to halt emulation in the middle of the file on their own. In this paper, we propose a novel algorithm which overcomes this limitation and learns the best time to halt the file's execution based on deep reinforcement learning (DRL). Because the new DRL-based system continues to emulate the unknown file until it can make a confident decision to stop, it prevents attackers from avoiding detection by initiating malicious activity after a fixed number of system calls. Results show that the proposed malware execution control model automatically halts emulation for 91.3\% of the files earlier than heuristics employed by the engine. Furthermore, classifying the files at that time improves the true positive rate by 61.5%, at a false positive rate of 1%, compared to a baseline classifier. | [
"malware",
"execution",
"control",
"deep reinforcement learning"
] | https://openreview.net/pdf?id=SJg6nj09F7 | https://openreview.net/forum?id=SJg6nj09F7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlDoe1xg4",
"HklnZlx90m",
"SJglUyl5RQ",
"SJlG7yecRX",
"r1lI1xpkam",
"HkeV9i9227",
"SyeeaLc52X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544708254779,
1543270404174,
1543270215568,
1543270169885,
1541554141702,
1541348236272,
1541215928071
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper756/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper756/Authors"
],
[
"ICLR.cc/2019/Conference/Paper756/Authors"
],
[
"ICLR.cc/2019/Conference/Paper756/Authors"
],
[
"ICLR.cc/2019/Conference/Paper756/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper756/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper756/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper trains a classifier to decide if a program is malware and when to halt its execution. The malware classifier is mostly composed of an RNN acting on featurized API calls (events). The presentation could be improved. The results are encouraging, but the experiments lack solid baselines, comparisons, and grounding of the task usefulness, as this is not done on an established benchmark.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting application of DRL, but the presentation is too confusing and the experiments too limited for the task and results\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you very much for reviewing the paper and your helpful feedback.\\n1. We believe this paper is important because it shows another \\u201creal-world\\u201d application of DRL on a very important problem in the security field. To the best of our knowledge, it is also the first study in the security field to detect malware using a DRL-based system. The originality of this paper stems from a novel neural malware control model which learns the best time to halt the execution of an unknown file based on deep reinforcement learning. Also, our model contains several inter-connected submodules, and the DRL model is one of the important modules in the system. The focus of this paper is not an improved DRL algorithm, but to propose a novel system-level design using both DRL and DNN models to protect users from malware. Using DRL offers significant improvement (improves the true positive rate by 61.5%, at a false positive rate of 1%) compared to the best previously proposed solution.\\nAnother contribution of this paper is the unique reward design for this specific problem; the choice of rewards is not arbitrary but follows the two rules described in the paper: a) the DRL network can learn to halt emulation as quickly as possible. b) The closer an event prediction is to the true label of the file, the larger the reward should be given at that state. It is precisely because the reward is designed following these two rules that better results for malware detection are obtained. It is also the use of deep reinforcement learning that gives us the flexibility to design the rewards satisfying our requirements.\\n\\n2. As suggested by the reviewer, we added one more experiment using a CNN-based classifier in Figure 6. Also, in (Athiwaratkun & Stokes (2017)), it was shown that the model with an attention layer performs worse than our baseline model with an LSTM and a max pooling layer on this malware classification task. 
For this reason, we did not compare the models with the one with an attention layer in our experiments. The max pooling layer seems to work much better due to sporadic nature of the malicious activity in malware.\\nThere are indeed many other models which provide good performance on various classification tasks, however, we are more interested to compare with recently proposed state-of-the-art models related to malware detection tasks specifically, as in (Athiwaratkun & Stokes (2017) and Kolosnjaji et al (2016)).\", \"reference\": \"1. Ben Athiwaratkun and Jack W Stokes. Malware classification with lstm and gru language models and a character-level cnn. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2482\\u20132486. IEEE, 2017.\\n2. Bojan Kolosnjaji, Apostolis Zarras, George Webster, and Claudia Eckert. Deep learning for classification of malware system call sequences. In Australasian Joint Conference on Artificial Intelligence, pp. 137\\u2013149. Springer International Publishing, 2016.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you very much for reviewing the paper and your helpful feedback.\\n1.\\tFirst, it is very natural to have a very small action space in reinforcement learning. For example, in solving a maze problem using RL, the action space only includes \\u201cleft\\u201d, \\u201cright\\u201d, \\u201cup\\u201d and \\u201cdown\\u201d four actions. In learning to play blackjack game using reinforcement learning (Sutton & Barto (1998)), the action space only includes \\u201cfold\\u201d and \\u201chit\\u201d two actions.\\nSecond, as given by the state definition in the paper, a previous action will affect the current state. For instance, if a \\u201cHalt\\u201d action is chosen at the last state, the current state will be the same as the last state (i.e. loop back to the last state). Comparatively, if a \\u201ccontinue\\u201d action is selected at the previous state, the current state will contain a new event; at the same time, the previous events\\u2019 information is stored in the histogram.\\nA very important reason why we choose to use DRL instead of standard deep learning or other supervised learning models for this problem is because there is no ground truth for the stop/continue decisions for an anti-malware engine. The engine can only learn \\u201cwhen to stop\\u201d based on the reward/penalty between its interaction with the input file. The engine will stop if it is confident enough to decide:\\na. The file is a malware or\\nb. The file is benign, and the engine can stop wasting computational resources.\\nThis conclusion is gradually learned through the interaction between the engine and the input file by accumulating the confidence rewards (either positive or negative). 
The procedure is most suitable to be modeled by reinforcement learning, which can learn the action using accumulated information without a concrete ground truth.\\n2.\\tWe were not clear in the earlier draft, but we have compared the proposed system to many different baselines. We have made the point clearer in the text. As shown in Figure 6 and 7, we have compared our model\\u2019s result with 5 baselines using the recently proposed state-of-the-art models for malware classification. In the revised draft, we added another experiment using the convolutional neural network (CNN) based classifier introduced in Kolosnjaji et al (2016) for malware classification task and another one for a single hidden layer neural network. It is shown that our DRL-based model outperforms all 7 baselines in terms of the true positive rate at the same false positive rate.\\nConsidering the uniqueness of this type of malware detection task, there is no public dataset available. Also due to the sensitivity of the malware information, companies do not share these datasets in order to prevent attackers from using the datasets to generate adversarial attacks in the wild.\", \"reference\": \"1.\\tA Sutton, Richard S., and Andrew G. Barto. Introduction to reinforcement learning. Vol. 135. Cambridge: MIT press, 1998\\n2.\\tBojan Kolosnjaji, Apostolis Zarras, George Webster, and Claudia Eckert. Deep learning for classification of malware system call sequences. In Australasian Joint Conference on Artificial Intelligence, pp. 137\\u2013149. Springer International Publishing, 2016.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you very much for reviewing the paper and your helpful feedback.\\n1. We added a paragraph to Appendix A to explain the labeling procedure of the files, which describes that the labels are production labels that are used for training production antimalware classifiers. We also updated Appendix A to indicate that these labeled files contain the system calls generated by running the file in a production anti-malware emulator in the company\\u2019s production environment. We can move this section earlier in the paper if it is okay to go beyond 8 pages.\\n2. We added two more baselines for the file classifier in Figure 6. One is trained using a simple neural network model with only one hidden layer, and the other is trained using a convolutional neural network (CNN) based model introduced in Kolosnjaji et al. (2016) for malware classification. The original 5 baseline models are recently proposed state-of-the-art deep sequence-based models for malware detection.\", \"reference\": \"1. Bojan Kolosnjaji, Apostolis Zarras, George Webster, and Claudia Eckert. Deep learning for classification of malware system call sequences. In Australasian Joint Conference on Artificial Intelligence, pp. 137\\u2013149. Springer International Publishing, 2016.\"}",
"{\"title\": \"Possibly useful malware detector but unclear paper and uncharacterized black box labels in dataset\", \"review\": \"This paper attempts to train a predictor of whether software is malware. Previous studies have emulated potential malware for a fixed number of executed instructions, which risks both false negatives (haven\\u2019t yet reached the dangerous payload) and false positives (malware signal may be lost amidst too many other operations). This paper proposes using deep reinforcement learning over a limited action space: continue executing a program or halt, combined with an \\u201cevent classifier\\u201d which predicts whether individual parts of the program consist of malware. The inputs at each time step are one of 114 high level \\u201cevents\\u201d which correspond to related API invocations (e.g. multiple functions for creating a file). One limitation seems to be that their dataset is limited only to events considered by a \\\"production malware engine\\\", so their evaluation is limited only to the benefit of early stopping (rather than continuing longer than the baseline malware engine). They evaluate a variety of recurrent neural networks for classifying malware and show that all significantly underperform the \\u201cproduction antimalware engine\\u201d. Integrating the event classifier within an adaptive execution control, trained by DQN, improves significantly over the RNN methods.\\n\\nIt might be my lack of familiarity with the domain but I found this paper very confusing. The labeling procedure (the \\\"production malware engine\\u201d) was left entirely unspecified, making it hard to understand whether it\\u2019s an appropriate ground-truth and also whether the DRL model\\u2019s performance is usable for real-world malware detection. 
\\n\\nAlso, the baseline models used an already fairly complicated architecture (Figure 3) and it would have been useful to see the performance of simple heuristics and simpler models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"DRL for malware control\", \"review\": \"The paper proposes an approach that uses deep reinforcement learning to halt execution in detecting malware attacks. While the approach seems interesting, there are some problems.\\n\\n1. There is no good justification for applying DRL to the problem. The action space contains only continue and halt. Besides, the previous action should have no effect on the result. So I don't think DRL is a good choice.\\n2. Experiments are weak. There is no detailed comparison to other existing works. Only one dataset is used.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"using REINFORCEMENT LEARNING for NEURAL MALWARE CONTROL\", \"review\": \"This paper uses deep reinforcement learning (DRL) for malware detection. It achieves better performance than LSTM- or GRU-based models.\\n\\nDeep reinforcement learning (DRL) has already been used for classification and detection, so I am not sure about the main contribution of this work. The new application of DRL alone does not convince me.\\n\\nAs the dataset is not public, it is difficult to evaluate the performance. As for the comparison models, I think some CNN-based methods should be included. If the task is detection, I think some attention-based methods should also be investigated and compared. LSTM combined with attention has already been well investigated in other classification/detection tasks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
rkgT3jRct7 | Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation | [
"Sang-Woo Lee",
"Tong Gao",
"Sohee Yang",
"Jaejun Yoo",
"Jung-Woo Ha"
] | Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems. AQM benefits from asking a question that would maximize the information gain when it is asked. However, due to its intrinsic nature of explicitly calculating the information gain, AQM has a limitation when the solution space is very large. To address this, we propose AQM+ that can deal with a large-scale problem and ask a question that is more coherent to the current context of the dialog. We evaluate our method on GuessWhich, a challenging task-oriented visual dialog problem, where the number of candidate classes is near 10K. Our experimental results and ablation studies show that AQM+ outperforms the state-of-the-art models by a remarkable margin with a reasonable approximation. In particular, the proposed AQM+ reduces more than 60% of error as the dialog proceeds, while the comparative algorithms diminish the error by less than 6%. Based on our results, we argue that AQM+ is a general task-oriented dialog algorithm that can be applied for non-yes-or-no responses. | [
"questioner",
"mind",
"answerer",
"aqm",
"question",
"information gain",
"dialog",
"error"
] | https://openreview.net/pdf?id=rkgT3jRct7 | https://openreview.net/forum?id=rkgT3jRct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SklgQGfhrE",
"rkepgP_ggN",
"SkxfDTUqyV",
"ByxXr6Uq1N",
"Skxex6UqJV",
"SkxcCCDCAm",
"rke8ETvCA7",
"BJgpibpoR7",
"rkg3kkhtRm",
"Hkx4xHhap7",
"HJxDa4nTa7",
"r1lvHN2ppX",
"H1xzM43T67",
"SkxX14h6T7",
"BJeqFMhapX",
"rJg0MNRKhX",
"rkedtsMYhm",
"Skl_DESX2X"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1550750232254,
1544746740778,
1544346970294,
1544346938879,
1544346855775,
1543565009658,
1543564589593,
1543389604544,
1543253732367,
1542468844103,
1542468799082,
1542468671502,
1542468618146,
1542468571023,
1542468226469,
1541166101654,
1541118847645,
1540736096214
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/Authors"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper755/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Camera Ready Version\", \"comment\": \"We uploaded the camera ready version of our paper. Also, our code is now publically available: https://github.com/naver/aqm-plus\"}",
"{\"metareview\": \"Important problem (visually grounded dialog); incremental (but not in a negative sense of the word) extension of prior work to an important new setting (GuessWhich); well-executed. Paper was reviewed by three experts. Initially there were some concerns but after the author response and reviewer discussion, all three unanimously recommend acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}",
"{\"title\": \"Thanks to Reviewer 2\", \"comment\": \"Thank you for your consideration and interest in our paper!\\nWe also once again thank you for your comments and valuable suggestions for improving the quality of our paper.\"}",
"{\"title\": \"Thanks to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for considering our responses. We hope that our ablation studies may alleviate the concerns you raised.\\n\\nWe would like to emphasize that the various techniques we suggested for scaling up AQM enable our learning paradigm to be extended further, which is worth sharing with the broader community, including researchers in multi-modal grounded learning, task-oriented dialog, multi-agent learning, emergent communication, and other areas.\"}",
"{\"title\": \"Thanks to Reviewer 1\", \"comment\": [\"We would like to thank the reviewer for your consideration and further comments. We are glad to hear that the revised manuscript is much easier to follow.\", \"The main argument on previous RL algorithms in our paper is that RL did not effectively train the agents for the cooperative game of two machine agents in the setting of the previous works. However, our argument in the statement \\u201cthere are some reports\\u2026 looks like human\\u2019s dialog\\u201d is in a somewhat different context from the main argument. The latter argument implies that even if RL could train the two machine agents effectively for the cooperative game, the performance on a machine-human game might decrease. This is because the machine agent\\u2019s distribution is likely to drift far from the human distribution. We will further clarify this argument in the statement and clearly separate the aforementioned two arguments in our next manuscript.\", \"We will add a description in the paper of how AQM\\u2019s explicit calculation of the information gain and posterior has an advantage over RL, which depends on the capacity of the RNN.\", \"We will add a comparison and connection between the mutual information approximation in AQM and comparable neural methods.\"]}",
"{\"title\": \"Updated thoughts based on author responses.\", \"comment\": \"I have updated my review with comments with respect to the author responses to the review. In general, the authors addressed issues regarding clarification concerns. In addition, I have also provided concerns that might make the paper stronger. I still think the contributions are incremental over AQM.\\n\\nGiven the author response, I have also increased my rating of the paper to 6.\"}",
"{\"title\": \"Comments and updates regarding the author response\", \"comment\": [\"Thanks to the authors for providing detailed comments and explanations wherever applicable with respect to the comments provided in the review and updating the paper to reflect the same. In light of the comments below (with respect to the response to the points raised in the review) I am inclined towards increasing my rating for the paper. I will also mention some updates to the revised paper to make the distinction from previous approaches, and the proposed hypotheses behind the improvements relative to the same, more clear, which might make the paper stronger.\", \"Thanks for clearly specifying the differences between discriminative and generative models in the context of AQM+ and clarifying the reasoning and justification behind several design choices made in the paper. In general, given the revised version, I think the paper is much easier to follow.\", \"The section highlighting the distinction between AQM+ and the self-play RL approach to the GuessWhich task should (corresponding to the response mentioned w.r.t. the comment on the earlier version of the statement - \\u201cthere are some reports\\u2026 looks like human\\u2019s dialog\\u201d) explicitly highlight why AQM+ might not suffer the same consequences unlike the RL setting even with intermediate rewards in the latter paradigm.\", \"It makes sense that restricting MI computation to the top-k samples might render the computation tractable (still biased) -- but it still is not Mutual Information but rather seems like a top-k variant of the Maximum Mutual Information (MMI) criterion. I think the authors should explicitly examine this connection and mention this in the paper. 
The reason this distinction seems important is that intractable computations of MI in the literature have been tackled via variational bounds, and since the paper is not doing so -- examining the same and drawing appropriate connections in the paper seems important.\"]}",
"{\"title\": \"Good paper but the approach is slightly incremental\", \"comment\": \"Thanks for the responses, and I will keep my current score (6; higher than the threshold).\\nThe main reason is that the approach is slightly incremental instead of fully original.\"}",
"{\"title\": \"Re:Response\", \"comment\": \"Thanks for these updates and paper revisions. I've updated my review above.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": [\"[B-3] How important is the choice to restrict to |C| classes?\", \"In the case of |A| = |C|, we sample candidate answers by generating a top-1 answer from aprxAgen for each candidate question and each candidate class. However, our submitted manuscript does not explain the case where |A| != |C|. For the ablation study on |C|, we extended our sampling method on candidate answers to cover such a case and explained it in the AQM+ subsection (Subsection 3.3).\", \"We conducted the ablation studies on |Q|, |A|, and |C|. In the ablation study on |C|, we changed |C| but fixed |Q|=|A|=20. |Q| has the most effect, whereas |A| has the least effect. Figure 5 (b-d) describes the experimental result in the ablation study subsection.\", \"[C] No evaluation of Visual Dialog Metrics.\", \"As discussed in the reply to AnonR1, we used the same Abot model as that of Das et al. (2017a), and thus the performance on the Visual Dialog Metrics that the reviewer asked for would be the same as the one reported in Das et al. (2017b). There is no metric suggested for Qbot in the paper of Das et al. (2017a). Please let us know if this does not seem to address the point you made.\", \"[D] No discussion of inference time\", \"AQM+ generates one question within around 3s at K=20, whereas SL generates one question within 0.1s. We used Tesla P40 for our experiments. Though the complexity of our information gain is O(K^3), K does not increase the time required for the whole inference in proportion to the cube of K, when K = 20. It is because calculating the information gain is not the sole resource-intensive part in the whole inference process and we do parallel processing for the calculation of aprxAgen using GPU. We did not fully optimize the inference time of AQM+ yet, and the inference time would further decrease if more parallel processing techniques are applied. 
We added the description on this issue in the discussion section.\", \"[E] Lack of Comparison to Base AQM\", \"The main setting of AQM+ in the paper uses 20x20x20 calculations for information gain. On the other hand, the base AQM requires 20 x infinity x 10000 calculations for information gain, which makes the computation of the base AQM intractable. Even if we have 100 candidate answers as in Visual Dialog (Das et al. (2017a)), the base AQM requires 2500 times as many calculations (20M) as AQM+. We added the description on this issue in the discussion section.\", \"On the other hand, we conducted extensive ablation studies to indirectly compare the base AQM with our AQM+, as explained above.\"], \"minor_things\": \"[Minor1] I don't understand the 2nd claimed contribution from the introduction \\\"At every turn, AQM+ generates a question considering the context of the previous dialog, which is desirable in practice.\\\" Is this claim because the aprxAns module uses history?\\n- In the perspective of ablation study, this sentence that describes the contribution of AQM+ has two meanings. First, it means that Qgen generates candidate questions considering the context of the dialog history at every turn. Its effect is related to the result of the ablation study on gen1Q. Second, it also means that Qinfo uses aprxAgen, which considers the context of the dialog history (Eq. 3). Its effect is related to the result of the history ablation study.\\n \\n\\n[Minor3] The RL-QA qualitative results, are these from non-delta or delta? Is there a difference between the two in terms of interpretability?\\n- The RL-QA qualitative results come from delta setting. There is no difference between the two settings in terms of interpretability. Delta setting is just a configuration of hyperparameters where the difference with non-delta setting is that it uses a different weight on one of the loss functions (the model of Das et al. 
(2017b) optimizes the weighted sum of different loss functions) and a different value for learning rate decay.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": [\"AQM+ generates candidate questions from Qgen and candidate answers from aprxAgen at every turn. This makes AQM+ more efficient and practical for task-oriented dialog systems compared to the base AQM.\", \"[A-1] The original AQM paper explores this exact setting for GuessWhat in Section 5.2 -- generating the top-100 questions from a pretrained RNN question generator via beam search and ranking them based on information gain. From my understand, this aspect of the approach is not novel.\", \"In the recent arXiv manuscript (18/09/21) of Lee et al. (2018), the discussion section includes an experimental setting in which a predefined question set Q_fix consists of the questions generated by an SL-trained seq-to-seq model. Our candidate question generation method in AQM+ can be understood as future work building on the previous paper.\", \"To compare the setting of AQM with that of AQM+, we performed an additional ablation study with AQM+ which uses Q_fix generated from a seq2seq model, the same as in the previous AQM. We refer to this setting as gen1Q.\", \"In the non-delta and indA setting, gen1Q, the ablated AQM+ model, achieved 90.76% with |Q| = 20 at the 10th round, whereas AQM+ achieved 94.64% at the 10th round.\", \"As in the previous AQM paper, we increased |Q| from 20 to 100. However, gen1Q with |Q| = 100 achieved only 92.50%, which is still lower than the PMR of AQM+. Note that this setting even requires five times as many computations to calculate the information gain as the original AQM+. The result is illustrated in Figure 10.\", \"Our result is similar to the result of Lee et al. (2018). 
In their report, using a predefined question set generated from a seq2seq model performs slightly better than using a predefined question set extracted from the training data at the 2nd turn (49.79% -> 51.07%, accuracy in GuessWhat), but performs worse at the 5th turn (72.89% -> 70.74%).\", \"[A-2] I disagree that this is a departure from the AQM approach. In fact, the detailed algorithm explanation in Appendix A of the AQM paper explicitly discusses the possibility of the answer generator being an RNN model.\", \"As you mentioned, the previous AQM paper discussed the possibility of using RNN for inferring the distribution of the answer sentence. However, such an extension of approach would be natural and straightforward only in a case where a pre-defined candidate set of answers exists as in Das et al. (2017a). Otherwise, extending the previous AQM approach would not be trivial.\", \"That said, one of the possible natural extensions from AQM to tackle this problem would be to select candidate answers from the training set, like selecting candidate questions in the previous AQM paper. We performed an ablation study on this setting. Random selection of candidate answers decreases the performance from 94.64% to 92.78% at indA, non-delta, and the 10th round. This is because most of the candidate answers are relevant to neither the candidate questions nor the candidate classes. The result is illustrated in Figure 4 (b).\", \"[B-0] Design decisions are not well justified experimentally.\", \"The primary goal of our research is not to find the optimal design for the GuessWhich task, but to make the AQM framework more generalizable and applicable. We would have tried to optimize the model further with other ideas if it were necessary. However, we agree that it is important to conduct a strong analysis on how each of the modifications in AQM+ contributes to the good performance. 
Therefore, we conducted various ablation studies the reviewer mentioned as explained below.\", \"[B-1] I would have liked to see a comparison to a Q_fix set samples from training.\", \"The ablation study result with a Q_fix randomly extracted from the training data (randQ) showed accuracy degradation (92.79% at indA, non-delta, and the 10th round) compared to the intact AQM+ (94.64%). The result is illustrated in Figure 11.\", \"Regardless of the PMR, questions retrieved in randQ setting seem to be relevant to neither the caption nor the target image. Figure 13 is revised to include dialogs constructed under this setting.\", \"[B-2] How important is dialog history to the aprxAns model?\", \"Dialog history helps to guess the target image but is not critical. Ablating history makes the performance decrease by 0.22% and 0.56% for indA and depA in non-delta, respectively, and 0.46% and 0.21% for indA and depA in delta, respectively. The results are illustrated in Figure 12.\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": [\"Q. In the ablation study, what is the performance of removing Qpost and remaining Qinfo (asking questions using AQM+, and guessing with an SL-trained model)?\", \"Thank you for your suggestion. We added the experimental results in Figure 9. According to the result, it seems that AQM+\\u2019s Qinfo does not improve the performance of SL\\u2019s guesser (Qscore).\", \"For the delta setting, we think that the SL guesser is not able to exploit the information from the answers, because the experimental result on SL shows no significant improvement in PMR throughout the dialog in the delta setting.\", \"For the non-delta case, it seems that the caption, rather than the dialog history, gives the dominant information to SL\\u2019s guesser. Thus, questions which often appear with the caption would provide a clearer signal to SL\\u2019s guesser for predicting the target class. Figure 9 (a) shows that SL-Q performs better than RL-Q in the early phase, but SL-Q\\u2019s performance decreases faster than that of RL-Q in the later phase. We think this is because SL-Q generates questions that are more likely to have co-appeared with the caption than RL-Q does. Likewise, it seems that AQM+\\u2019s questions do not help SL\\u2019s guesser because AQM+ generates questions that are more independent of the caption. We also added a description of this ablation study in the paper.\", \"Q. In the experiments, the baselines do not contain AQM.\", \"Fundamentally, it is intractable for the previous AQM to deal with GuessWhich due to its large search space. To enable comparisons, however, we defined several separate AQM-based baselines by replacing each of the components of AQM+ with that of the previous AQM, and then performed ablation studies using those baselines. With these experiments, we empirically showed how significant the ideas in AQM+ are from the perspective of performance improvement.\"]}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Q. Can the authors also show performances for the GuessWhich models (under the AQM+ framework) on the original retrieval metrics for Visual Dialog mentioned in Das et al. (2017a)?\\n- We used the same Abot model of Das et al. (2017b), which is the same model of Das et al. (2017a). Thus, the performance on the retrieval metrics that the reviewer asked for would be the same as the one reported in Das et al. (2017a). There is no retrieval metric for Qbot in Visual Dialog. As far as we know, PMR is the only available metric for Qbot.\\nPlease let us know if this does not seem to address the point you made.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": [\"We clarified the comments from the reviewer and proofread our paper. Using ablation studies, we empirically showed how much AQM+ contributes algorithmically, compared to the previous AQM. We also replied to each of your concerns as follows.\", \"Q. The authors should clearly make this distinction between discriminative and generative dialog models.\", \"We explained a distinction between discriminative (retrieval) and generative models (token-level generation) in the revision.\", \"We revised a few expressions and rearranged the citations in the first paragraph, making the first paragraph explain only generative models. We also added a third paragraph to describe a distinction between discriminative and generative models and to explain that the previous AQM, which can be understood as a discriminative model, is extended to a generative model in our AQM+ work.\", \"Q. As far as I understand, even in the AQM approach the numerator and the denominator within the logarithm are estimated from a different set of parameters\", \"The numerator and denominator in the previous AQM (and our AQM+) are estimated from the same set of parameters, namely that of the aprxAgen. This is because the denominator (\\tilde\\u2019{p}) comes from the posterior (\\hat{p}) and the numerator (\\tilde{p}) (Eq. 6 in our paper), but the posterior comes from the numerator (Eq. 2 in our paper). Please let us know if this does not seem to address the point you made.\", \"Q. Could the authors comment more on why restricting the space of r.v.\\u2019s to some top-k samples is a good idea? Would that not lead to somewhat of a biased estimator?\", \"We agree that top-k samples could lead our approximation to be biased toward plausible (high-probability) candidate classes and answers. 
However, our main goal is to reduce the entropy over plausible candidate classes and answers, not over the whole candidate classes and answers. For this reason, we think our choice is practical for real task-oriented dialog systems. We added this explanation on the manuscript.\", \"Q. The training paradigm of the indA and the depA setting seems odd and confusing.\", \"We follow the perspective and learning paradigm used in the previous AQM paper.\", \"Following the perspective, we argue that the SL algorithm and the indA setting are similar to each other in that Qbot is trained from the training data. Likewise, the RL algorithm and the depA setting are similar in that Qbot is trained from Abot\\u2019s responses and the reward of the game. Lee et al. (2018) also argued that the objective function of AQM is similar to that of RL, which links the learning paradigm of AQM with RL.\", \"Q. The authors state that \\u201cthere are some reports\\u2026.looks like human\\u2019s dialog\\u201d. Can the authors elaborate on what they mean by this statement?\", \"It is known that if the distribution of Abot is not fixed during RL, Qbot and Abot can make their own language which is not compatible to natural language (Kottur et al. (2017)). To prevent this problem, many studies added the objective function of language model during RL (Zhu et al. (2017); Das et al. (2017b)). However, Chattopadhyay et al. (2017) reported that fine-tuning both Abot and Qbot with the objective function of RL and language model degrades the communication performance of Abot with human, compared to the pre-trained SL model. According to Lee et al. (2018), this problem comes from the phenomenon that Qbot follows the distribution of Abot implicitly in RL learning. They argue that if the distribution of Abot is changed and becomes far from that of human, then the distribution of Qbot also becomes different from human\\u2019s distribution, making the communication performance with human worse. 
We revised some descriptions of the part the reviewer mentioned.\", \"Q. The benchmarking performance of RL over SL in PyTorch implementation is not accurate because of inherent bugs in the implementation of REINFORCE.\", \"Thank you for letting us know. We will take this issue into account in our research.\", \"Nevertheless, we think this issue on bug does not significantly affect our arguments in the paper. We also compared our algorithm with the results of the original paper (non-delta setting), in not only main experiments but also ablation studies (no caption experiment, QAC experiment). We also conducted additional ablation studies mainly on the non-delta setting in the revision. The scores of SL and RL algorithms under non-delta setting in the paper come from the paper of Das et al. (2017b).\"]}",
"{\"title\": \"For All Reviewers\", \"comment\": \"We appreciate all reviewers for constructive feedback and comments for the improvement of the paper.\\n\\nThe main contribution of AQM+ lies in its ability to effectively deal with general and complicated task-oriented dialogs where the space of all possible guesses, questions, and answers cannot be tractably enumerated, and thus the previous AQM model is not applicable. Addressing this issue is critical for practical usages on real-world dialogs.\\nThat said, we agree with the feedback of the reviewers that more comparative studies between AQM and AQM+ would be necessary. Hence, to enable AQM to be computationally tractable for GuessWhich, we defined several separated AQM-based baselines by replacing each of the components of AQM+ with that of the previous AQM and then compared the result with AQM+.\\n\\nAnonReviewer2 (AnonR2) summarized the major departures from the AQM approach claimed in our paper as: \\n [1] The generation of candidate questions through beam search rather than predefined set\\n [2.1] The approximate answerer being an RNN generating free-form language instead of a binary classifier.\\n [2.2] Dropping the assumption that \\\\tilde p(a_t | c, q_t) = \\\\tilde p (a_t | c, q_t, h_{t-1}).\\n [3] Estimate approximate information gain using subsets of the class and answer space corresponding to the beam-search generated question set and their corresponding answers.\\n\\n- For [1], we performed experiments under the setting where Q_fix, a predefined candidate question set, comes from the generated questions of the SL model. The baseline method with this Q_fix showed significant accuracy degradation (90.76% at |Q|=20 and 92.50% at |Q|=100, indA, non-delta, |A|=|C|=20, and the 10th round) compared to AQM+ (94.64% at |Q|=20). It seems that, in this setting, similar candidate questions highly related to the caption are generated. 
It results in making the candidate set of questions semantically overlap, and thus degrades the performance. Note that this setting performed even worse than Guesser baseline (92.85%). The result is illustrated in Figure 10.\\n- For [1], similar to the above ablation study, we also performed experiments under the setting where Q_fix comes from the training data. The baseline method with this Q_fix showed accuracy degradation (92.79% at indA, non-delta, and the 10th round) compared to AQM+ (94.64%). The result is illustrated in Figure 11. It is noticeable that this setting retrieves questions which are not relevant to the caption nor the target image as can be seen in Figure 13.\\n- For [2.1] and [3], we conducted experiments under the setting where the candidate answers are randomly selected from the training data and then fixed. The performance was decreased (from 94.64% to 92.78% at indA, non-delta, and the 10th round). The result is illustrated in Figure 4 (b).\\n- For [2.2], we carried out history ablation experiments where the model does not consider the dialog history. The history ablation slightly decreased the performance (from 94.64% to 94.42% at indA, non-delta, and the 10th round). The result is illustrated in Figure 12.\\n- For further analysis of AQM+, we investigated how much each of |Q|, |A|, and |C| affects the performance. Decreasing |C| from 20 to 5 decreased the performance from 94.64% to 94.36% at indA, non-delta, |A|=|Q|=20, and the 10th round. We also conducted similar experiments for |Q| and |A|. |Q| affects PMR more, whereas |A| affects PMR relatively less. The results are illustrated in Figure 5.\\n- As requested by AnonR3, we conducted experiments under the setting where AQM+\\u2019s Qpost and SL\\u2019s Qscore are used as the question-generator and the guesser of the model, respectively. AQM+\\u2019s Qpost did not increase the performance of SL\\u2019s Qscore. 
It seems that the caption, rather than the dialog history, gives the dominant information to SL\\u2019s guesser. The result is illustrated in Figure 9.\\n\\nWe proofread the paper and added terminology tables to improve readability. Regarding the issue raised by AnonR1, we revised some descriptions in the introduction section to distinguish between discriminative and generative dialog systems. We also added explanations for the concerns the reviewers raised. Additional improvements to the writing quality and proofreading of the revisions made during the rebuttal period will follow by 23 Nov.\"}",
"{\"title\": \"Contributions seem incremental and concerns regarding the formulated approach\", \"review\": \"The paper proposes an improvement over the AQM approach for an information-theoretic framework for task-oriented dialog systems. Specifically, the paper tries to circumvent the problem of explicitly calculating the information gain while asking a question in the AQM setting. While the original AQM approach sweeps over all possible guesses and answers while estimating information gain, this is rendered impractical in scenarios where this space cannot be tractably enumerated. As a solution, AQM+ proposes sweeping over only some top-k relevant instantiations of answers and guesses in this space by normalizing the probabilities of the subset of the space in consideration. In addition, unlike AQM, AQM+ can ask questions which are relevant to the dialog context so far. Consequently, this is generalizable and applicable for dialog systems with non \\u2018yes/no\\u2019 answers. Empirical observations demonstrate improvements over the existing approaches for such task-oriented dialog systems. The paper is not very well-written and at times is hard to understand. The contributions seem incremental as well in addition to the concerns mentioned below.\", \"comments\": [\"The paper is overloaded with notation and the writing is not very smooth. The terse nature of the content makes it hard to follow in general. If someone were not a priori familiar with task-oriented dialog or the visual dialog setting in Das et al. (2017b), it would be quite hard to follow.\", \"While mentioning SL/RL approaches while comparing or introducing the setup, the authors do not make any distinction between discriminative and generative dialog models. Specifically, SL approaches could either be trained discriminatively to rank options among the provided ones given dialog context or in a generative manner via token-level teacher forcing. 
The authors should clearly make this distinction in the introduction and in other places where it\\u2019s needed.\", \"The authors should stress more upon the approximations involved while calculating mutual information. As far as I understand, even in the AQM approach the numerator and the denominator within the logarithm are estimated from a different set of parameters and as such they need not be consistent with each other under marginalization. The term resembles MI and ensuring consistency in such a framework would require either of the numerator or the denominator to be close to something like a variational approximation of the true distribution. In addition, AQM+ adopts the same framework as AQM but computes MI over some top-k of the random variables being considered. Could the authors comment more on why restricting the space of r.v.\\u2019s to some top-k samples is a good idea? Would that not lead to somewhat of a biased estimator?\", \"Unless I am missing something, training aprxAgen from the training data (indA) seems odd. Assuming, this to be Qbot\\u2019s mental model of Abot -- there is no prior reason why this should be initialized or trained in such a manner. Similarly, the training paradigm of the depA setting is confusing. If they are trained in a manner similar to a regular Abot -- either SL or RL -- then they\\u2019re not approximate mental models but are rather just another Abot agent in play which is being queried by Qbot.\", \"Under Comparative Models, in paragraph 2 of section 4.1, the authors state that \\u201cthere are some reports\\u2026.looks like human\\u2019s dialog\\u201d. Can the authors elaborate on what they mean by this statement? It\\u2019s not clear what the message to be conveyed here is.\", \"Comparisons in GuessWhich highly rely on the PyTorch implementation in the mentioned github repository. 
However, the benchmarking performed in that repository for RL over SL is not accurate because of inherent bugs in the implementation of REINFORCE (see https://github.com/batra-mlp-lab/visdial-rl/issues/13 and https://github.com/batra-mlp-lab/visdial-rl/pull/12 ). I would suggest the authors to take this into account.\", \"Can the authors also show performances for the GuessWhich models (under the AQM+ framework) on the original retrieval metrics for Visual Dialog mentioned in Das et al. (2017a)? This would be useful to judge the robustness of the proposed approach over the methods being compared with.\", \"Updated Thoughts\", \"The authors adressed the issues raised/comments made in the review. In light of my comments below to the author responses -- I am inclined towards increasing my rating.\", \"In addition, I have mentioned some updates in the comments which might make the paper stronger -- centered around clarifications regarding the computation of the top-k info-gain term.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper addresses the important limitation of the prior work and improves the generalization of the model.\", \"review\": \"The goal of this paper is to build a task-oriented dialogue generation system that can continuously generate questions and make a guess about the selected object.\\n\\nThis paper builds on top of the previously proposed AQM algorithm and focuses on addressing the limitation of the AQM algorithm, which chooses the question that maximizes mutual information of the class and the current answer, but uses fixed sets of candidate questions/answers/classes.\\nThe proposed AQM+, the extension of AQM, is to deal with 1) the natural language questions / answers using RNN as the generator instead of selecting from the candidate pool (RNN as generator) and 2) a large set of candidate classes (from 10 to 9628). \\nThe novelty is relatively limited, considering that the model is revised from AQM.\\nAlthough this work is incremental, this paper addresses the important issue of generalization.\\n\\nThe experiments show that the model achieves good performance in the experiments.\\nHowever, some questions should be clarified.\\n\\n1) In the ablation study, what is the performance of removing Qpost and retaining Qinfo (asking questions using AQM+, and guessing with an SL-trained model)?\\n\\n2) In the experiments, the baselines do not contain AQM. \\nAlthough AQM has more constraints, it is necessary to see the performance difference between AQM and AQM+. 
\\nIf the difference is not significant, it means that this dataset cannot test the generalization capability of the model, so experiments on other datasets may be considered.\\nIf the difference is significant, then the effectiveness of the model is well justified.\\nThe authors should include the comparison in the experiments; otherwise, it is difficult to justify whether the proposed model is useful.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review\", \"review\": \"=================\\nUpdated Thoughts\\n=================\\n\\nI was primarily concerned about a lack of analysis regarding the technical contributions moving from AQM to AQM+. The revisions and author comments here have addressed the specific experiments I've asked for and more generally clarified the contributions made as part of AQM+. I've increased my rating to reflect my increased confidence in this paper. Overall, I think this is a good paper and will be interesting to the community.\\n\\nI also thank the authors for their substantial efforts to revise the paper and address these concerns.\\n\\n\\n===========\", \"strengths\": \"===========\\n\\nThe approach is a sensible application of AQM to the GuessWhich setting and results in significant improvements over existing approaches both in terms of quantitative results and qualitative examples. \\n\\n===========\", \"concerns\": \"===========\\n\\n[A] Technical Novelty is Limited Compared to AQM \\nThe major departures from the AQM approach claimed in the paper (Section 3.3) are:\\n\\t[1] the generation of candidate questions through beam search rather than predefined set \\n\\t[2.1] The approximate answerer being an RNN generating free-form language instead of a binary classifier. \\n\\t[2.2] Dropping the assumption that \\\\tilde p(a_t | c, q_t) = \\\\tilde p (a_t | c, q_t, h_{t-1}). \\n\\t[3] Estimate approximate information gain using subsets of the class and answer space corresponding to the beam-search generated question set and their corresponding answers.\", \"i_have_some_concerns_about_these\": \"For [1], the original AQM paper explores this exact setting for GuessWhat in Section 5.2 -- generating the top-100 questions from a pretrained RNN question generator via beam search and ranking them based on information gain. 
From my understanding, this aspect of the approach is not novel.\\n\\nFor [2.1] I disagree that this is a departure from the AQM approach, instead simply an artifact of the experimental setting. The original AQM paper was based in the GuessWhat game in which the answerer could only reply with yes/no/na; however, the method itself is agnostic of this choice. In fact, the detailed algorithm explanation in Appendix A of the AQM paper explicitly discusses the possibility of the answer generator being an RNN model. \\n\\nGenerally, the modifications to AQM largely seem like necessary, straight-forward adjustments to the problem setting of GuessWhich and not algorithmic advances. That said, the changes make sense and do adapt the method to this more complex setting where it performs quite well!\\n\\n\\n[B] Design decisions are not well justified experimentally\\nGiven that the proposed changes seem rather minor, it would be good to see strong analysis of their effect. Looking back at the claimed difference from AQM, there appear to be a few ablations missing:\\n- How useful is generating questions? I would have liked to see a comparison to a Q_fix set sampled from training. (This corresponds to difference [1] above.)\\n- How important is dialog history to the aprxAns model? (This corresponds to difference [2.2] above).\\n- How important is the choice to restrict to |C| classes? Figure 4b begins to study this question but conflates the experiment by simultaneously increasing |Q| and |A|. (This corresponds to difference [3] above.)\\n\\n[C] No evaluation of Visual Dialog metrics\\nIt would be useful to the community to see if this marked improvement in GuessWhich performance also results in improved ability to predict human response to novel dialogs. I (and I imagine many others) would like to see evaluation on the standard Visual Dialog test metrics. 
If this introspective inference process improves these metrics, it would significantly strengthen the paper!\\n\\n[D] No discussion of inference time\\nIt would be useful to include discussion of relative inference time. The AQM framework requires substantially more computation than a non-introspective model. Could the authors report this relative increase in inference cost (say at K=20)? \\n\\n\\n[E] Lack of Comparison to Base AQM\\nI would expect explicit comparison to AQM for a model named AQM+ or a discussion on why this is not possible.\\n\\n\\n===========\", \"minor_things\": [\"===========\", \"I don't understand the 2nd claimed contribution from the introduction \\\"At every turn, AQM+ generates a question considering the context of the previous dialog, which is desirable in practice.\\\" Is this claim because the aprxAns module uses history?\", \"Review versions of papers often lack polished writing. I encourage the authors to review their manuscript for future versions with an eye for clarity of terminology, even if it means a departure from established notation in prior work.\", \"The RL-QA qualitative results, are these from non-delta or delta? Is there a difference between the two in terms of interpretability?\", \"===========\"], \"overview\": \"===========\\n\\nThe modifications made to adapt AQM to the GuessWhich setting presented here as AQM+ seem to be somewhat minor technical contributions. Further, where these differences could be explored in greater detail, there is a lack of analysis. That said, the proposed approach does make significant qualitative and quantitative improvements in the target problem. I'm fairly on the fence for this paper and look forward to seeing additional analysis and the opinions of other reviewers.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HJe62s09tX | Unsupervised Hyper-alignment for Multilingual Word Embeddings | [
"Jean Alaux",
"Edouard Grave",
"Marco Cuturi",
"Armand Joulin"
] | We consider the problem of aligning continuous word representations, learned in multiple languages, to a common space. It was recently shown that, in the case of two languages, it is possible to learn such a mapping without supervision. This paper extends this line of work to the problem of aligning multiple languages to a common space. A solution is to independently map all languages to a pivot language. Unfortunately, this degrades the quality of indirect word translation. We thus propose a novel formulation that ensures composable mappings, leading to better alignments. We evaluate our method by jointly aligning word vectors in eleven languages, showing consistent improvement with indirect mappings while maintaining competitive performance on direct word translation. | [
"problem",
"multiple languages",
"common space",
"languages",
"unsupervised",
"multilingual word embeddings",
"multilingual word",
"continuous word representations",
"case",
"possible"
] | https://openreview.net/pdf?id=HJe62s09tX | https://openreview.net/forum?id=HJe62s09tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1goFVggx4",
"ryllvPPg0X",
"HketImInnX",
"Bke2ofhw37",
"SyesiPQrhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544713347493,
1542645591772,
1541329744986,
1541026467799,
1540859810586
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper754/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper754/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper754/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper754/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper754/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper provides a simple and intuitive method for learning multilingual word embeddings that makes it possible to softly encourage the model to align the spaces of non-English language pairs. The results are better than learning just pairwise embeddings with English.\\n\\nThe main remaining concern (in my mind) after the author response is that the method is less accurate empirically than Chen and Cardie (2018). I think however that given that these two works are largely contemporaneous, the methods are appreciably different, and the proposed method also has advantages with respect to speed, that the paper here is still a reasonable candidate for acceptance at ICLR.\\n\\nHowever, I would like to request that in the final version the authors feature Chen and Cardie (2018) more prominently in the introduction and discuss the theoretical and empirical differences between the two methods. This will make sure that readers get the full picture of the two works and understand their relative differences and advantages/disadvantages.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Simple and effective method, accuracy worse but speed better than contemporaneous work.\"}",
"{\"title\": \"Reviewers: Please take a look at the author response\", \"comment\": \"Dear reviewers: could you please take a look at the author response? I think it is comprehensive, and could very well address some of the concerns expressed in the original reviews. I'd appreciate any additional feedback or discussion, which would help write the final review of the paper.\"}",
"{\"title\": \"Need some more clarifications on the experiments\", \"review\": [\"The authors present a method for unsupervised alignment of words across multiple languages. In particular, they extend an existing unsupervised bilingual alignment to the case of multiple languages by adding constraints to the optimization problem. The main aim is to ensure that the embeddings can now be composed and the performance (alignment quality) does not degrade across multiple compositions.\", \"Strengths\", \"Very clearly written\", \"A nice overview of existing methods and correct positioning of the author's contributions in the context of these works\", \"A good experimental setup involving multiple languages\", \"Weaknesses\", \"I am not sure how to interpret the results in Table 2 and Table 3 (see questions below).\", \"Questions\", \"On page 7 you have mentioned that \\\"this setting is unfair to the MST baseline, since ....\\\" Can you please elaborate on this? I am not sure I understand this correctly.\", \"Regarding results in Table 2 and 3: It seems that there is a trade-off while adding constraints which results in poor bilingual translation quality. I am not sure if this is acceptable. I understand that your goal is to do indirect translation but does that mean we should ignore direct translation?\", \"In Table 3 can you report both W-Proc and W-Proc* results? Is it possible that the GW-initialization helps bilingual translation as the performance of W-Proc* is clearly better than W-Proc in Table 2. However, could it be the case that this somehow affects the performance in the indirect translation case? IMO, this is worth confirming.\", \"In Table 3, you are reporting average accuracies across and within families. I would like to see the numbers for all language pairs independently. This is important because when you consider the average it is quite likely that for some language pair the numbers were much higher which tilts the average in favor of some approach. 
Also looking at the individual numbers will help us get some insights into the behavior across language pairs.\", \"In the motivation (Figure 1) it was mentioned that compositions can be done (and are often desirable) along longer paths (En-Fr-Ru-It). However, in the final experiments the composition is only along a triplet (X-En-Y). Is that correct or did I misinterpret the results? If so, can you report the results when the number of compositions increases?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting paper that needs more work, more examples and more motivation.\", \"review\": \"This paper is concerned with the idea of inducing multilingual word embeddings (i.e., word vector spaces where words from more than two languages are represented) in an unsupervised way using a mapping-based approach. The main novelty of the work is a method, inspired by recent work of Nakashole and Flauger, and building on the unsupervised bilingual framework of Grave et al., which aims at bypassing the straightforward idea of independently mapping N-1 vector spaces to the N-th pivot space by adding constraints to ensure that the learned mappings can be composed (btw., it is not clear from the abstract what this means exactly).\\n\\nIn summary, this is an interesting paper, but my impression is that it needs more work to distinguish itself from prior work and stress the contribution more clearly. \\n \\nAlthough 11 languages are used in evaluation, the authors still limit the evaluation only to (arguably) very similar languages (all languages are Indo-European and there are no outliers, distant languages or languages from other families at all, not even the usual suspects like Finnish and Hungarian). Given the observed instability of GAN-based unsupervised bilingual embedding learning, dissected in Sogaard et al.'s paper (ACL 2018) and also touched upon in the work of Artetxe et al. (ACL 2018), one of the critical questions for this work should also be: is the proposed method stable? What are the (in)stability criteria? When does the method fail and can it lead to sub-optimal solutions? What is the decrease in performance when moving to a more distant language like Finnish, Hungarian, or Turkish? Is the method more robust than GAN-based models? 
All this has to be at least discussed in the paper.\", \"another_question_is\": \"do we really want to go 'fully unsupervised' given that even a light and cheap source of supervision (e.g., shared numerals, cognates) can already result in more robust solutions? See the work of Artetxe et al. (ACL 2017, ACL 2018), Vulic and Korhonen (ACL 2016) or Sogaard et al. (ACL 2018) for some analyses on how the amount of bilingual supervision can yield more (or less) robust models? Is the proposed framework also applicable in weakly-supervised settings? Can such settings with weak supervision guarantee increased robustness (and maybe even better performance)? I have to be convinced more strongly: why do we need fully unsupervised multilingual models, especially when evaluation is conducted only with resource-rich languages?\", \"another_straightforward_question_is\": \"can the proposed framework handle cases where there exists supervision for some language pairs while other pairs lack supervision? How would the proposed framework adapt to such scenarios? This might be an interesting point to discuss further in Section 5.\", \"style_and_terminology\": \"it is not immediately clear what is meant by (triplet) constraints (which is one of the central terms in the whole work). It is also not immediately clear what is meant by composed mappings, hyper-alignment (before Section 4), etc. There is also some confusion regarding the term alignment as it can define mappings between monolingual word embedding spaces as well as word-level links/alignments. Perhaps, using mapping instead of alignment might make the description more clear. In either case, I suggest to clearly define the key concepts for the paper. Also, the paper would contribute immensely from some running examples illustrating the main ideas (and maybe an illustrative figure similar to the ones presented in, e.g., Conneau et al.'s work or Lample et al.'s work). 
The paper concerns word translation and cross-lingual word embeddings, and there isn't a single example that serves to clarify the main intuition and lead the reader through the paper. The paper is perhaps too much focused on the technical execution of the idea to my own liking, forgetting to motivate the bigger picture.\", \"other\": \"the part on \\\"Language tree\\\" prior to \\\"Conclusion\\\" is not useful at all and does not contribute to the overall discussion. This could be safely removed and the space in the paper should be used for additional comparisons with more baselines (see above for some baselines).\\n\\nThe authors mention that their approach is \\\"relatively hard to scale\\\" only in their conclusion, while algorithmic complexity remains one of the key questions related to this work. I would like to see some quantitative (time) measurements related to the scaling problem, and a more thorough explanation of why the method is hard to scale. The complexity and non-scalability of the method was one of my main concerns while reading the paper and I am puzzled to see some remarks on this aspect only at the very end of the paper. Going back to algorithmic complexity, I think that this is a very important aspect of the method to discuss explicitly. The authors should provide, e.g., O-notation complexity for the three variant models from Figure 2 and help the reader understand pros and cons of each design also when it comes to their design complexity. Is the only reason to move from the star model to the HUG model computational complexity? This argument has to be stressed more strongly in the paper.\\n\\nTwo very relevant papers have not been cited nor compared against. The work of Artetxe et al. (ACL 2018) is an unsupervised bilingual word embedding model similar to the MUSE model of Conneau et al. (ICLR 2018) which seems more robust when applied on distant languages. 
Again, going back to my previous comment, I would like to see how well HUG fares in such more challenging settings. Further, a recent work of Chen and Cardie (EMNLP 2018) is a multilingual extension of the bilingual GAN-based model of Conneau et al. Given that the main goal of this work and Chen and Cardie's work is the same: obtaining multilingual word embeddings, I wonder how the two approaches compare to each other. Another, more general comment concerns the actual evaluation task: as in prior work, it seems that the authors optimise and evaluate their embeddings solely on the (intrinsic) word translation task, but if the main goal of this research is to boost downstream tasks in low-resource languages, I would expect additional evaluation tasks beyond word translation to make the paper more complete and convincing.\\n\\nThe method relies on a wide spectrum of hyper-parameters. How are these hyper-parameters set? How sensitive is the method to different hparams configurations? For instance, why is the Gromov-Wasserstein approach applied only to the first 2k vectors? How are the learning rate and the batch size determined?\", \"minor\": \"What is W in line 5 of Algorithm 1?\\nGiven the large number of symbols used in the paper, maybe a table of symbols put somewhere at the beginning of the paper would make the paper easier and more pleasant to read.\", \"i_would_also_compare_the_work_to_another_relevant_supervised_baseline\": \"the work from Smith et al. (ICLR 2017). This comparison might further strengthen the main claim of the paper that indirect translations can also be found without degrading performance in multilingual embedding spaces.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good work for multilingual embedding alignment\", \"review\": \"This is a work regarding the alignment of word embeddings for multiple languages. Though there are existing works similar to this one, most of them are only considering a pair of two languages, resulting in the composition issue mentioned in this work. The authors proposed a way of using a regularization term to reduce such degraded accuracy and demonstrate the validity of the proposed algorithm via experiments. I find the work to be interesting and well written. Several points that I want to bring up:\\n\\n1. The language tree at the end of section 5 is very interesting. Does it change if the initialization/parameter is different?\\n\\n2. The matrix P in (1) is simply a standard permutation matrix. I think the definitions are redundant.\\n\\n3. The experiment results are expected since the algorithms are designed for better composition quality. An additional experiment, e.g. classification of instances in multiple languages, could further help demonstrate the strength of the proposed technique.\\n\\n4. How to choose the regularization parameter \\\\mu and what's the effect of \\\\mu?\\n\\n5. Some writing issues: for the notation of the orthogonal matrix set, both \\\\mathcal{O} and \\\\mathbb{O} are used.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SJx63jRqFm | Diversity is All You Need: Learning Skills without a Reward Function | [
"Benjamin Eysenbach",
"Abhishek Gupta",
"Julian Ibarz",
"Sergey Levine"
] | Intelligent creatures can explore their environments and learn useful skills without supervision.
In this paper, we propose ``Diversity is All You Need'' (DIAYN), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning. | [
"reinforcement learning",
"unsupervised learning",
"skill discovery"
] | https://openreview.net/pdf?id=SJx63jRqFm | https://openreview.net/forum?id=SJx63jRqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyemovEZlV",
"SkgRuY_lCQ",
"r1xu8tdlR7",
"rJxPVKde0X",
"BkhzbQF1a7",
"HJeAJln337",
"HJxA5jvqnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544796058897,
1542650230496,
1542650192059,
1542650158796,
1541538554078,
1541353446009,
1541204886065
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper752/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper752/Authors"
],
[
"ICLR.cc/2019/Conference/Paper752/Authors"
],
[
"ICLR.cc/2019/Conference/Paper752/Authors"
],
[
"ICLR.cc/2019/Conference/Paper752/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper752/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper752/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"There is consensus among the reviewers that this is a good paper. It is a bit incremental compared to Gregor et al 2016. This paper shows considerably better empirical results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting and well-executed paper\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks for all the feedback!\\n\\nLearning p(z): We would like to emphasize that prior work (VIC [Gregor et al]) also requires the user to choose the number of skills. Choosing this parameter is analogous to choosing K in K-means. While we can propose various heuristics, the right choice ultimately depends on the problem at hand. Empirically, we found that a 50-dimensional categorical distribution worked well in all environments we tested. Your question hints at a deeper question: can we recursively or hierarchically compose skills to learn behaviors of increasing complexity? While this is beyond the scope of this work, we think it is an exciting direction for future research, and encourage work in this direction.\\n\\n\\\"Useless skills\\\": We agree that without explicitly biasing our skills towards certain types of behaviors, we are likely to end up with many skills that are not useful for a particular given task. We presented one way to bias skill discovery in Section 4.2.2 (Question 7), and showed experimentally that it enabled better performance on hierarchical tasks (Figure 8 (right), purple line). We should emphasize that including a task-specific bias may cause poor generalization across tasks.\", \"lfd_and_hrl_baselines\": \"We will run additional experiments comparing to imitation learning and hierarchical RL baselines in the final version.\\n\\nFor Figure 8, we plotted over time instead of iterations because TRPO and SAC have very different costs per iteration (TRPO is substantially more costly). We will include a plot over time in the final paper, but caution that TRPO will be run for many fewer iterations than SAC. We'll also fix the typos and make the clarifications you've suggested.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks for your helpful comments!\", \"dimension_of_z\": \"The dimension of the skill indicator, z, is a hyperparameter. If the dimension is too small, then you cannot learn many skills. If the dimension is too large, many skills are not distinguishable. While we found that a 50-dimensional categorical distribution worked well for all environments we tested, the right choice ultimately depends on the problem at hand.\\n\\nLearning p(z): To answer your questions about learning large numbers of skills, we repeated the experiment in Figure 4, varying the number of skills from 10 to 300. Results are now included in Appendix E, Figure 18. We found that increasing the number of skills does not solve the \\\"rich get richer\\\" problem. Even when learning 300 skills, the effective number of distinguishable skills quickly drops to be less than 10.\\n\\nWhy not condition on actions? We agree that it would be \\\"easier\\\" to maximize our mutual information objective if we conditioned the discriminator on both states and actions, rather than just states. In fact, the data processing inequality tells us this: the mutual information between (state, action) pairs and latents I(S, A; Z) is an upper bound for the mutual information between states and latents I(S; Z). However, the ease of taking discriminable actions is precisely the problem with this objective. In preliminary experiments, we found that conditioning the discriminator on both states and actions substantially increased the number of skills that were distinguishable in action space but not state space, effectively maximizing I(A; Z). This result is problematic because, for skills to be useful for downstream tasks, they must consistently change the state.\\n\\nWe have fixed the typos and reworded the method section to be more accessible to readers unfamiliar with SAC.\"}",
"{\"title\": \"Author response\", \"comment\": \"Thanks for your comments! We are currently running the requested comparison to VIC in the hierarchical RL experiments, and will update the paper once completed. We have also corrected the typo that you mentioned.\"}",
"{\"title\": \"Clearly-written, well-demonstrated learning and application of unsupervised skills via information-theoretic / entropic methods\", \"review\": \"The authors propose a learning scheme for the unsupervised acquisition of skills. These skills are then applied to (1) accelerate reinforcement learning to maximize a reward, (2) perform hierarchical RL, and (3) imitate an expert trajectory.\\n\\nThe unsupervised learning of skills maximizes an information theoretic objective function. The authors condition their policy on latent variable Z, and the term skill refers to a policy conditioned on a fixed Z. The mutual information between states and skills is maximized to ensure that skills control the states, while the mutual information between actions and skills given the state is minimized, to ensure that states, not actions, distinguish skills. The entropy of the mixture of policies is also maximized. Further manipulations on this objective function enable the scheme to be implemented using a soft actor-critic maximizing a pseudo reward involving a learned skill discriminator.\\n\\nThe authors clearly position their work in relation to others, and especially point out the differences to the most similar work, namely Gregor et al 2016. These differences, while seemingly minor, end up providing exceptional improvement in the number of skills learned and the domains tackled.\\n\\nThe question-answer style is somewhat unconventional. While the content comes across clearly, the flow / narrative is a bit broken.\\n\\nOverall, I believe that the applicability of the work is very wide, touching inverse RL, hierarchical RL, imitation learning, and more. The simulation comparisons are also very useful.\\n\\nHowever, there is an issue that I'd like to see addressed:\", \"fig_8\": \"In a high-dimensional task, namely 111D ant navigation, DIAYN performs slightly worse than others. Incorporating a prior on useful skills makes DIAYN perform much better. 
Here, apart from the comparison with other state of the art RL methods, the authors should also compare to VIC. Indeed one of the key differences to VIC was the uniform prior on skills, which the authors now break albeit in a slightly different way. Thus, it is essential to also show the performance of VIC, and comment on any similarities / differences. The relation of this prior to the VIC prior should also be made clear. Further, the performance of VIC on the half cheetah hurdle should also be shown.\\n\\nIf the above issue is addressed, I strongly recommend that the work be presented at ICLR.\\n\\nMinor issues / typos:\", \"pg_1\": \"\\\"policy that alters that state of the environment\\\" to \\\"policy that alters the state of the environment\\\"\", \"pg_3\": \"\\\"mutual information between skills and states, I(A; Z)\\\" to \\\"mutual information between skills and states, I(S; Z)\\\"\", \"pg_4\": \"\\\" soft actor critic\\\" to \\\" soft actor critic (SAC)\\\" since SAC is used later.\", \"pg_5\": \"full form of VIME not introduced\", \"fig_5\": \"would be good to also show the variance as a shaded area around the mean.\", \"pg_7\": \"\\\"whereas DIAYN explicitly skills that effectively partition the state space\\\" ?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review - Diversity is All You Need: Learning Skills without a Reward Function\", \"review\": [\"*Pros:*\", \"Mostly clear and well-written paper\", \"Technical contribution (learning many diverse skills) is principled and well-justified (although the basic idea builds on a large body of prior work from related fields as also evidenced from the reference list)\", \"Extensive empirical evaluation on multiple tasks and different scenarios (mainly toy examples) showing promising results.\", \"*Cons:*\", \"The main paper assumes detailed knowledge of the actor critic setup to fully follow and appreciate the paper (a few details provided in the appendix)\", \"p(z): it is not entirely clear to me how the dimensionality of z should be chosen in a principled manner aside from brute force evaluation (as in 4.2.2; which does not go beyond a few hundred). What happens for many skills and would learning p(z) be preferable in this scenario?\", \"Note: The work has been in the public domain for some time thereby limiting the apparent novelty. This has not influenced my decision as per ICLR policy.\", \"*Significance*: I think this work would be of interest to the ICLR crowd despite it having been in the public domain for some time. It provides a simple objective for training RL models in an unsupervised manner by learning multiple diverse skills and contributes an extensive and convincing empirical evaluation which will surely have a lasting impact in the RL subfield.\", \"*Further comments/questions*:\", \"The authors assume that only states and not actions are observable. Intuitively, it would seem easier to obtain the desired results if the actions are also available. Could the authors perhaps clarify why it is reasonable to assume that the actions are not observable to the planner when evaluating the objective in Eq 1? 
Similarly, I\\u2019d like some insight into the behaviour of the proposed method if actions are also available (and how it differs from prior art in this case)?\", \"I\\u2019d suggest enforcing consistency in the way variation across random seeds is visualised in the figures (e.g. traces in fig 4, no indication in fig 5, shaded background in fig 6).\", \"I\\u2019d suggest making it explicit what $\\\\theta$ refers to in Eq. 1 (and provide some details about the SAC setup for completeness, as previously mentioned)\", \"Minor typos etc: {p2, l6} missing word, \\u201cSAC\\u201d never defined, \\u201cDOF\\u201d never defined, and a few other typos/punctuation issues throughout.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Strong paper with interesting contributions, but with slightly over-stated claims\", \"review\": \"This paper proposes a method for learning skills in the absence of a reward function. These skills are learned so that the diversity of the trajectories produced by each skill is maximised. This is achieved by having a discriminator attempting to tell these skills apart. The agent is rewarded for visiting states that are easy to distinguish and the discriminator is trained to better infer the skills from states visited by the agent. Furthermore, a maximum entropy policy is used to force the skills to be diverse. The proposed method is general and any RL algorithm with entropy maximisation in the objective can be used; the implementation in the paper uses the Soft Actor Critic method.\\n\\nThe problem that they are tackling is interesting and is of clear value for obtaining more generalisable RL algorithms. The paper is overall clear and easy to follow, the results are interesting and potentially useful, although I have some reservations regarding how they assess this usefulness in the current version of this paper.\\nStructure-wise, I would say that the choice of writing the paper in the form of a Q&A, with very brief explanations and details, was more distracting, and at times unnecessary, than I would have liked (e.g. Question 7 could move to Appendix as it is quite trivial).\\n\\nI really appreciated how much care has been taken to discuss differences with the closest prior work, Variational Intrinsic Control (VIC) by Gregor et al. \\nOne such difference is that their prior distribution over skills is not learnt. While there are good arguments by the authors about why this is appealing (e.g. it prevents collapsing to sampling only a few skills), I feel this could also be quite a limitation of their method. This assumes that you have good a-priori knowledge and assumptions regarding how many skills are useful or needed in the environment. 
This is unlikely to be the case in complex environments, where you first need to learn simple skills in order to explore the environment, and later learn to form new more complex skills. During this process, you might want to prune simplistic skills after you learnt more abstract and complex ones, for instance in the context of continual learning. I understand this could be investigated in future work, but I feel the authors take a rather optimistic view of this problem.\\n\\nOverall, the use case for the proposed method is slightly unclear to me. While the paper claims to allow a diverse set of skills to be learnt, it is highly dependent on learning varied action sequences that help you visit different parts of the state space, regardless of their usefulness. This means the method could learn a lot of skills that capture parts of the state space that are not useful or desirable for downstream tasks. While there is a case made for DIAYN being a stepping stone for imitation learning and hierarchical RL, I don\\u2019t find the reported experiments for imitation learning and HRL convincing. In the imitation learning experiment, the distance (KL divergence) between all skills and the expert data is computed and the closest skill is then chosen as the policy imitating the expert. The results are weak and no comparisons with any LfD baselines are reported. The HRL experiments also lack comparisons to any other HRL baseline. I feel that this section is rather weak, especially compared to the rest of the paper, and I am not sure it achieves much.\\n\\nAs a general comment, the choice of reporting the training progress using \\u201chours spent training\\u201d is a peculiar choice which is never discussed. 
I understand that for methods with varying computational costs this might be a fairer comparison but it would perhaps be good to also report progress against the number of required environment interactions (including pre-training).\\nAnother assumption made is that the method is valuable in situations where the reward function is expensive to compute and the unsupervised pre-training is free (somewhat easing the large amount of pre-training required). However, it would have been interesting to see examples of such environments in their experiments supporting these claims, as this assumption is not valid for the chosen MuJoCo environments.\\n\\nDespite these comments, I still feel this is valuable work that can clearly inspire further relevant work and deserves to be presented at ICLR.\\nIt presents a solid contribution, given its technical novelty, proposed applications and its overall generality. \\nHowever, the paper could use more convincing experiments to support its claims.\", \"additional_comments_and_typos\": [\"Figure 5 lacks error bars across the 5 random seeds, which are crucial to assess whether this performance difference is indeed significant given the amount of pre-training required.\", \"Figure 7\\u2019s title and caption are missing...\", \"typo: page 3, last paragraph \\u201c...mutual information between skills and states, **I(S; Z )**\\u201d not I(A; Z)\", \"typo: page 7 paragraph next to Figure 6 \\u201c...whereas DIAYN explicitly **learns** skills that effectively partition the state space\\u201d\", \"typo: page 7 above Figure 8 \\u201c...make them **exceedingly** difficult for non- hierarchical RL algorithms.\\u201d\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rygp3iRcF7 | Area Attention | [
"Yang Li",
"Lukasz Kaiser",
"Samy Bengio",
"Si Si"
] | Existing attention mechanisms, are mostly item-based in that a model is trained to attend to individual items in a collection (the memory) where each item has a predefined, fixed granularity, e.g., a character or a word. Intuitively, an area in the memory consisting of multiple items can be worth attending to as a whole. We propose area attention: a way to attend to an area of the memory, where each area contains a group of items that are either spatially adjacent when the memory has a 2-dimensional structure, such as images, or temporally adjacent for 1-dimensional memory, such as natural language sentences. Importantly, the size of an area, i.e., the number of items in an area or the level of aggregation, is dynamically determined via learning, which can vary depending on the learned coherence of the adjacent items. By giving the model the option to attend to an area of items, instead of only individual items, a model can attend to information with varying granularity. Area attention can work along multi-head attention for attending to multiple areas in the memory. We evaluate area attention on two tasks: neural machine translation (both character and token-level) and image captioning, and improve upon strong (state-of-the-art) baselines in all the cases. These improvements are obtainable with a basic form of area attention that is parameter free. In addition to proposing the novel concept of area attention, we contribute an efficient way for computing it by leveraging the technique of summed area tables. | [
"Deep Learning",
"attentional mechanisms",
"neural machine translation",
"image captioning"
] | https://openreview.net/pdf?id=rygp3iRcF7 | https://openreview.net/forum?id=rygp3iRcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1gJuct8eV",
"Sye91YISeV",
"BklOdLLreV",
"ryxwGrUrxE",
"B1ga9sDNxN",
"HygTAER-gE",
"rylq26xWeV",
"rkeQcjb9AX",
"S1xGY8W5CQ",
"BJeGDE-9AQ",
"rygBF0ec0X",
"SkxW5QBK3m",
"HygeYLjd27",
"H1gTcC3xnQ",
"S1e0m61nom",
"BJe3HncKoQ",
"r1gjJpJ_oQ",
"ryxUM6rPi7",
"H1lSNXPF57",
"BJlczK-Mcm",
"Sygs7-Y-5X",
"Bkges0dWqX",
"B1eyP2OZc7",
"rJxRSjwg9X",
"H1l0Ups3YQ",
"HJxsDJ2jtQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment"
],
"note_created": [
1545144935319,
1545066722331,
1545066096463,
1545065742589,
1545005972574,
1544836308731,
1544781233575,
1543277450689,
1543276153658,
1543275610357,
1543274108955,
1541129096933,
1541088887995,
1540570772593,
1540255013700,
1540103235855,
1539992803418,
1539951886261,
1539040045221,
1538558225746,
1538523427012,
1538522775936,
1538522199288,
1538452293954,
1538207062139,
1538142051116
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper751/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper751/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper751/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper751/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper751/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"ICLR.cc/2019/Conference/Paper751/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Although the idea is a straightforward extension of the usual (flat) attention mechanism (which is positive), it does show some improvement in a series of experiments in this submission. The reviewers, however, found the experimental results rather weak and believe that there may be other problems in which the proposed attention mechanism could be better utilized, despite the authors' effort to improve the results further during the rebuttal period. This may be due to the less-than-desirable form of the initial submission; when a new version, perhaps with a more convincing set of experiments, is reviewed elsewhere, it may be received more positively by the reviewers.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"reject\"}",
"{\"title\": \"Thanks for your further comments\", \"comment\": \"While accuracy gains are not always significant, the improvements from area attention are pretty consistently seen across most conditions. While the baseline models were well tuned by previous work, we didn\\u2019t particularly tune each model to better work with area attention. We believe some hyperparameter tuning would result in better accuracy. For example, we acquired some new results (29.74 for En-De and 41.5 for En-Fr) recently by just tuning on the maximum area size. Tuning other hyperparameters such as attention dropout ratio can help realize area attention\\u2019s full potential.\"}",
"{\"title\": \"Thanks for your further comments\", \"comment\": \"Re: Motivation\\nWe agree that attention visualization would be a good way to show this, which we should add to the paper. We feel the need to support areas or ranges has been well demonstrated in a collection of previous works that are suggested by Reviewer1 and discussed by us in the Related Work section. The area attention we proposed here provides a simple and effective solution for enabling areas. We will add concrete examples to the paper.\", \"re\": \"Significance\\nOn one hand, we agree that Transformer_big is a strong baseline and our gain is not always significant. On the other hand, we feel area attention provides a simple solution that can be easily applied to many tasks and models, leading to better accuracy. We want to point out that while the baseline models (Transformer_Base & Big) were extensively tuned for the NMT tasks, we did not particularly tune model conditions for area attention. We simply used all the hyperparameters that were tuned for the baselines for area attention experiments. In our recent experiments, by tuning on the maximum area size, we found area attention achieved even better accuracy, e.g., 29.74 for En-De and 41.5 for En-Fr, which enables a larger gain over the baselines. There are other hyperparameters to tune such as attention dropout ratio, which can be crucial for area attention to realize its potential. Please note that the basic form of area attention (Eq.3) requires no additional parameters and the feature combination version of area attention (Eq.5-9) uses very few additional parameters: less than 0.03% for Transformer Big.\"}",
"{\"title\": \"Thanks for your further comments\", \"comment\": \"Thank you for your encouraging remarks. We agree for some conditions our accuracy gain is not as significant. However, the accuracy improvement has been very consistent across most model conditions when area attention is used. We currently simply used the hyperparameters tuned for the baseline models. We believe some tuning of hyperparameters for area attention can result in better accuracy. In our recent experiments, area attention achieved 29.74 for En-De and 41.5 for En-Fr (a larger margin than previously reported in the revision) by simply tuning on the maximum area size. Tuning on other hyperparameters such as attention dropout ratio can further realize its potential.\"}",
"{\"title\": \"thank you for revised paper\", \"comment\": \"I thank the authors for their efforts in revising the paper.\\n\\nI think I understand the philosophy behind the area attention. It is possibly the right direction for the development of attention techniques.\\nAs the other reviewers pointed out, the experimental results remain too weak to fully support the necessity of the proposed framework. \\nThus I do not change the review score.\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"The authors properly addressed all of my questions.\\nI agree that the paper was substantially improved.\\nI understand the authors' effort.\\nHowever, unfortunately, the token-level translation experiments did not do much to support the usefulness of area attention.\\nIn conclusion, it is a bit hard to \\\"strongly\\\" recommend this paper to be accepted in my opinion.\"}",
"{\"title\": \"Response to the rebuttal\", \"comment\": \"Thanks a lot for your kind response and most of my questions are answered. This version is more solid than the initial submission. However, I still have several concerns for this work:\\n\\n[Motivation]\\nI am not fully convinced by the current motivation. (1) Without area attention, what is the phenomenon/problem of the current neural machine translation and image captioning? How many cases are suffering from the absence of area attention? Can these problems be solved by carefully tuning the network? (2) After introducing area attention, how the problems are solved? The area attention weights should be visualized. Currently, I do not find the visualization, statistics or examples in this paper.\\n\\n\\n[Significance]\\nThe improvement on NMT is too minor under the transformer_big setting. In Table 1, the BLEU score of En-De of regular attention is 29.43, and area attention improves it to 29.68; on En-Fr, the improvement is 0.18, which is not significant. Since the ``area attention\\u2019\\u2019 model needs more parameters, I am not sure whether the improvement is brought by more parameters. Besides, the improvement of en-fr under transformer_base setting is also small (39.10 v.s. 39.28). Therefore, we cannot conclude that area attention really works for NMT.\"}",
"{\"title\": \"Responses to Reviewer3's comments\", \"comment\": \"Thank you for your detailed comments. We have carefully addressed all your questions in the revision. Please see the Summary of Changes and the revised paper for details. We here respond to each of your questions.\", \"re\": \"\\\"What if we use different area size? I do not find the study in this paper.\\\"\\n\\nThis is a great question. We reported the effect of different area sizes for image captioning tasks, e.g., 2x2 versus 3x3. We explored which layers in Transformer can benefit from area attention and found area attention helps more when it\\u2019s used in lower layers. We speculate that this is because each position is well equipped with contextual information due to self-attention in later layers and area attention might not be able to help much. In contrast, the lower layers can benefit a lot from area attention. We also found a large area size does not necessarily improve accuracy. It is worth tuning the max area size as a hyperparameter to achieve even better results for specific problems.\"}",
"{\"title\": \"Responses to Reviewer2's comments\", \"comment\": \"Thank you very much for your feedback. We have revised the paper substantially and here address each of your concerns.\", \"re\": \"\\u201cAttention mechanisms are designed to focus on a single item\\u201d\\nThis is indeed a miscommunication in our original submission. What we intend to convey is that regular attention mechanisms are designed to focus on individual items, i.e., the granularity of attention is each item. Such granularity is predetermined and fixed, e.g., a character or a word-piece. In contrast, area attention allows a model to attend to information with varying granularity by dynamically grouping adjacent items. For example, it can be a group of adjacent regions on an image that forms an object or a \\\"super pixel\\\", or multiple word pieces that form a phrase. The difference with regular attention is that we do not have to decide what the proper unit is. Rather, we let the model pick the right level of aggregation of items or raw features. Such granularity or aggregation is acquired through learning. We have clarified this point throughout the paper.\\n\\n# the gains on BLEU and perplexity are limited. \\nWe reported BLEU scores on all the translation tasks including those previously with perplexity. We also added COCO40 Official test results for the image captioning tasks. Overall, for translation tasks, while the margin is not as large as we hoped, the accuracy gain with area attention is quite consistent throughout most conditions. Particularly on Token-level EN-DE translation, area attention achieved BLEU 29.68, which improved upon the state-of-the-art results by a big margin. For image captioning, the accuracy gain is significant for both in-domain and out-of-domain tests.\\n\\nWe have improved the paper in many ways. Please see the complete list of changes we made in the Summary of Changes and the revised paper for details.\"}",
"{\"title\": \"Responses to Reviewer1's comments\", \"comment\": \"Thank you so much for the thorough feedback that really strengthens the paper. We have conducted additional experiments and revised the paper to address these points. We here respond to each point in your review.\", \"re\": \"computational cost\\nWe reported the actual calculation speed of area attention in comparison to regular attention for two major model configurations (Transformer Base and Big) in section 4.1.1. Briefly, for the Transformer Base model, on 8 NVIDIA P100 GPUs, each training step took 0.4 seconds for Regular Attention, 0.5 seconds for the basic form of Area Attention (Eq.3 & Eq.4), and 0.8 seconds for Area Attention using multiple features (Eq.9 & Eq.4). \\n\\nWe have made a number of other improvements to the paper. Please see the summary of changes and the revised paper for details.\"}",
"{\"title\": \"Summary of Changes\", \"comment\": \"We thank the anonymous reviewers and readers for their thoughtful feedback and encouraging remarks. We have substantially improved the paper by making the following changes in the revision.\\n\\n1. Motivation & Goals\\nClarified the purpose of area attention throughout the paper. Area attention allows a model to attend to information with learned, varying granularity and levels of aggregation, in contrast to existing attention mechanisms, which focus on items with a predetermined, fixed granularity. Please read the revision for details.\\n\\n2. Related Work\\nAdded a Related Work section to clarify the relationship of area attention with each previous work suggested by Reviewer1 and readers.\\n\\n3. Presentation\\nIncreased the clarity of the writing throughout the paper, and particularly improved the pseudo code.\\n\\n4. Experiments\\n* Token-Level Translation\\nAdded a section for token-level translation experiments as requested by the reviewers and readers. The results are summarized in Table 1 & 2. Area attention consistently outperformed regular attention on token-level translation across all the conditions. 
In particular, it improved upon the state-of-the-art result on EN-DE by a significant margin.\\n\\n* Actual Cost\\nReported the actual calculation speed of area attention in comparison to regular attention for two major model configurations (Transformer Base and Big) in section 4.1.1.\\n\\n* Comparing Area Attention Keys\", \"added_a_comparison_of_the_two_forms_of_area_attention_keys\": \"the parameter-free version (Eq.3) and the feature combination version (Eq.5-9) on all the token-level translation tasks (see Table 1 & 2).\\n\\n* BLEU for LSTM\\nReported BLEU scores for LSTM translation tasks as well (see Table 2 & 4).\\n\\n* Transformer Big\\nAdded experiments with Transformer Big where area attention has also shown consistent improvements.\\n\\n* Tests on COCO\\nAdded COCO40 official tests for in-domain image captioning (see Table 5). Area attention outperformed the benchmark model by a significant margin.\"}",
"{\"title\": \"Some important related studies are missing.\", \"review\": \"I have several concerns about this paper.\\n\\n[originality]\\nSome important related studies are missing.\\n\\n# Related studies about the perspective of \\u201carea\\u201d.\\nConsecutive positions in a sequence are often referred to as a \\u201cspan\\u201d in the NLP field, which is identical to what the authors call an \\u201carea\\u201d in this paper.\\nThe idea of utilizing spans has recently become very popular in the NLP field. We can find several papers, \\ne.g.,\\nWenhui Wang, Baobao Chang, \\u201cGraph-based dependency parsing with bidirectional lstm\\u201d, ACL-2016.\\nMitchell Stern, Jacob Andreas, Dan Klein, \\u201cA Minimal Span-Based Neural Constituency Parser\\u201d, ACL-2017.\\nKenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer, \\u201cEnd-to-end Neural Coreference Resolution\\u201d, EMNLP-2017.\\nNikita Kitaev, Dan Klein, \\u201cConstituency Parsing with a Self-Attentive Encoder\\u201d, ACL-2018.\\n\\nSimilarly, there are several related studies in the image processing field,\\ne.g.,\\nMarco Pedersoli, Thomas Lucas, Cordelia Schmid, Jakob Verbeek, \\u201cAreas of Attention for image captioning\\u201d, ICCV-2017\\nQuanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo, \\u201cImage Captioning with Semantic Attention\\u201d, CVPR-2016.\\n\\n# Related studies about the perspective of \\u201cstructured attention\\u201d. \\nSeveral papers about structured attention have already been proposed, \\ne.g.,\\nYoon Kim, Carl Denton, Luong Hoang, Alexander M. Rush. \\u201cStructured Attention Networks\\u201d, ICLR-2017.\\nVlad Niculae, Mathieu Blondel. 
"\\u201cA Regularized Framework for Sparse and Structured Neural Attention\\u201d, NIPS-2017.\\n\\n\\nI think the authors should explain the relations between their method and the methods proposed in the above listed papers.\\n\\n\\n[significance]\\n# Concern about experimental settings\\nThe experimental setting for NMT looks unusual in the community.\\nCurrently, most papers use sentences split into subword units rather than character units. I cannot find a reason to select the character units. I think the authors should report the effectiveness of the proposed method on the widely-used settings.\\n\\n\\n# computational cost\\nThe authors should report the actual calculation speed by comparing the baseline method and the proposed method.\\nIn Sec. 2.2, the authors provided the computational cost. \\nI feel that the cost of O(|M|A) is still large enough that it can unacceptably damage the actual calculation speed of the proposed method.\\n\\n\\n\\nOverall, the proposed method itself seems to be novel and interesting.\\nHowever, in my opinion, the writing and organization of this paper should be much improved for a conference paper. I feel the paper still reads as a work in progress.\\nThus, it is a bit hard for me to strongly recommend this paper to be accepted.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Experiments are not convincing\", \"review\": \"[Summary]\\nPaper \\u201cAREA ATTENTION\\u201d extends the current attention models from word level to \\u201carea level\\u201d, i.e., the combination of adjacent words. Specifically, every $r_i$ adjacent words are first merged into a new item; next a key and a value for this item are calculated based on Eqn.(3 or 7) and Eqn. (4), and then the conventional attention models are applied to these new items. The authors work on (char level) NMT and image captioning to verify the algorithm. \\n\\n[Details]\\n1.\\tIn the abstract, \\u201c\\u2026 Using an area of items, instead of a single, we hope attention mechanisms can better capture the nature of the task \\u2026\\u201d, can you provide an example to show why \\u201can area of items\\u201d can \\u201cbetter capture the nature of the task\\u201d? In particular, you need to show why the conventional attention mechanism fails.\\n2.\\tIn this new proposed framework, how should we define the query for each area including multiple items like words? For example, in Figure 1, what is the query for $n$-item areas where $n=1,2,3$.\\n3.\\tTwo different kinds of keys are proposed in Eqn. (3) and Eqn. (7). Any comparison between them?\\n4.\\tI am not convinced by the experimental results.\\n(4a) On WMT\\u201914 En-to-Fr and En-to-De, we know that \\u201ctransformer_big\\u201d can achieve better results than the three settings shown in Table 1 & 2. The results of using transformer_big are not reported. Besides, it is not necessary to use the \\u201ctiny\\u201d setting for En-to-{De, Fr} translation considering the data size.\\n(4b) It is widely adopted to use token-level neural machine translation. It is not convincing to work on char-level NMT only. Also, please provide the results using the transformer_big setting.\\n(4c) There are no BLEU scores for the LSTM setting. 
Note that comments (4b) and (4c) were also pointed out by anonymous readers.\\n(4d) It is really strange to me to see \\u201ctrained on COCO and tested on Flickr\\u201d (see the title of Table 4). It is not a common practice in the image captioning literature. Even in (Soricut et al., 2018), the authors report the results of training on COCO and testing on COCO (their Table 5). Therefore, the results are not convincing. You should train on COCO and test on COCO too.\\ne.\\tWhat if we use a different area size? I do not find such a study in this paper.\\n\\n[Pros & Cons]\\n(+) A new attempt at an attention model that tries to build attention beyond unigrams.\\n(-) Experiments are not convincing.\\n(-) The motivation is not strong.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A few concerns\", \"review\": \"I like the idea of using some statistics (such as variances) of multiple items for attention.\\nThis direction may lead to better attention units in future work. \\n\\nI do not fully understand the argument, \\\"Attention mechanisms are designed to focus on a single item in the entire memory\\\". \\nIn my understanding, the attention formulation has no mathematical bias toward focusing on a single item. \\nI have been working on enterprise NMT for years, and have observed many cases where the attention weights concentrate on a few (not a single) tokens. \\nDo you have any comments? \\n\\nCould you show some concise examples where we really need to attend to multiple (adjacent) items to boost performance? \\nFor example, in the character-based machine translation case, we can mimic area attention with wordpiece + token-wise NMT. \\nFor the image case, the adjacent area looks like a \\\"super pixel\\\". \\n\\nIt is unfortunate to observe that the gains in BLEU and perplexity are limited. \\nSince the authors do not provide any statistical tests, or a confidence interval for the scores, \\nI cannot be sure these gains are truly significant. \\nFrom my experience, a +1.0 BLEU score is often insignificant in NMT experiments (BLEU variance is high in general). \\n\\nSummary\\n+ A new variant of attention, allowing attention to assess statistics of multiple items (such as variances), is interesting\\n- The claims for the need to attend to multiple adjacent items are not very convincing. \\n- Gains in experiments are limited.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Yes. We will make this point clear in the revision. Thanks for these suggestions that strengthen the contribution of our paper.\"}",
"{\"comment\": \"Thank you for the prompt response.\\n\\nNow the motivation is clear. \\\"Giving the model more options to attend to a range of items that are structurally adjacent when needed\\\" is a nicer motivation than arguing that softmax convergence and focusing on a single item are bad.\\nYou may want to make this point clear in the revision, because the initial writing made me feel the motivation was to tackle softmax convergence, and the reviewers may interpret it like this too.\\n\\nNow token-level translation also works. Good job.\", \"title\": \"Wonderful\"}",
"{\"title\": \"Response on motivation, token-level performance and other points\", \"comment\": \"Thanks again for your further comments. Please see our responses here.\", \"re\": \"#8\\nYes. We will release the code, as we implemented area attention directly based on the open source Tensor2Tensor library (https://github.com/tensorflow/tensor2tensor), where benchmark Transformer models and tasks are implemented, which guarantees a solid comparison with regular attention in the Transformer.\", \"overall\": \"We clarify that our motivation with area attention is to give a model more options when attending to items. Area attention subsumes regular attention rather than excluding it, as explained earlier. Even the parameter-free version of area attention has outperformed regular attention on both character-level and token-level translation tasks, as well as on image captioning tasks. We will add new experimental results to the revision.\"}",
"{\"comment\": \"#1\\nThis is not convincing. The papers you mentioned only show a figure of attention probabilities for an example in the dataset. Those were just demonstrations with a single example, not an overall observation of the entire dataset, so we cannot take a single figure as universal truth. I think they did not claim or prove that softmax scores converge to a particular item in their papers either. They also didn't show any statistics supporting this claim.\\n\\nIn your experiments, there is no statistical data supporting \\\"entropy of the attention probability distribution tends to decrease rapidly\\\".\\n\\nAnyway, there is nothing bad about attention softmax scores concentrating on a particular item; that's what attention with softmax is used for. Attention acts as a word-to-word alignment; it should, if anything, attend to a particular item rather than spreading out equally.\\n\\n#4 The Transformer is simply not suitable for the character level (24.65 BLEU vs 27.3 in the original paper using BPE). This is obvious, because it is harder to attend over a sequence of hundreds to thousands of characters than over fewer than 100 words. Your solution makes the character-level problem easier, and that makes sense. But there should not be a problem to begin with, because they use token-level (BPE) translation. A possible reason (other than BLEU) to prefer the character level is to construct UNK words or rare words, but BPE already does that, and there is no analysis of this in the paper anyway.\\n\\nI hope the results on token-level translation are better than the original Transformer; otherwise, it seems the paper just creates a problem for the proposed model to work on.\\n\\n#5 I agree that the overall attention distribution will be different. But invariance within a region is important. For instance, what is the difference between \\\"army\\\" and \\\"mary\\\"? 
In such a case, how can overlapping the characters with nearby words help to differentiate \\\"army\\\" and \\\"mary\\\"?\\n\\n#6 Why does the paper report perplexity but not BLEU for the LSTM? Perplexity is usually not as good an indication of translation quality as BLEU; the Transformer paper also said that they sacrificed perplexity for better BLEU. I'm not 100% sure, but an improvement on the order of 0.0001 does not sound significant, though.\\nCan you report the BLEU of the LSTM?\\n\\n#7 This can be good for image captioning, though. But I guess you missed the Image Transformer paper (https://arxiv.org/abs/1802.05751). It is quite similar to this paper and should be cited.\\n\\n#8 Can you release the code?\", \"overall\": [\"The motivation and problem (translation) are not convincing, proven, or supported with statistical data to begin with. It is not shown that such characteristics of softmax (low entropy at convergence) are a problem for the attention mechanism either.\", \"For translation, unless token-level experiments work, purposely using a character-level task seems to just create a problem in which area attention has the advantage over normal attention.\", \"The image captioning task seems promising, though. Perhaps it is more suitable and convincing to use this for vision than for NLP.\"], \"title\": \"Not convinced\"}",
"{\"title\": \"Will cite & discuss the related work\", \"comment\": \"Thank you for bringing up this previous work, which is indeed relevant. The paper focused on image captioning and proposed two nice methods for attending to object regions in images, both of which use a special network to infer the regions to attend to. In contrast, our method examines all possible areas with a summed area table for fast computation. The basic form of the area attention we propose is parameter free. We also intend to propose area attention as a general mechanism beyond captioning tasks. We will cite and discuss the paper in the revision.\"}",
"{\"comment\": \"I find this an interesting approach to attention that could be broadly applicable.\\nI have been interested in such approaches for some time and am curious to see how it develops.\", \"here_is_an_reference_that_is_relevant\": \"https://arxiv.org/pdf/1612.01033.pdf\\n(Areas of Attention for image captioning) \\n\\nCheers\", \"title\": \"Some related work\"}",
"{\"title\": \"Code\", \"comment\": \"Thank you for your interest in reading the work. We will make the pseudo code more readable and release the source code that is written in TensorFlow. Our experiments were conducted based on the original Transformer implementation released in Tensor2Tensor (https://github.com/tensorflow/tensor2tensor).\"}",
"{\"title\": \"Good suggestion\", \"comment\": \"Yes! Thanks for catching this. We will fix it in the revision.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Thank you for bringing up these questions. We briefly clarify them here and will address them further in the revision.\", \"re\": \"Question #5\\nThe overall attention distribution will be different, even though the representation for that specific area is the same. This is because area attention allows overlapping areas. The change in the order of items will cause these items to be picked up by different areas. For example, assume there is a sequence with six items: A, B, C, D, E, and F. Say Area 1 contains A, B and C; Area 2 contains C and D; and Area 3 contains D, E and F. Reordering C and D in Area 2 will not change Area 2\\u2019s representation. However, the reordering will leave Area 1 with A, B and D, and Area 3 with C, E and F.\"}",
"{\"comment\": \"Interesting work, but the algorithm seems to be ambiguous. I hope you can release the code to verify the details.\", \"title\": \"Some implementation or reproduction problems with this paper\"}",
"{\"comment\": \"I think in Eq. (5) it would be more suitable to use \\\\sigma^2, not \\\\sigma, to denote the variance.\", \"title\": \"the notation for variance is usually \\\\sigma^2, not \\\\sigma\"}",
"{\"comment\": \"Hi, this idea is interesting, though it needs some explanation. Can you address my concerns regarding the motivation and experiments?\\n\\n1. How are you sure that \\\"Although softmax (Equation 1) assigns non-zero probablities to every item in the memory, it quickly converges to the most probable item due to the exponential function used for calculating probabilities\\\"? Can you prove this claim mathematically? If not, is there any existing research that confirms this? If so, please cite it.\\nI am not going to prove you wrong, but I have seen many examples where softmax doesn't converge to one single item. Look at some ImageNet classification papers (ResNet, DenseNet, ...); real-life softmax scores are distributed a lot more evenly. I guess it depends on the data, not on the convergence of the softmax function.\\n\\n2. Eqn. 6 indicates e_i has dimension 1xD, and \\u00b5_i, \\u03c3_i possibly also 1xD. So the term inside the ReLU function will also be 1xD. Then the whole Eqn. 7, W_d x relu(\\u00b5i + \\u03c3i + ei; \\u03b8), is a matrix dot product of DxD and 1xD, which is dimensionally incompatible?\\nCorrect me if I'm wrong.\\n\\n3. The equation kri = Wd\\u03c6(\\u00b5i + \\u03c3i + ei; \\u03b8) looks weird. It is unusual to sum up the mean and variance. The variance is a second-degree term; should it be added to the mean (a first-degree term)? Please justify.\\nTo me, it makes more sense to sum the mean and the standard deviation rather than the variance.\\n\\n4. (Vaswani et al., 2017) experimented with translation using BPE tokens, which already achieved more than this paper did with character-level experiments. What is the motivation to use the character level and not (at least) BPE or word-level translation? Why not do a BPE experiment to compare with Vaswani et al.?\\n\\n5. What happens if the elements in a particular area are reordered? Will the result after attention be different or the same?\\n\\nThank you,\", \"title\": \"Some concerns about this paper\"}"
]
} |
|
Hyfn2jCcKm | Solving the Rubik's Cube with Approximate Policy Iteration | [
"Stephen McAleer",
"Forest Agostinelli",
"Alexander Shmakov",
"Pierre Baldi"
] | Recently, Approximate Policy Iteration (API) algorithms have achieved super-human proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search), and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik’s Cube has only a single solved state, and episodes are not guaranteed to terminate. This poses a major problem for these API algorithms since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik’s Cube and the 15-puzzle without relying on human data. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge. | [
"reinforcement learning",
"Rubik's Cube",
"approximate policy iteration",
"deep learning",
"deep reinforcement learning"
] | https://openreview.net/pdf?id=Hyfn2jCcKm | https://openreview.net/forum?id=Hyfn2jCcKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1lEfrKE7E",
"Hyg6aULWxE",
"r1xDiKEcC7",
"Bkgout45CX",
"Skg0UYN5Cm",
"Bye2VTcq27",
"SJx3iUcc27",
"HJlaAp7wn7"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1548158219837,
1544804036685,
1543289246785,
1543289202686,
1543289174182,
1541217587754,
1541215907533,
1540992469030
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper750/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper750/Authors"
],
[
"ICLR.cc/2019/Conference/Paper750/Authors"
],
[
"ICLR.cc/2019/Conference/Paper750/Authors"
],
[
"ICLR.cc/2019/Conference/Paper750/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper750/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper750/AnonReviewer2"
]
],
"structured_content_str": [
"{\"comment\": \"If it is receiving a reward of -1 for all other states, isn't this no longer a sparse reward problem? It is getting feedback for every move taken, which biases it against getting into cycles, and towards shorter solution lengths (in episodic problems).\\n\\nAlso, how long was Kociemba's solver given to run? Doesn't that solver eventually converge on the optimal solution, or was the first output solution used for the comparison? This solver is widely used by the speedcubing community and it generally finds solutions in the low 20s or better within a couple of seconds. The algorithm itself states that the worst-case scenario of Kociemba is 30 moves, yet the graph seems to show solution lengths longer than this.\\n\\nThere is also relatively recent work on bi-directional search which has been capable of optimally solving the hardest positions (as opposed to just proving the solution length is <= 20 moves) reasonably quickly with far fewer node expansions; this might be useful to compare against.\", \"title\": \"Not Really Sparse Rewards?\"}",
"{\"metareview\": \"The paper introduces a version of approximate policy iteration (API), called Autodidactic Iteration (ADI), designed to overcome the problem of sparse rewards. In particular, the policy evaluation step of ADI is trained on a distribution of states that allows the reward to easily propagate from the goal state to states farther away. ADI is applied to successfully solve the Rubik's Cube (together with other existing techniques).\\n\\nThis work is an interesting contribution where the ADI idea may be useful in other scenarios. A limitation is that the whole empirical study is on the Rubik's Cube; a controlled experiment on other problems (even if simpler) can be useful to understand the pros & cons of ADI compared to others.\", \"minor\": \"please update the bib entry of Bottou (2011). It's now published in MLJ 2014.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work, but too focused on a particular problem\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for their helpful comments and for pointing us to the github resource.\\n\\n> \\u201cI am slightly disappointed that the paper does not link to a repository with the code. Is this something the authors are considering in the future?\\u201d\\n\\nWe fully agree with the reviewer that releasing the code is important. We plan to release the code if the paper gets accepted. We have not done so yet to maintain anonymity.\\n\\n> \\u201cI am also curious whether/how redundant positions are handled by the proposed approach...Does the algorithm forbid the reverse of the last action? Is the learned value/policy function good enough that backwards moves are seldom explored? Since the paper mention that BFS is interesting to remove cycles, I assume identical states are not duplicated. Is this correct?\\u201d\\n\\nWe did not strictly forbid reverse moves during the search. However, because we penalize longer solutions, because MCTS attempts many paths simultaneously, and because the virtual loss prevents duplicate exploration, the solver rarely explored repeat states. The BFS expansion of the path was a post-processing step we applied to the resulting path to obtain slightly better solutions. Although this did remove duplicates (if they existed), it more importantly allowed us to find \\\"shortcuts\\\" within our path. For example, we can replace say a 7-move sequence with a slightly more efficient 5-move sequence that MCTS didn't find. This effect was minimal but consistent.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We would like to thank the reviewer for their helpful comments.\\n\\n> \\u201cWhat other problems can be solved like this?\\u201d\\n\\nThis approach can be used in two different types of problems. The first is planning problems in environments with a high number of states. The second type of problem is when you need to find one specific goal but might not know what the goal is. However, if you have examples of solved examples you can train a value function using ADI on these solved examples and hopefully it will transfer to the new problems. For instance, in protein folding, the goal is to find the protein conformation with minimal free energy. We don\\u2019t know what the optimal conformation is beforehand, but we can train a value network using ADI on proteins where we know what their optimal conformation is. \\n\\n\\n> \\u201cWould a single successful trajectory be enough to use it in a wider context? (as in https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/)\\u201d\\n\\nFor our method to work, all we need is the ability to start from the goal state and take moves in reverse. Therefore, not only is a single successful trajectory sufficient, all that is needed is the final state of that successful trajectory: the goal state. Using only the goal state, it can generate other states by randomly taking actions away from the goal state.\\n\\n> \\u201cIs the method to increase distance from final state specific to Rubik cube or general?\\u201d\\n\\nThe core concept is that the agent uses dynamic programming to propagate knowledge from easier examples to more difficult examples. Therefore, this method is applicable to any scenario in which one can generate a range of states whose difficulty ranges from easy to hard. For our method, we achieved this by randomly scrambling the cube 1 to N times. 
There has been other work in the field of robotics [1], as well as the work on Montezuma\\u2019s Revenge provided by the reviewer, that builds a curriculum starting by first generating states close to the goal and then progressively increasing the difficulty as performance increases. Instead of adaptively changing the state distribution during training, our method fixes the state distribution before training while the targets for the state values change as the agent learns.\\n\\n> \\u201cIs the training stable with respect to this or is it critical to get it right?\\u201d\\n\\nWe found that the value of N, the maximum number of times to scramble the solved cube, was not crucial to the stability of training. It only had an effect on the final performance. If N was too low (e.g. 5), then DeepCube only performed well on cubes close to the solutions, but not on more complicated cubes. If N was too high (e.g. 100), then it took more iterations to learn; nonetheless, the agent would still learn. We found that N=30 resulted in both good value function estimation as well as reasonable training time.\\n\\n[1] Florensa, C., Held, D., Wulfmeier, M., Zhang, M., & Abbeel, P. (2017). Reverse curriculum generation for reinforcement learning. arXiv preprint arXiv:1707.05300.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We would like to thank the reviewer for their helpful comments.\\n\\n> \\u201cI am not very clear how to assign the rewards based on the stored states?\\u201d\\n\\nThe environment returns a reward of +1 for the solved state and a reward of -1 for all other states. From this single positive reward given at the solved state, DeepCube learns a value function. Using dynamic programming, DeepCube improves its value estimate by first learning the value of states one move away from the solution and then building off of this knowledge to improve its value estimate for states that get progressively further away from the solution.\\n\\n> \\u201cDo you have solving time comparison between your method and other approximate methods?\\u201d\\n\\nYes, we have improved the efficiency of our solver since we last submitted our paper by optimizing our code. Our method takes, on average, 40 seconds; whereas the fastest optimal solver we could find (implemented by Tomas Rokicki to find \\u201cGod\\u2019s number\\u201d [1]) for the Rubik\\u2019s Cube takes 2.7 seconds. These results are summarized in section C of the appendix of the updated paper. While Rokicki\\u2019s algorithm is faster, Rokicki\\u2019s algorithm also uses knowledge of groups, subgroups, cosets, symmetry, and pattern databases. On the other hand, our algorithm does not exploit any of this knowledge and learns how to solve the Rubik\\u2019s Cube given only basic information about the problem. In addition, Rokicki\\u2019s solver uses 182GB of memory to run whereas ours uses at most 1GB. These differences are summarized in the updated paper. We are currently making better use of parallel processing and memory to improve the speed of our algorithm.\\n\\n[1] Rokicki, T., Kociemba, H., Davidson, M., & Dethridge, J. (2014). The diameter of the Rubik's Cube group is twenty. SIAM Review, 56(4), 645-670.\"}",
"{\"title\": \"A good paper\", \"review\": \"The authors provide a good idea for solving the Rubik\\u2019s Cube using an approximate policy iteration method, which they call Autodidactic Iteration. The method overcomes the problem of sparse rewards by creating its own reward system. Autodidactic Iteration starts with the solved cube and then propagates backwards through the state space.\\n\\nThe testing results are very impressive. Their algorithm solves 100% of randomly scrambled (1000 times) cubes and has a median solve length of 30 moves. God\\u2019s number is 26 in the quarter-turn metric, so their median of 30 moves is only 4 moves away from God\\u2019s number. I appreciate the absence of human domain knowledge most, because a more general algorithm can be applied to other areas without requiring prior knowledge. \\n\\nThe training idea of designing rewards by expanding outward from the solved state is smart, but I am not very clear on how the rewards are assigned based on the stored states. Applying only a pure reinforcement learning method sounds simple, but the performance is great. The results are good with the neural-network-guided (non-random) search. Do you have a solving-time comparison between your method and other approximate methods?\", \"pros\": [\"solved nearly 100% of problems with a reasonable number of moves.\", \"a more general algorithm for solving problems with unknown state values.\"], \"cons\": [\"the Rubik\\u2019s Cube problem has been solved with other, optimal approaches in the past. This method is not as competitive as other optimal solvers within a similar running time for this particular game.\", \"to solve higher-dimensional cubes, this method might run out of time.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice idea but little study\", \"review\": \"The authors show how to solve the Rubik's cube using reinforcement learning (RL) with Monte-Carlo tree search (MCTS). As is common in recent applications like AlphaZero, the RL part learns a deep network for the policy and a value function that reduce the breadth (policy) and depth (value function) of the tree searched in MCTS. This basic idea without extensions fails when trying to solve the Rubik's cube because there is only one final success state, so the early random policies and value functions never reach it. The solution proposed by the authors, called autodidactic iteration (ADI), is to start from the final state, construct a few previous states, and learn the value function on this data, where a good state is reached in a few moves. The distance to the final state is then increased and the value function learns more and more. This is an interesting idea that solves the Rubik's cube, but the paper lacks a more detailed study. What other problems can be solved like this? Would a single successful trajectory be enough to use it in a wider context (as in https://blog.openai.com/learning-montezumas-revenge-from-a-single-demonstration/)? Is the method of increasing the distance from the final state specific to the Rubik's cube or general? Is training stable with respect to this, or is it critical to get it right? The lack of analysis and ablations makes the paper weaker.\\n\\n[Revision] Thanks for the replies. I still believe experiments on more tasks would be great but will be happy to accept this paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting deep-RL tweaks to solve problem with sparse reward\", \"review\": \"This paper introduces a deep RL algorithm to solve the Rubik's cube. The particularity of this algorithm is that it handles the huge state space and very sparse reward of the Rubik's cube. To do so, a) it ensures each training batch contains states close to the reward by scrambling the solution; b) it computes an approximate value and policy for each such state using the current model; and c) it weights data points by the inverse of the number of random moves from the solution used to generate that training point. The resulting model is compared to two non-ML algorithms and shown to be competitive either on computational speed or on the quality of the solution.\\n\\nThis paper is well written and clear. To the best of my knowledge, this is the first RL-based approach to handle the Rubik's cube problem so well. The specificities of this problem make it interesting. While the idea of starting from the solution seemed straightforward at first, the paper describes more advanced tricks claimed to be necessary to make the algorithm work. The algorithm seems to be quite successful and competitive with expert algorithms, which I find very nice. Overall, I found the proposed approach interesting, and sparsity of reward is an important problem, so I would rather be in favor of accepting this paper. \\n\\nOn the negative side, I am slightly disappointed that the paper does not link to a repository with the code. Is this something the authors are considering in the future? While it does not seem difficult to code, it is still nice to have the experimental setup.\\n\\nThere have been (unsuccessful) attempts to solve the Rubik's cube using deep RL before. I found some of them here: https://github.com/jasonrute/puzzle_cube . I am not sure whether these can be considered prior art, as I could not find associated accepted papers, but some are quite detailed. 
Some could also provide additional baselines for the proposed method and highlight the challenges of the Rubik's cube.\\n\\nI am also curious whether/how redundant positions are handled by the proposed approach and wish this were discussed a bit. Considering the nature of the state space and the dynamics, I would have expected this to be a significant problem, unlike in Go or chess. Does the algorithm forbid the reverse of the last action? Is the learned value/policy function good enough that backwards moves are seldom explored? Since the paper mentions that BFS is useful for removing cycles, I assume identical states are not duplicated. Is this correct?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
B1x33sC9KQ | ACIQ: Analytical Clipping for Integer Quantization of neural networks | [
"Ron Banner",
"Yury Nahshan",
"Elad Hoffer",
"Daniel Soudry"
] | We analyze the trade-off between quantization noise and clipping distortion in low precision networks. We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping. By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping. For example, just by choosing the accurate clipping values, more than 40\% accuracy improvement is obtained for the quantization of VGG-16 to 4-bits of precision. Our results have many applications for the quantization of neural networks at both training and inference time.
| [
"quantization",
"reduced precision",
"training",
"inference",
"activation"
] | https://openreview.net/pdf?id=B1x33sC9KQ | https://openreview.net/forum?id=B1x33sC9KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkluXjwElV",
"BygyCtul1E",
"SklGmbfykN",
"B1l-_UycR7",
"Bkeu2415AQ",
"HJeFnz1cCQ",
"SylOokVw6m",
"HygMWn2Dn7",
"HJegw7SEhQ",
"rJeQxP5nF7",
"BkgZu_Q3tQ",
"Hkg248istm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1545005855662,
1543698886704,
1543606553920,
1543267944703,
1543267504185,
1543266993420,
1542041503944,
1541028858076,
1540801367941,
1538201322553,
1538173032799,
1538139700429
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper749/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper749/Authors"
],
[
"ICLR.cc/2019/Conference/Paper749/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper749/Authors"
],
[
"ICLR.cc/2019/Conference/Paper749/Authors"
],
[
"ICLR.cc/2019/Conference/Paper749/Authors"
],
[
"ICLR.cc/2019/Conference/Paper749/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper749/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper749/AnonReviewer2"
],
[
"~Evgenii_Zheltonozhskii1"
],
[
"ICLR.cc/2019/Conference/Paper749/Authors"
],
[
"~Evgenii_Zheltonozhskii1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper describes a clipping method to improve the performance of quantization. The reviewers have a consensus on rejection because the contribution is not significant.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"incremental work\"}",
"{\"title\": \"Reply to remaining issues\", \"comment\": \"The paper we mention (i.e., https://arxiv.org/pdf/1805.11046.pdf#page=9&zoom=100,0,96 ) assumes a Gaussian distribution and constructs a solution that couldn\\u2019t work unless tensors have approximately a Gaussian distribution. Due to the central limit theorem, neural network distributions are not general. In practice, tensors have a bell-shaped distribution where small values are much more frequent than large values. Recent efforts take this prior into account and design quantizers with improved accuracy (e.g., https://arxiv.org/pdf/1804.10969.pdf). Our clipping method uses this bell-shaped distribution (we focus on Gauss/Laplacian distributions) to give higher precision where we need it (i.e., small values) at the expense of truncating very few large values. The bottom line is that our simulations show that much better accuracy can be obtained under these assumptions. In all six evaluated models, we gain at least 12% validation accuracy improvement (in VGG16-BN we get 38% improvement) compared to GEMLOWP, which doesn\\u2019t assume anything about the data distribution and quantizes according to max()-min().\\n\\nFigure 5 does not appear in the paper. From the context, we guess the reviewer refers to Figure 2. The figure shows that the analysis is in good agreement with simulations, and for each bit-width there exists a distinct minimum at a certain clipping value. The reviewer makes the following statement: \\u201cThe gaussian assumption is not true for lower bit networks (the paper you referred uses 8 bits)\\u201d. Here is a paper that takes the Gaussian assumption for binary networks to explain why binary networks work in terms of high dimensional geometry (see page 2 about the angle preservation property of random tensors from Gaussian distributions): https://arxiv.org/pdf/1705.07199.pdf\"}",
"{\"title\": \"reply to response\", \"comment\": \"1. Re the distribution assumption, the response from the authors is not convincing. The paper you mentioned (https://arxiv.org/pdf/1805.11046.pdf#page=9&zoom=100,0,96) says that, when using BN, \\\"quantization preserves the direction (angle) of high-dimensional vectors when W follows a Gaussian distribution\\\", this has nothing to do with your assumption that W follows a gaussian distribution.\\n\\nThe original question was not that \\\"gaussian -> low quantization error -> good performance\\\" (I think this is clear in the past 3 years) but rather \\\"non-gaussian -> high quantization error -> bad performance?\\\". Recent work suggests this may not always lead to bad performance (e.g. there are binary models with good performance and high quantization error). \\n\\nWhat does Figure 5 show? That quantization error is similar for analysis and simulation. Is this level of error \\\"small\\\"? Clearly, it depends on the number of bits. The gaussian assumption is not true for lower bit networks (the paper you referred uses 8 bits). Overall, the distribution assumption is a weakness.\\n\\n3. The point was about more datasets like VOC, beyond image classification. \\n\\nThank you for improving the paper, I have increased my rating appropriately.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"The reviewer raised three concerns:\\n\\n1. It has been observed in many prior works that quantization error can be assumed to be uniformly distributed (e.g., http://daniel-marco.com/Academic%20Files/Additive%20Noise%20Model.pdf). Our expression estimates the MSE as a combination of the MSE that results from the quantization error in the \\u201cmiddle part\\u201d (i.e., uniformly distributed) and the MSE that results from the clipping error at the tails of the distribution (i.e., laplacian/gaussian). We verify this assumption using synthetic simulations that clearly show that this type of approximation is accurate in practice (see figure 2). We also provide the code for this simple synthetic experiment here: https://github.com/submission2019/AnalyticalScaleForIntegerQuantization . Finally, we now have a more general expression that estimates the MSE of any density function at the middle part using a piecewise linear approximation, enabling the use of distributions other than uniform at the middle part. \\n\\n2. Since submission, we made a comparison against the only previous method we are aware of: the Kullback-Leibler Divergence (KLD) clipping method suggested by NVIDIA (see update for table 1). Our approach runs 4000 times faster than KLD and, excluding ResNet-101, outperforms KLD in terms of validation accuracy. \\n\\n3. We agree that the mean and sigma are continuous numbers. But the correct clipping value can be calculated by scaling the optimal clipping value for the standard gaussian distribution N(0,1) by sigma. We use this trick in our simulations. We improved the explanation of this issue (see Section 5 - end of the second paragraph)\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"The paper indeed provides a formula for optimal quantization when the distribution of tensor elements is either Laplace or Gauss. The paper also shows the relevance of these derivations to a very attractive use-case, i.e., the conversion of a full-precision network to a low-precision network without time-consuming re-training or the availability of the full datasets. Our approach is shown to have significant advantages over previous approaches (as summarized in Table 1).\", \"response_to_more_specific_comments\": \"\", \"language_problems\": \"We have fixed all typos and incorporated the paraphrasing suggestions\\n\\n\\n1.\\u201cit's not clear a-priori that information loss is the property to minimize that maximizes performance of the quantized network.\\u201d The connection between quantization error and classification accuracy has been investigated through the preservation of the direction of the quantized tensor. See for example here:\\n a.\\thttps://arxiv.org/pdf/1805.11046.pdf#page=9&zoom=100,0,96 (section 5.1)\\n b.\\thttps://arxiv.org/pdf/1705.07199.pdf (section 3.1)\\nWe have added a detailed explanation about the connection between the power of the quantization error and the accuracy drop (see paragraph #5 in the introduction).\\n\\n2.\\u201cGive absolute accuracies too! Improvement relative to what baseline?\\u201d We now provide the baselines we use in our experiments (see Table 1). \\n\\n3. \\u201cThe mean square error should never go to 0. This suggests something is wrong. If it's just a scaling issue, consider a semilogy plot.\\u201d: This was indeed a scale issue only. It is not relevant anymore (the figure was removed and replaced by the synthetic experiments showing that the analysis and simulations are in good agreement). \\n\\n4.\\t\\u201cI'm unclear what baseline (no clipping) refers to in terms of clipping values. For uniform quantization there needs to be some min and max\\u201d: we improved the explanation of this issue in the introduction (see beginning of paragraph 9), where we explain that the traditional method that avoids clipping uniformly quantizes the values between the largest and smallest tensor elements.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We conduct synthetic experiments showing that the analysis is in very good agreement with synthetic simulations when the distributions of tensor elements are either Laplace or Normal (see figure 2 in our new submission). The code to replicate these sanity checks appears here: https://github.com/submission2019/AnalyticalScaleForIntegerQuantization. We also improved the presentation of section 4 and provide a new figure to make the analysis easier to understand.\\n\\nAs noted in many prior works, neural network distributions are near Gaussian in practice, sometimes further controlled by procedures such as batch normalization. See for example here: \\n1. https://arxiv.org/pdf/1804.10969.pdf\\n2. https://openreview.net/pdf?id=B1IDRdeCW\\n3. https://papers.nips.cc/paper/5269-expectation-backpropagation-parameter-free-training-of-multilayer-neural-networks-with-continuous-or-discrete-weights. \\nIn addition, we were able to see these bell-shaped distributions through both statistical tests (KS-test in Section 3) and the visual appearance of the histograms (see appendix). \\n\\nThe connection between quantization error and classification accuracy has been investigated through the preservation of the direction of the quantized tensor. See for example here:\\n1. https://arxiv.org/pdf/1805.11046.pdf#page=9&zoom=100,0,96 (section 5.1)\\n2. https://arxiv.org/pdf/1705.07199.pdf (section 3.1)\\nWe have added a detailed explanation about the connection between the power of the quantization error and the accuracy drop (see paragraph #5 in the introduction).\", \"detailed_comments\": \"1. Typo corrected.\\n2. For correctness, we believe it is enough to provide the primitive functions of these integrals (since this can be verified by differentiation). The direct derivations are long, tedious, and unnecessary calculations. Also, C in $\\\\psi(x)$ is the standard constant of integration for indefinite integrals. To avoid confusion, we have now removed this constant from the text. \\n3. We disagree with this comment. We have validated our work on six different models and the improvement was dramatic with respect to gemmlowp for the quantization of activation tensors. Weight tensors are kept at 8 bits of precision, so the bi-modal distribution of weights does not apply to our work. The activations are clipped before the ReLU, as we now clarify in Section 5 (second paragraph).\\n4. Since submission, we made a comparison against the only previous method we are aware of: the Kullback-Leibler Divergence (KLD) clipping method suggested by NVIDIA (see update for table 1). Our approach runs 4000 times faster than KLD and, excluding ResNet-101, outperforms KLD in terms of validation accuracy. \\n5. We now provide a better intuition for the analysis in section 4. \\n6. For the uniform case, f(x) = 1/(2*alpha). We explicitly mention that in the paper now (just before equation 5).\\n7. Typo corrected.\"}",
"{\"title\": \"Errors and Contributions not significant\", \"review\": \"The paper describes a clipping method to improve the performance of one particular type of quantization method, that is, naive clipping to the closest \\\"bins\\\". The contribution of the paper is the (possibly incorrect) derivation of the clipping value that causes the least quantization error IF assumptions can be made about the distribution of the parameters (in a non-bayesian sense). Thus, the significance is low for both reasons.\\n\\nOne conceptual issue is the assumed relationship between quantization error and classification accuracy. The literature has shown that high quantization error does not necessarily mean low classification accuracy when using non-uniform quantization. The proposed clipping does not account for classification accuracy (on the training set), but I understand the motivation being that the training set is not available. \\n\\n1. There seems to be an error in the derivation of Eq (3): the first term should be $(x-sgn(x).\\\\alpha) = x+\\\\alpha$ for $x$ negative. Please comment on this.\\n\\n2. When solving the integrals, the authors simply pull the solution \\\"out of the hat\\\" and show that the derivative is the integrand. This is a very opaque presentation, as we cannot see how you solved the integral. What is C in $\\\\psi(x)$?\\n\\n3. The assumptions on the parameters are only valid for the particular model/dataset/precision. The assumption does not generalize arbitrarily. For example, models with quantized weights have bi-modal distributions. How would you clip the activations after e.g. a ReLu? This is without going into the weaknesses of the K-S test. \\n\\n4. Experiments do not show any comparison to the large body of prior work in this area. \\n\\n5. Page 4, para below (3), what is \\\"common additive orthogonal noise\\\"? You should explain or give intuition instead of simply referring to a different paper.\\n\\n6. In the uniform case, one would think f(x)=1/<range of the interval>=2\\\\alpha. Why is it 1/\\\\Delta?\\n\\n7. Section 4, range should be [-\\\\alpha, \\\\alpha] instead of [\\\\alpha, -\\\\alpha]? Since \\\\alpha is positive.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"review\", \"review\": \"This paper derives a formula for finding the minimum and maximum clipping values for uniform quantization which minimize the square error resulting from quantization, for either a Laplace or Gaussian distribution over pre-quantized value. This seems like too small a contribution to warrant a paper. I wasn't convinced that appropriate baselines were used in experiments. There were a number of statements that I believed to be technically slightly incorrect. There were also some small language problems (though these didn't hinder understanding).\", \"more_specific_comments\": \"\", \"abstract\": \"\\\"derive exact expressions\\\" -- these expressions aren't exact. they turn out to be based on a piecewise zeroth order Taylor approximation to the density.\", \"main_paper\": \"\\\"allow fit bigger networks into\\\" -> \\\"allow bigger network to fit into\\\"\\n\\\"that we are need\\\" -> \\\"that need\\\"\\n\\\"introduces an additional\\\" -> \\\"introduces additional\\\"\\nclippig -> clipping\\n\\nit's not clear a-priori that information loss is the property to minimize that maximizes performance of the quantized network.\\n\\n\\\"distributions of tensors\\\" -> \\\"distribution of tensor elements\\\"\\nthis comment also applies in a number of other places, where the writing refers to the marginal distribution of values taken on by entries in a tensor as the distribution over the tensor. note that a distribution over tensors is a joint distribution over all entries in a tensor. e.g. it would capture things like eigenvalues, entry-entry covariance, rather than just marginal statistics.\\n\\n\\\"than they could have by working individually\\\" -> \\\"than could have been achieved by each individually\\\"\\n\\nWhy the focus on small activation bit depth? I would imagine weight bit-depth was more important than activation bit depth. Especially since you're using ?32-bit? precision in the weight/activations multiplications, so activations are computed at a high bit depth anyways.\", \"table_1\": \"Give absolute accuracies too! Improvement relative to what baseline?\", \"sec_2\": \"sufficeint -> sufficient\\n\\\\citep often used when it should instead be \\\\citet.\\n\\\"As contrast\\\" -> \\\"In contrast\\\"\", \"section_3\": \"uniformity -> uniformly\\n\\nI don't believe the notion of p-value is being used correctly here w.r.t. the Kolmogorov-Smirnov test.\", \"figure_1\": \"The mean square error should never go to 0. This suggests something is wrong. If it's just a scaling issue, consider a semilogy plot.\", \"figure_2\": \"I'm unclear what baseline (no clipping) refers to in terms of clipping values. For uniform quantization there needs to be some min and max value.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A simple but not very convincing clipping method for activation quantization in deep networks\", \"review\": \"This paper empirically finds that the distribution of activations in quantized networks follows a Gaussian or Laplacian distribution, and proposes to determine the optimal clipping factor by minimizing the quantization error based on the distribution assumption.\\n\\nThe pros of the work are its simplicity and that the proposed clipping and quantization do not need additional re-training. However, while the key of this paper is to determine a good clipping factor, the authors use a uniform density function to represent the middle part of both Gaussian and Laplacian distributions, where the majority of data points lie, but exact computation for the tails of the distributions at both ends. Thus the computation of the quantization error is not quite convincing. Moreover, the authors do not compare with other recent works that also clip the activations, thus it is hard to validate the efficacy of the proposed method.\\n\\nFor the experiments, the authors mention that a look-up table can be pre-computed for fast retrieval of clipping factors given the mean and sigma of a distribution. However, the mean and sigma are continuous numbers, so how is the look-up table made? Moreover, how are the mean and std estimated for each weight tensor, and what is the complexity?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Thanks for clarification!\", \"title\": \"Thanks\"}",
"{\"title\": \"reply to explanation and comparison of results\", \"comment\": \"We are not suggesting a quantization approach at the network level. Rather, we try to minimize the quantization effect at the tensor level only. We claim that when tensor values exceed a certain threshold, they should be clipped. Our main result is an analytical formula for clipping these values that, depending on the statistical distribution of the tensor, finds the *optimal* threshold (with respect to mean-square-error).\\n\\nWe focus on clipping only (i.e., no re-training or fine-tuning). Hence, we compare against the standard integer quantization approach that avoids clipping (i.e., GEMMLOWP). This serves as the baseline for the comparison in Table 1. You mention 65.75% top-1 accuracy for Res18 using our analytical clipping. Without clipping you would have 53.2% top-1 accuracy. We also have a similar result for the VGG-16 model, where we show that just by clipping the activation tensors correctly, you could gain more than 40% accuracy improvement compared to the case of no clipping. \\n\\nFinally, we are fully aware of the recent works you mention (in fact, the PACT paper is cited and explained). Yet, these are in a completely different setting from our work. Both papers *learn* a good quantization through training. Our work is orthogonal and can work in synergy with these techniques. You can minimize the effect of quantization at the tensor level and, at the same time, compensate for quantization at the network level using training/fine-tuning. There are many other applications. For example, the rapid deployment of neural networks trained in full precision to low-precision accelerators without having the full datasets on which the networks operate.\\n\\nThe missing caption of subfigure 2f refers to Inception_v3.\"}",
"{\"title\": \"Explanation and comparison of results\", \"comment\": \"Hi, can you explain the numbers you present in Table 1? What do you compare to? Also, I haven't managed to find a comparison to any other quantization paper, nor the accuracy numbers you achieved for any network. I've run your code and obtained 65.75% top-1 and 86.70% top-5 for ResNet-18. However, recent work, such as PACT (https://arxiv.org/abs/1805.06085) and LQ-nets (https://arxiv.org/abs/1807.10029), achieves significantly higher results for much coarser quantization - 69+% top-1 at 4 bits for both activations and weights.\\n\\nP.S. The caption of subfigure 2f is missing.\"}"
]
} |
|
BJxh2j0qYm | Dynamic Channel Pruning: Feature Boosting and Suppression | [
"Xitong Gao",
"Yiren Zhao",
"Łukasz Dudziak",
"Robert Mullins",
"Cheng-zhong Xu"
] | Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss. | [
"dynamic network",
"faster CNNs",
"channel pruning"
] | https://openreview.net/pdf?id=BJxh2j0qYm | https://openreview.net/forum?id=BJxh2j0qYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BygsuPjSqE",
"BJlgEOGIl4",
"HyeKKRLrlN",
"SJgBir11l4",
"H1g7fwG3JE",
"SkgSXSGny4",
"B1l4hUboJE",
"B1e6h0GUy4",
"rJxv5KuXRm",
"SJevI4GXRQ",
"Bkx2UnWm0m",
"rye68ibXCm",
"rklHZy28TX",
"B1eUMSd16X",
"SJlk4ADkaQ",
"S1euMaPJ67",
"Bkgib72i2X"
],
"note_type": [
"comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1555572595305,
1545115688427,
1545068160548,
1544643997209,
1544460043076,
1544459549411,
1544390316308,
1544068789078,
1542846863136,
1542820943431,
1542818899774,
1542818644841,
1542008572620,
1541534989857,
1541533223155,
1541532944397,
1541288707245
],
"note_signatures": [
[
"~Nikolaos_Fragkoulis1"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper748/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper748/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper748/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper748/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/Authors"
],
[
"ICLR.cc/2019/Conference/Paper748/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"There is a similar (if not identical) work already published here (https://arxiv.org/abs/1701.05221). Please at least consider adding it to the references\", \"title\": \"Similar Work\"}",
"{\"title\": \"Network performance results as requested.\", \"comment\": \"We tested VGG-16 and ResNet-18 with FBS against their respective baselines; the experiments were repeated 1000 times and we recorded the average wall-clock time results for each model.\\n\\nThe VGG-16 baseline took on average 520.80 ms for each inference. FBS was applied to VGG-16 and reduced the amount of computation by a factor of 3.01x. Inference now took 175.13 ms, thus achieving a speedup of 2.97x (in terms of wall-clock time). Similarly, a model with 4.00x computation reduction took 142.17 ms, which translates to a 3.66x actual speedup. This means that the overhead of our PyTorch implementation is less than 10%.\\n\\nA line-by-line profiling of our implementation revealed that the overhead of the extra computations introduced by FBS in convolutional layers is fairly minimal (we have annotated the percentage of execution time of each component here: https://imgur.com/YVQormC ). We found that the excessive data movements we mentioned earlier contribute to the majority of the observed overhead, while actual computations introduced by FBS amount to only 3.0% of the total time required to compute the layers. As we have suggested, the data movements are entirely redundant due to API limitations.\\n\\nOur FBS-based ResNet-18 provided a 1.98x reduction in the amount of computation and took 63.73 ms for each inference, while the baseline required 101.82 ms, thus achieving a 1.60x real performance gain. We found that in addition to the overhead introduced by the FBS implementation above, the add operations for residuals cannot be accelerated in PyTorch for channel-wise sparse activations, and incur excessive copy operations as a result of the API limitations. Even with these limitations, the real speedup provided by FBS surpasses/matches the actual speedups of all other works compared in Table 1:\\n------------------------------------------------------------------ -------------- ------------- ------\\nMethod                                                              Top-5 error    Theoretical   Real\\n------------------------------------------------------------------ -------------- ------------- ------\\nSoft Filter Pruning (He et al., 2018)                               12.22%         1.72x         1.38x\\nDiscrimination-aware Channel Pruning (Zhuang et al., 2018)          12.40%         1.85x         1.60x\\nLow-cost Collaborative Layers (Dong et al., 2017)                   13.06%         1.53x         1.25x\\nFeature Boosting and Suppression (this work)                        11.78%         1.98x         1.60x\\n------------------------------------------------------------------ -------------- ------------- ------\\n\\nWe hope this answers your concern regarding the actual performance gains.\"}",
"{\"metareview\": \"The authors propose a dynamic inference technique for accelerating neural network prediction with minimal accuracy loss. The method is simple and effective. The paper is clear and easy to follow. However, the real speedup on CPU/GPU is not demonstrated beyond the theoretical FLOPs reduction. Reviewers are also concerned that the idea of dynamic channel pruning is not novel. The evaluation is on fairly old networks.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"borderline\"}",
"{\"title\": \"don't cherry pick\", \"comment\": \"Please report the wall-clock time running the *whole network* on VGG-16 and ResNet-18, rather than cherry picking a specific layer to show speedup. The last column of Table 1 is not \\\"speedup\\\", but \\\"FLOP reduction\\\".\"}",
"{\"title\": \"Thank you for your comments.\", \"comment\": \"We would like to thank the reviewer for the positive comments.\\nA comparison to AMC [1] is included in Table 2, it is difficult for us to compare to Netadapt [2] since the networks considered are different.\\nWe would like to point out that Dong et al. [3] considered spatial dynamic execution, which eliminates computations at a finer granularity and is thus harder to accelerate compared to our channel-wise dynamic execution. On a CPU, we recently found that a single layer using FBS can increase inference speed by 3.91x, given a theoretical speedup of 3.98x.\\n\\n[3] More is Less: A More Complicated Network with Less Inference Complexity, CVPR 2017, https://arxiv.org/pdf/1703.08651.pdf\"}",
"{\"title\": \"Thank you for your comments.\", \"comment\": \"Thanks for the correction, we will change this to a more precise statement: \\\"FBS can reduce the FLOPs of VGG-16 by 5x and ResNet-18 by 2x\\\".\\n\\nWe tested on CPU one layer of VGG-16 (the 2nd convolution layer) with FBS using the new Pytorch 1.0 (JIT enabled), and achieved 3.91x speedup in wall-clock time when the FBS density is set to 0.5 (which yields a theoretical speedup of 3.98x). FBS achieves a wall-clock time of 12.780ms and the original convolution takes 49.942ms. This minor overhead is mostly due to the excessive data movements to dynamically gather a subset of weight parameters that cannot be eliminated because of the API limitations. We will put the details of this wall-clock time test in Appendix with open source code if accepted.\\n\\nGiven that relatively large blocks of compute can be omitted, it is realistic to suggest that in this case, FLOP reduction will translate into wall-clock time savings. We foresee no particular problems in doing this but existing hardware and tool-chains may currently prevent the necessary optimisations. We would certainly agree that if the optimisations focused on eliminating computations at a finer granularity that actual gains may be difficult to obtain.\"}",
"{\"title\": \"misleading to report \\\"FLOP reduction\\\" as \\\"speedup\\\"\", \"comment\": \"It's misleading to the community to report \\\"FLOP reduction\\\" as \\\"speedup\\\". FLOP reduction doesn't translate to speedup on hardware. If the authors want to report the speedup, please report the wall-clock time to support the claim below: \\\"FBS can accelerate VGG-16 by 5\\u00d7 and improve the speed of ResNet-18 by 2\\u00d7\\\"\"}",
"{\"title\": \"further comments\", \"comment\": \"In the revision, the authors have made significant improvement over the original submission. I also appreciate that my main concerns regarding the original submission have been addressed.\"}",
"{\"title\": \"review comments on \\u201cDynamic Channel Pruning: Feature Boosting and Suppression\\u201d\", \"review\": \"This paper propose a channel pruning method for dynamically selecting channels during testing. The analysis has shown that some channels are not always active.\", \"pros\": [\"The results on ImageNet are promising. FBS achieves state-of-the-art results on VGG-16 and ResNet-18.\", \"The method is simple yet effective.\", \"The paper is clear and easy to follow.\"], \"cons\": [\"Lack of experiments on mobile networks like shufflenets and mobilenets\", \"Missing citations of some state-of-the-art methods [1] [2].\", \"The speed-up ratios on GPU or CPU are not demonstrated. The dynamic design of Dong et al., 2017 did not achieve good GPU speedup.\", \"Some small typos.\", \"[1] Amc: Automl for model compression and acceleration on mobile devices\", \"[2] Netadapt: Platform-aware neural network adaptation for mobile applications\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"We've updated a new revision of our submission.\", \"comment\": \"I hope this addresses weaknesses 2, 6 and 7 identified by your comments. We additionally included more comparisons against other works in Tables 1 and 2.\"}",
"{\"title\": \"Thank you for your comments.\", \"comment\": \"> \\\"the authors did not present a real-world application in\\n> which it is important to speed up by 2 or 3 times at a small \\n> cost, so it is hard to judge the real\\n> impact of the proposed method.\\\"\\n\\nOf course, all real systems are constrained by power and memory bandwidth. The proposed scheme offers very significant savings (2-3X in both compute and memory bandwidth) that would be beneficial in almost all scenarios, either to reduce power, increase performance or trade for better accuracy.\\n\\nAdditionally, we would like to point out that FBS works as a technique to accelerate network inference. Although it is entirely feasible to use it to accelerate training, we have not conducted relevant experiments.\"}",
"{\"title\": \"Reply to Reviewer 4\", \"comment\": \"Thank you for your comments.\\n\\n1. Re. motivation, to clarify we do increase performance as you state (2--5x) but in addition also make significant savings in terms of compute and memory bandwidth. These savings would be beneficial in almost all scenarios, either to reduce power, increase performance or trade for better accuracy. We have clarified this in our introduction.\\n\\n2. I think there is some misunderstanding here. By dynamically gating computation, FBS reduces both compute and memory requirements. We simply don't load/store the weights/activations for the suppressed channels. The newly added Table 3 quantifies these savings.\\n\\n3. We are working on generating data for newer models, but this might be limited by the amount of time available.\"}",
"{\"title\": \"Review for \\\"Dynamic Channel Pruning: Feature Boosting and Suppression\\\"\", \"review\": \"The authors propose a dynamic inference technique for accelerating neural network prediction with minimal accuracy loss. The technique prunes channels in an input-dependent way through the addition of auxiliary channel saliency prediction+pruning connections.\", \"pros\": [\"The paper is well-written and clearly explains the technique, and Figure 1 nicely summarizes the weakness of static channel pruning\", \"The technique itself is simple and memory-efficient\", \"The performance decrease is small\"], \"cons\": [\"There is no clear motivation for the setting (keeping model accuracy while increasing inference speed by 2x or 5x)\", \"In contrast to methods that prune weights, the model size is not reduced, decreasing the utility in many settings where faster inference and smaller models are desired (e.g. mobile, real-time)\", \"The experiments are limited to classification and fairly dated architectures (VGG16, ResNet-18)\", \"Overall, the method is nicely explained but the motivation is not clear. Provided that speeding up inference without reducing the size of the model is desirable, this paper gives a good technique for preserving accuracy.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"feature suppression to speed up training CNN\", \"review\": \"This manuscript presents a nice method that can dynamically prune some channels in a CNN network to speed up the training. The main strength of the proposed method is to determine which channels should be suppressed based upon each data sample without incurring too much computational burden or too much memory consumption. The good thing is that the proposed pruning strategy does not result in a big performance decrease. Overall, this is a nicely written paper and may be empirically useful for training a very large CNN. Nevertheless, the authors did not present a real-world application in which it is important to speed up by 2 or 3 times at a small cost, so it is hard to judge the real impact of the proposed method.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply to Reviewer 3 (1/2)\", \"comment\": \"Thanks for your review.\\n\\nWe would like to clarify some points to avoid misunderstandings.\\n\\nOur paper proposes a method called Feature Boosting and Suppression (FBS). FBS adds small auxiliary layers on top of each existing convolution. These auxiliary layers have trainable parameters that are optimized using SGD and control whether individual channels are evaluated at run-time or not. Using this conditional execution, the overall computation required is reduced significantly. Furthermore, the output of the auxiliary layers is used to scale each channel output. Channel saliencies are computed by the auxiliary layers on a per-input basis. FBS utilizes sparse input channels (from the previous dynamically pruned convolutional layer) to predict which channels to skip in the output channels, so that we have a large reduction in computations, as we exploit both input- and output-side sparsities.\\n\\nThe weaknesses identified by the reviewer (1, 3 and 4) do not hold for the approach described above. We will address each of these comments in turn.\", \"introductory_statement\": \"\\\"firstly predicts the importance of each channel and then use an affine function to amplify/suppress the importance of different channels\\\"\\n\\nThis statement is not true. To clarify, the amplification of channels is dependent on the input (Equation 5), whereas the suppression process effectively performs important channel selection (Equation 6). Both yield strictly non-affine transformations on the batch normalized channel output. \\n\\n1. \\\"The idea of dynamic channel pruning is not novel. In my opinion, this paper is only an extension to Network Slimming (Liu et al., 2017).\\nWhat is the essential difference between the proposed method and Network Slimming?\\\"\\n\\nThe Network Slimming (NS) procedure is applied statically and only prunes channels away. Our technique is applied at run-time and is input dependent. 
We prune channels away and boost important channels at run-time.\\n\\nWe consider our method, FBS, to be very different from Network Slimming. For each input image during inference, FBS predicts the relative importance of each channel, and selectively evaluates a subset of output channels that are important for the subsequent layer, given the activation of the previous layer. Different input images would therefore activate drastically different execution paths in the model.\\n\\nFigure 3b corroborates this observation, as the heat maps show that many channels demonstrate highly varying probabilities of being suppressed when shown images of different categories. Our work is more related to runtime neural pruning [2] and conditional computation [3], where channels are dynamically selected for evaluation in each convolution, yet [2], [3] and FBS use very different methods to achieve this goal. In contrast, NS does not employ dynamic execution, as the pruned channels are *permanently removed* from the model, resulting in a network structure that remains static for all inputs, in which some capabilities are permanently lost. \\n\\nIn addition, FBS preemptively steers feature attention: FBS not only uses the saliency metrics to predictively prune unimportant channels at run-time, but also further amplifies important channels. The non-linearity added to the network is conceptually similar to Squeeze-and-Excitation (SE) [1], as FBS captures inter-dependencies among input channels and adaptively recalibrates output features in a channel-wise fashion. Even without pruning, FBS can improve the baseline accuracies of CIFAR-10 and ImageNet models (Section 4.2), which is absent from static/dynamic channel pruning methods including NS, RNP, [4] and others.\\n\\nBecause of the above differences, FBS can achieve a much improved accuracy/compute trade-off when compared to other channel pruning methods.\\n\\n2. 
\\\"The writing and organization of this paper need to be significantly improved. There are many grammatical errors and this paper should be carefully proof-read.\\\"\\n\\nWe will complete another round of polishing to address any shortcomings. Could you suggest how/where the organization of the paper could be improved?\"}",
"{\"title\": \"Reply to Reviewer 3 (2/2)\", \"comment\": \"3. \\\"The authors argued that the importance of features is highly input-dependent. This problem is reasonable but the proposed method still cannot handle it.\\nAccording to Eqn. (7), the prediction of channel saliency relies on a data batch rather than a single data. Given different inputs in a batch, the selected channels should be different for each input rather than a general one for the whole batch. Please comment on this issue.\\\"\\n\\nThe prediction of channel saliency *does not* rely on a batch of data. In equation (7), x_(l-1) is the output of the (l-1)-th layer, which comprises C_(l-1) features, each with spatial dimensions H_(l-1) * W_(l-1), as defined in Section 3.1. Throughout this paper, x_l for all layers is a single input image, which consists of multiple channels. Equation (7) reduces each channel in an image to a scalar, which is then used to predict the output channel saliencies in equation (8). Although this process is identical for each input image, each evaluation of equation (8) may produce drastically different predicted channel saliencies dependent on the input image.\\n\\nWe would like to update this section to remove any sources of ambiguity; would it be possible for you to describe how our intended meaning was lost?\\n\\n4. \\\"The proposed method does not remove any channels from the original model. As a result, both the memory and the computational cost will not be reduced. It is confusing why the proposed method can yield a significant speed-up in the experiments.\\u201d\\n\\nIt is hopefully clear from previous comments that this is not the case.\\n\\nTypically, convolutional layers are stacked to form a sequential convolutional network. Prior to computing the costly convolution, FBS uses the input (or the output from the previous layer) to predict the saliencies of output channels of the costly convolution. 
If an output channel is predicted to have a zero saliency, the evaluation of this output channel can be entirely skipped, as the entire output channel is predicted to contain only zero entries.\\n\\nIn addition, each convolutional layer takes as its input the output of the previous layer. This input can have channel-wise sparsity (channels consisting of only zero entries), if the previous layer is a convolutional layer. It is clear that these inactive input channels can always be skipped when computing the convolution.\\n\\nThe input- and output-side sparsities therefore doubly accelerate the expensive convolution and thus achieve a huge reduction in compute. Such reduction in computation is also seen in [2], as it shares the same goal but uses an entirely different method.\\n\\n5. \\\"The authors only evaluate the proposed method on shallow models, e.g., VGG and ResNet18. What about the deeper model like ResNet50 on ImageNet?\\\"\\n\\nThe method we propose is a per-layer method, which should not make a difference when targeting deeper models. Unlike NS, we do not rank channel importance globally to produce pruning decisions. We are working on generating results on deeper models, but this might be limited by the amount of time available.\\n\\n6. \\\"It is very confusing why the authors only reported top-5 error of VGG. The results of top-1 error for VGG should be compared in the experiments.\\\"\\n\\nWe will update Table 2 to include top-1 errors. However, some works we compare to, e.g. He et al.'s channel pruning [4], may have missing top-1 errors as they were not reported.\\n\\n7. \\\"Several state-of-the-art channel pruning methods should be considered as the baselines, such as ThiNet (Luo et al., 2017), Channel pruning (He et al., 2017) and DCP (Zhuang et al., 2018).\\\"\\n\\nThank you for pointing out these works. These are all static techniques. We will be including them in our comparisons. 
In addition, it should be noted that Channel pruning [4] is already in our comparison of Table 2.\\n\\n\\nWe thank the reviewer for providing this review.\\n\\nWe are in the process of updating this paper, and will notify you by comment of the new revision and its changes.\\n\\n[1]: Squeeze-and-Excitation Networks, CVPR 2018, https://arxiv.org/abs/1709.01507\\n[2]: Runtime Neural Pruning, NIPS 2017, https://papers.nips.cc/paper/6813-runtime-neural-pruning\\n[3]: Conditional Computation in Neural Networks for Faster Models, ICLR 2016, https://arxiv.org/abs/1511.06297\\n[4]: Channel pruning for accelerating very deep neural networks, ICCV 2017, https://arxiv.org/abs/1707.06168\"}",
"{\"title\": \"Review comments on \\u201cDynamic Channel Pruning: Feature Boosting and Suppression\\u201d\", \"review\": \"Summary:\\n\\nThis paper proposed a feature boosting and suppression method for dynamic channel pruning. To be specific, the proposed method firstly predicts the importance of each channel and then use an affine function to amplify/suppress the importance of different channels. However, the idea of dynamic channel pruning is not novel. Moreover, the comparisons in the experiments are quite limited. \\n\\nMy detailed comments are as follows.\", \"strengths\": \"1. The motivation for this paper is reasonable and very important. \\n\\n2. The authors proposed a new method for dynamic channel pruning.\", \"weaknesses\": \"1. The idea of dynamic channel pruning is not novel. In my opinion, this paper is only an extension to Network Slimming (Liu et al., 2017). What is the essential difference between the proposed method and Network Slimming?\\n\\n2. The writing and organization of this paper need to be significantly improved. There are many grammatical errors and this paper should be carefully proof-read.\\n\\n3. The authors argued that the importance of features is highly input-dependent. This problem is reasonable but the proposed method still cannot handle it. According to Eqn. (7), the prediction of channel saliency relies on a data batch rather than a single data. Given different inputs in a batch, the selected channels should be different for each input rather than a general one for the whole batch. Please comment on this issue.\\n\\n4. The proposed method does not remove any channels from the original model. As a result, both the memory and the computational cost will not be reduced. It is confusing why the proposed method can yield a significant speed-up in the experiments.\\n\\n5. The authors only evaluate the proposed method on shallow models, e.g., VGG and ResNet18. What about the deeper model like ResNet50 on ImageNet?\\n\\n6. 
It is very confusing why the authors only reported top-5 error of VGG. The results of top-1 error for VGG should be compared in the experiments.\\n\\n7. Several state-of-the-art channel pruning methods should be considered as the baselines, such as ThiNet (Luo et al., 2017), Channel pruning (He et al., 2017) and DCP (Zhuang et al., 2018).\\n[1] Channel pruning for accelerating very deep neural networks. ICCV 2017.\\n[2] ThiNet: A filter level pruning method for deep neural network compression. ICCV 2017.\\n[3] Discrimination-aware Channel Pruning for Deep Neural Networks. NIPS 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJl2niR9KQ | Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer | [
"Hsueh-Ti Derek Liu",
"Michael Tao",
"Chun-Liang Li",
"Derek Nowrouzezahrai",
"Alec Jacobson"
] | Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow. | [
"adversarial examples",
"norm-balls",
"differentiable renderer"
] | https://openreview.net/pdf?id=SJl2niR9KQ | https://openreview.net/forum?id=SJl2niR9KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJlterMflE",
"HJxMD1N20Q",
"S1eSUQsK0X",
"BygeiOOKA7",
"rkl80bNdCX",
"H1gc6aNcaQ",
"S1xI5a4cTQ",
"ryeo7aN56X",
"SklLzoVcaX",
"B1lBHma23X",
"Hylylvwcn7",
"HyekoBIv37"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544852720869,
1543417689950,
1543250764988,
1543239832497,
1543156174156,
1542241729932,
1542241677541,
1542241571003,
1542241037974,
1541358397244,
1541203686942,
1541002647085
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper747/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper747/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper747/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper747/Authors"
],
[
"ICLR.cc/2019/Conference/Paper747/Authors"
],
[
"ICLR.cc/2019/Conference/Paper747/Authors"
],
[
"ICLR.cc/2019/Conference/Paper747/Authors"
],
[
"ICLR.cc/2019/Conference/Paper747/Authors"
],
[
"ICLR.cc/2019/Conference/Paper747/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper747/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper747/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper describes the use of differentiable physics based rendering schemes to generate adversarial perturbations that are constrained by physics of image formation.\\n\\nThe paper puts forth a fairly novel approach to tackle an interesting question. However, some of the claims made regarding the \\\"believability\\\" of the adversarial examples produced by existing techniques are not fully supported. Also, the adversarial examples produced by the proposed techniques are not fully \\\"physical\\\" at least compared to how \\\"physical\\\" adversarial examples presented in some of the prior work were.\\n\\nOverall though this paper constitutes a valuable contribution.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting contribution, although some concerns regarding the claims\"}",
"{\"title\": \"Re: Concerns with \\\"believability\\\" and other claims\", \"comment\": \"Indeed, current pixel-based attacks can fool classifiers with imperceptible perturbations. The magnitude of a perturbation is not the only factor that determines how realistic or plausible it is to occur in the real world. Figure 1 demonstrates, reductio ad absurdum, that very large pixel perturbations can be realistic if the perturbation is conducted in the physical parameter space (e.g., lighting). We have provided a visualization of small perturbations in Figure 12. Specifically, Figure 12 shows perturbations with the same \\\\ell_\\\\infty norm across columns and magnifies the perturbations at each row differently for visualization purposes. However, the structure of an imperceptible perturbation may still not correspond to any real-world scenario.\\n\\nOur claimed contribution is to construct adversarial examples through perturbing physical parameters of the image formation model. We leave physical world geometry attacks to future work as it involves a non-trivial computational fabrication engineering aspect.\\n\\nThanks, we've corrected the reference.\"}",
"{\"comment\": \"The adversarial examples created here with the differentiable renderer are certainly cool. However I have some concerns with the claimed contributions.\\n\\nFirst, a key claimed contribution is that of believability. To test this, the authors set a given \\\\ell_\\\\infty \\\\epsilon threat model, and generate adversarial examples with each method. This is a flawed experiment: by fixing a threat model, the compared against methods will use the entire allowed perturbation bound to create adversarial examples. All methods are minor variations of Projected Gradient Descent, which will, by the nature of the underlying algorithm, use the whole allowed perturbation budget (i.e. will produce perturbations that extend to the edges of the allowed \\\\ell_\\\\infty box). Therefore the experiment in Figure 1 shows nothing about the \\\"believability\\\" of perturbations produced with the various methods (the spaceship here could likely be misclassified with an imperceptible \\\\ell_\\\\infty norm perturbation generated with PGD).\\n\\nThe authors also claim that their method extends to create physical world adversarial examples, but only show this with adversarial lighting (not color, geometry, or any of the other parameters listed in Table 1) on a single example (oranges) in front of a single, uniformly black, backdrop, at a single angle.\\n\\nAlso, the citation for Athalye et al (listed as Athalye and Sutskever) is wrong; it should be:\\n\\n@misc{athalye2017synthesizing,\\n title={Synthesizing Robust Adversarial Examples},\\n author={Anish Athalye and Logan Engstrom and Andrew Ilyas and Kevin Kwok},\\n year={2017},\\n eprint={1707.07397},\\n archivePrefix={arXiv},\\n primaryClass={cs.CV}\\n}\", \"title\": \"Concerns with \\\"believability\\\" and other claims\"}",
"{\"title\": \"Reviewer 3 comments\", \"comment\": \"Thank you for the rebuttal. I think my initial rating is still relevant even after the revisions of the authors.\"}",
"{\"title\": \"Thanks for the replies\", \"comment\": \"We changed the color of the updated text from green back to black as tomorrow is the end of the revision period. Thank you for all the replies.\"}",
"{\"title\": \"Authors' Reply\", \"comment\": \"# Comparisons with state-of-the-art\\nWe include direct comparisons to state-of-the-art differentiable renderers in Section 2 and Section 3.1. These clearly demonstrate our superiority with respect to speed and memory.\\n\\nConducting direct comparisons to state-of-the-art adversarial attacks is less well-posed. In the revision, we have expanded our feature comparison with a new table in Section 2. See further discussion in the Revision Summary post above.\"}",
"{\"title\": \"Authors' Reply\", \"comment\": \"# Comparisons\\nSee the Revision Summary post regarding a new comparison table.\\n\\n# Image-level perturbations\\nWe have toned down our statements in the introduction.\"}",
"{\"title\": \"Authors' Reply\", \"comment\": \"# Simulation for performance enhancement\\nWe thank the reviewer for pointing out related papers on this topic. They led us to many papers that demonstrate that models trained on synthetic data can outperform those trained on real data alone for real-world tasks. These references further strengthen our case for moving beyond the pixel-ball norms (Section 2, 5, and 6; highlighted in green).\\n\\n# Adversarial training\\nOur paper focuses on how to create adversarial attacks beyond the pixel norm-ball using physical parameters via a novel differentiable renderer. In the revision, we have improved the description of our preliminary application of this insight to adversarial training (a replicable description is now provided in Appendix F). A more exhaustive study of adversarial training is left as future work (additional discussion in Section 6).\"}",
"{\"title\": \"Revision Summary\", \"comment\": \"Thank you for your helpful comments and enthusiasm. In the revised document, we highlight all the changes in the green text. Our major changes are:\\n# Add references that use simulation to enhance network performance on real-world tasks (Section 2)\\n# Add detail of the adversarial training (Appendix F)\\n# Add future extension for the adversarial training (Section 6)\\n# Add a table comparison with previous non-image based adversarial attacks (Section 2)\\n# Tone down the argument on image-based adversarial attacks (Section 1)\\n# Typographical and reference issues\\n\\n# Comparison Feature Table (R1, R3)\\nWe have included a new feature comparison table in Section 2 highlighted in green. This table shows that while [Athalye 2017] generates adversarial colors on the surface geometry, that method cannot compute adversarial examples by perturbing the physical parameters we are focusing on (lighting and geometry). Therefore, our methods are complementary. \\nMeanwhile, [Zeng 2017] requires a non-trivial training phase to learn a proxy renderer. This training requires a substantial amount of data. Further, this data should be representative of scenes that will be witnessed at runtime, otherwise training-bias will occur. Even assuming high-quality training, the method of [Zeng 2017] still takes orders of magnitude longer to compute adversarial examples (12 minutes reported in [Zeng 2017] versus a few seconds using our method).\"}",
"{\"title\": \"The paper describes the use of differentiable physics based rendering schemes to generate adversarial perturbations that are constrained by physics of image formation. The paper demonstrates how data augmentation using the scheme can improve robustness of classifiers in a limited experimental setting.\", \"review\": \"Quality of the paper: The paper is quite clear on the background literature on adversarial examples, physics based rendering, and the core idea of generating adversarial perturbations as a function of illumination and geometric changes.\", \"originality_and_significance\": \"The idea of using differentiable renderers to produce physically consistent adversarial perturbations is novel.\", \"references\": \"The references in the paper, given its scope, are fine. It is recommended to explore references to other recent papers that use simulation for performance enhancement in the context of transfer learning, performance characterization (e.g. veerasavarappu et al in arxiv, WACV, CVPR (2015 - 17))\", \"pros\": \"Good paper, illustrates the utility of differentiable rendering and simulations to generate adversarial examples and to use them for improving robustness.\", \"cons\": \"The experimental section needs to be extended and the results are limited to simulations on CIFAR-100 and evaluation on lab experimental data. Inclusion of images showing CIFAR-100 images augmented with random and adversarial lighting would have been good. The details of the image generation process for that experiment are vague and not reproducible.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, but please address questions\", \"review\": \"The paper demonstrates a method for constructing adversarial examples by modifications or perturbations to physical parameters in the scene itself---specifically scene lighting and object geometry---such that images taken of that scene are able to fool a classifier. It achieves this through a novel differentiable rendering engine, which allows the proposed method to back-propagate gradients to the desired physical parameters. Also interesting in the paper is the use of spherical harmonics, which restrict the algorithm to plausible lighting. The method is computationally efficient and appears to work well, generating plausible scenes that fool a classifier when imaged from different viewpoints.\\n\\nOverall, I have a positive view of the paper. However, there are certain issues below that the authors should address in the rebuttal for me to remain with my score of accept (especially the first one):\\n\\n\\n- The paper has no discussion of or comparisons to the work of Athalye and Sutskever, 2017 and Zeng et al., 2017, except for a brief mention in Sec 2 that these methods also use differentiable renderers for adversarial attacks. These works address the same problem as this paper---computing physically plausible adversarial attacks---and by very similar means---back-propagation through a rendering engine. Therefore it is critical that the paper clarifies its novelty over these methods, and if appropriate, include comparisons.\\n\\n- While the goal of finding physically plausible adversarial examples is indeed important, I disagree with the claim that image-level attacks are \\\"primarily tools of basic research, and not models of real-world security scenarios\\\". In many applications, an attacker may have access to and be able to modify images after they've been captured and prior to sending them through a classifier (e.g., those attempting to detect transmission of spam or sensitive images). 
I believe the paper can make its case about the importance of physical adversarial perturbations without dismissing image-level perturbations as entirely impractical.\\n\\n- The Athalye 18 reference noted in Fig 1 is missing (the references section includes the reference to Athalye and Sutskever '17).\\n\\n===Post-rebuttal\\n\\nThanks for addressing my questions. With the new comparisons and discussions wrt the most relevant methods, I believe the contributions of the paper are clearer. I'm revising my score from 6 to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but lacks comparison with state of the art\", \"review\": \"Summary:\\nThis work presents a method to generate adversary examples capable of fooling a neural network classifier. Szegedy et al. (2013) were the first to expose the weakness of neural networks against adversarial attacks, by adding human-imperceptible noise to images to induce misclassification. Since then, several works tackled this problem by modifying the image directly in the pixel space: the norm-balls convention. The authors argue that this leads to non-realistic attacks and that a network would not benefit from training with these adversarial images when performing in the real world. Their solution and contributions are parametric norm-balls: unlike state-of-the-art methods, they perform perturbations in the image formation space, namely the geometry and the lighting, which are indeed perturbations that could happen in real life. For that, they defined a differentiable renderer by making some assumptions to simplify its expression compared to solving a light transport equation. The main simplifications are the direct illumination to gain computation efficiency and the distant illumination and diffuse material assumptions to represent lighting in terms of spherical harmonics as in Ramamoorthi et al. (2001), which require only 9 parameters to approximate lighting. This allows them to analytically differentiate their loss function with respect to the geometry and lighting and therefore generate their adversary examples via gradient descent. They show that their adversary images generalize to other classifiers than the one used (ResNet). They then show that injecting these images into the training set increases the robustness of WideResNet against real attacks. 
These real attack images were taken by the authors in a laboratory with varying illumination.\", \"strength\": [\"The proposed perturbations in the image formation space simulate the real life scenario attacks.\", \"The presented results show that the generated adversary images fool not only the classifier used to compute the loss, but also new classifiers (different from the one used to compute the loss). As a consequence the generated adversary images increase the robustness of the considered classifier.\", \"Flexibility in their cost function allows for diverse types of attacks: the same modified geometry can fool a classifier in several views, either into detecting the same object or detecting different false objects under different views.\"], \"major_comments\": [\"Method can only compute synthetic adversary examples, unlike state-of-the-art.\", \"The main contribution claimed by the author is that their perturbations are realistic and that it would help better increase the robustness of classifiers against real attacks. However, they do not give any comparison to the state-of-the-art methods as is expected.\"], \"minor_comments\": [\"Even if the paper is well written, there are still some typos.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Hygn2o0qKX | Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience | [
"Vaishnavh Nagarajan",
"Zico Kolter"
] | The ability of overparameterized deep networks to generalize well has been linked to the fact that stochastic gradient descent (SGD) finds solutions that lie in flat, wide minima in the training loss -- minima where the output of the network is resilient to small random noise added to its parameters.
So far this observation has been used to provide generalization guarantees only for neural networks whose parameters are either \textit{stochastic} or \textit{compressed}. In this work, we present a general PAC-Bayesian framework that leverages this observation to provide a bound on the original network learned -- a network that is deterministic and uncompressed. What enables us to do this is a key novelty in our approach: our framework allows us to show that if on training data, the interactions between the weight matrices satisfy certain conditions that imply a wide training loss minimum, these conditions themselves {\em generalize} to the interactions between the matrices on test data, thereby implying a wide test loss minimum. We then apply our general framework in a setup where we assume that the pre-activation values of the network are not too small (although we assume this only on the training data). In this setup, we provide a generalization guarantee for the original (deterministic, uncompressed) network, that does not scale with product of the spectral norms of the weight matrices -- a guarantee that would not have been possible with prior approaches. | [
"generalization",
"PAC-Bayes",
"SGD",
"learning theory",
"implicit regularization"
] | https://openreview.net/pdf?id=Hygn2o0qKX | https://openreview.net/forum?id=Hygn2o0qKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJxL1hieSV",
"BkxwTdDAJE",
"HJxnRD0pC7",
"HJgmILA607",
"BygQ30jpRm",
"r1lBxeop0X",
"HklxW5QoAQ",
"HJloFLguCX",
"B1eHBLxOAm",
"rkgv9MsD0Q",
"ryxUAhWDAm",
"SylZrfAXC7",
"SygikY27Am",
"BygoO_mfAQ",
"ByePH_GzR7",
"BJxhNVMz07",
"SygAMq2eCm",
"SJxW0Xje0X",
"Byeh4EbxCX",
"SJeNlQZlCQ",
"B1gAw-ZxAX",
"H1eypq036Q",
"rkxgiN236m",
"HJxRVajna7",
"r1eCm0tqTQ",
"B1x5VnK5T7",
"BJxLnKFq67",
"Hyxu-tF9TQ",
"rkgCLsBDT7",
"Bkgs5rO8TX",
"H1l1xiHMpQ",
"BJxgsmSfa7",
"ByxPcMBGam",
"H1gqC-a3hQ",
"SkxjMyes3m",
"H1eo-DD537"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1550003166043,
1544612031309,
1543526355572,
1543525962524,
1543515819086,
1543512044950,
1543350775911,
1543140995283,
1543140925092,
1543119502832,
1543081165947,
1542869561145,
1542863074623,
1542760562834,
1542756415204,
1542755379858,
1542666774489,
1542661064785,
1542620212417,
1542619883913,
1542619494511,
1542412983197,
1542403223642,
1542401333751,
1542262309589,
1542261809810,
1542261166434,
1542260991819,
1542048597766,
1541993875356,
1541720806671,
1541718935559,
1541718671113,
1541358034285,
1541238546716,
1541203714531
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/Authors"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper746/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Updates in Camera Ready\", \"comment\": [\"In order to make the discussion of our framework in Section 3 simpler, we got rid of some notation. The earlier version of our paper had some functions \\\"T_1, T_2, ... T_r\\\" and a constraint on them, which are no longer present.\", \"We have also improved our ReLU network bound by a factor of depth, D. In the earlier version, we had generalized O(D^2) noise-resilience related conditions from training data to test data, but now we generalize only O(D) such conditions, which helps us save a factor of D. Note that this update does not affect the structure of the proofs of generalization because we have abstracted most of it in Theorem 3.1. The only major change is in how we instantiate our framework for ReLU networks:\", \"we add some extra conditions on the spectral norms of the Jacobians, and in \\\"Main Lemma\\\" Lemma E.1 we additionally analyze how these spectral norms respond to parameter perturbations.\"]}",
"{\"metareview\": \"Existing PAC-Bayes analysis gives generalization bounds for stochastic networks/classifiers. This paper develops a new approach to obtain generalization bounds for the original network, by generalizing the noise-resilience property from training data to test data. All reviewers agree that the techniques developed in the paper (namely Theorem 3.1) are novel and interesting. There was disagreement between reviewers on the usefulness of the new generalization bound (Theorem 4.1) shown in this paper using the above techniques. I believe the authors have sufficiently addressed these concerns in their response and updated draft. Hence, despite the concerns of R3 on the limitations of this bound and its dependence on pre-activation values, I agree with R2 and R4 that the techniques developed in the paper are of interest to the community and deserve publication. I suggest the authors keep the comments of R3 in mind while preparing the final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"ICLR 2019 decision\"}",
"{\"title\": \"Thanks again\", \"comment\": \"Thanks for the prompt heads up. (Almost deleted it!)\\n\\nI guess if the comment was posted as a response to a private comment, it doesn't show up. This was the case with our \\\"Added plots of the bounds -- our bound works better for large D, small H\\\" comment which was originally not public.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for the clarification -- that was pretty confusing!\\n\\nWe have deleted the above comment (noting that we would love to respond to and discuss them if the reviewer wants to request a clarification from us). \\n\\nPlease let us know if \\\"Response to Reviewer 3's comment from 29 Nov (that is missing here?)\\\" needs to be deleted.\"}",
"{\"title\": \"Response to Reviewer 3's comment from 29 Nov (that is missing here?)\", \"comment\": \"Thanks for the authors' feedback. It is great to discuss the problems.\\n\\nAs I have discussed with the authors, I do think that the deterministic PAC-Bayesian bound itself may be of interest if one can apply it to derive a stronger generalization bound. If the authors can demonstrate such superiority of the deterministic PAC-Bayesian bound by another example, I will further appreciate this result. \\n\\nHowever, my concern is that the currently derived theoretical result is not easy to interpret and there are quantities that heavily depend on empirical values (which can be very large). The product of norms may not be good, but it provides an explicit way to control the capacity of the networks so that we can have guaranteed bounds. Also, as I mentioned earlier, empirical studies have already shown that by explicitly controlling the spectral norms of the weights to be (nearly) 1, the performance of the network is not affected, so the product of the spectral norms is not an issue (i.e., close to 1). I am not sure how the pre-activations will behave in such scenarios, but it seems highly likely that they are still large. Removing the product of norms and introducing some empirical quantities may not always be good, especially since such quantities are very sensitive to data and can result in even worse bounds than the product of norms. \\n\\nIn summary, I do respect that the authors provide a different angle from which to view the problem. On the other hand, I do think that what is needed for the generalization bound of neural nets is not a new result that can be vacuous and cannot be guaranteed to push the edge of better understanding/interpreting the bound. 
I have updated my score to reflect this concern.\\n\\n=============================\\nOUR RESPONSE\\n\\nWe thank the reviewer for their response, for increasing their score and for appreciating our new perspectives on the problem.\\n\\n\\n>>>> Also as I mentioned earlier, empirical studies have already shown that by explicitly controlling the spectral norms of the weights to be (nearly) 1, the performance of the network is not affected so that the product of the spectral norms is not an issue (i.e., close to 1). \\n\\nWe apologize for repeating ourselves a bit here, but we are in disagreement with your point that the existence of spectral-norm-controlled networks makes our bounds and our claimed conceptual contributions & specific numerical improvements less interesting. If we understand your argument right, this argument is similar to saying that \\\"all extremely large vacuous bounds on extremely overparameterized networks are less interesting because there are relatively smaller overparameterized networks that generalize almost as well and on which a VC dimension bound would be smaller than the large vacuous bounds on the larger networks.\\\" The fact that extremely overparameterized networks exist and generalize well demands theoretical explanation, and this question is independent of other networks that may be either smaller or whose norms may be controlled explicitly. \\n\\n>>>> \\\"The product of norms may not be good, but it provides an explicit way to control the capacity of the networks so that we can have guaranteed bounds\\\"\\n\\nIt is not clear to us why the quantities in our bound \\\"can't\\\" be explicitly controlled. 
During training, one could potentially add regularizers that minimize the norms of the layers' outputs and the Jacobian norms of the layers, and maximize the pre-activation values.\\nOf course, this may all be highly non-trivial and way beyond the scope of the paper, but we want to establish that our quantities are in no way different from the spectral norms of the matrices in terms of how and whether they \"can be controlled\" or not. For a better comparison, we believe the quantities in our bound are just as \"controllable/optimizable\" as the quantities in Arora et al.\\n\\nBut more importantly, even if it is the case that our quantities can somehow not be controlled, we believe that evaluating the quality of a generalization bound in terms of \"does it contain quantities that can be explicitly controlled?\" is an orthogonal goal to the theoretical question of \"what properties of deep networks -- trained with SGD, without any explicit regularization/norm control -- will help us understand why they generalize well?\" \\n\\n>>>> \"If the authors can demonstrate such superiority of the deterministic PAC-Bayesian bound by another example, I will further appreciate this result. \"\\n\\nWe understand and appreciate your request. We'd love to think about this to improve future versions of this paper. But we're afraid there's not much time left in the rebuttal period for us to provide a concrete answer, nor do we think we have the option to update the paper at this point.\"}",
"{\"title\": \"my concern about the significance of the result\", \"comment\": \"Thanks for the authors' feedback. It is great to discuss the problems.\\n\\nAs I have discussed with the authors, I do think that the deterministic PAC-Bayesian bound itself may be of interest if one can apply it to derive a stronger generalization bound. If the authors can demonstrate such superiority of the deterministic PAC-Bayesian bound by another example, I will further appreciate this result. \\n\\nHowever, my concern is that the currently derived theoretical result is not easy to interpret and there are quantities that heavily depend on empirical values (which can be very large). The product of norms may not be good, but it provides an explicit way to control the capacity of the networks so that we can have guaranteed bounds. Also, as I mentioned earlier, empirical studies have already shown that by explicitly controlling the spectral norms of the weights to be (nearly) 1, the performance of the network is not affected, so the product of the spectral norms is not an issue (i.e., close to 1). I am not sure how the pre-activations will behave in such scenarios, but it seems highly likely that they are still large. Removing the product of norms and introducing some empirical quantities may not always be good, especially since such quantities are very sensitive to data and can result in even worse bounds than the product of norms. \\n\\nIn summary, I do respect that the authors provide a different angle from which to view the problem. On the other hand, I do think that what is needed for the generalization bound of neural nets is not a new result that can be vacuous and cannot be guaranteed to push the edge of better understanding/interpreting the bound. I have updated my score to reflect this concern.\"}",
"{\"title\": \"Summary of discussions with Reviewer 3\", \"comment\": \"Over the course of this discussion we've done our best to address the different concerns raised by Reviewer 3. We think it'll be useful to have a quick summary of these. We thank them for their response so far and hope to continue the conversation until the rebuttal deadline so that as many of their concerns are addressed as possible.\\n\\n=========\\nSummary of their Nov 2 comment and our Nov 8 response\\n=========\", \"concern\": \"Demonstrate the superiority of the deterministic PAC-Bayesian bound by another example\", \"our_response\": \"This will certainly help improve future versions of this paper and we'll work on it. But we don't have the option of updating the paper, or much time left in the rebuttal period to think about this to provide a concrete answer.\"}",
"{\"title\": \"Part 2/2: Subjective concerns about our contributions and their significance\", \"comment\": \"Deriving a generalization bound on the original network is important as bounds on modified networks have limited explanatory power. That has been the main premise and motivation of this paper, and we are happy to learn about your agreement with us on this!\\n\\nWe are also glad you effectively agree that a comparison with [1] is unfair.\\n============\\nTurning to the subjective points about your claim of the paper, we believe it is simplistic to state that \\\"the major claim of this paper is about the generalization bound for neural nets rather than the deterministic PAC-Bayesian bound\\\".\", \"the_claim_of_the_paper_is_two_fold\": \"informally, \\\"a) here is a new method to use train-time noise-resilience of the network to derive a bound on the original network by generalizing noise-resilience and b) here's one particular way of characterizing noise-resilience (in terms of Jacobians and pre-activations), and generalizing it gives us spectral-norm-independent bounds; additionally, here's a particular regime where our bound can do better despite dependence on pre-activations.\\\" The claim of the paper is not \\\"here's a bound on the original network\\\" (which would only be Theorem 4.1).\\n\\nWhile the dependence on the pre-activations is something you find bothersome -- and we do agree that is very, very reasonable -- the limitation of the dependence on pre-activations in Thm 4.1 is a limitation in how we characterize noise-resilience and not in how we generalize noise-resilience. \\n\\nYou might still ask \\\"why is 'generalizing noise-resilience' interesting? 
Why should I care about it if at this point, I do not know if it can help me provide stronger bounds for \\\"practically relevant\\\" deep networks (i.e., large H, not so large D)?\\\" \\n\\nFirst, while it is true that we do not have stronger bounds for (large H, small D) networks, the theoretical question of \\\"why do overparametrized networks generalize well?\\\" applies even in the (small H, large D) regime.\\n\\nNext, most of the really strong (both non-vacuous and vacuous) bounds that we know so far apply only to modified networks. A BIG gap in these bounds is essentially about how to carry over the benefits of these bounds to the original network. Unfortunately, it might not be obvious how \\\"big\\\" a gap this is, because to the best of our knowledge, research so far has not explicitly focused on closing this gap. We believe that closing the gap and providing a bound on the original network is a highly significant and non-trivial pursuit, as otherwise these existing papers would have achieved that. \\n\\nSo far, it seems like one has had to somehow modify the network -- either by dropping/modifying many of its parameters [1], or by adding noise to reduce the dependence of the parameters on the training data, or by doing both! [2,3] -- thereby \\\"cheating\\\" the actual question at hand about the original network, only to provide a strong generalization bound on a modified network. Our paper fills this significant conceptual gap by providing the idea & specific technique of generalizing noise-resilience (Thm 3.1) and further illustrating its promise by showing how it can extend the benefits of noise-resilience to the original network's bound in a specific case -- even if it may not be a practically popular case. 
\\n\\n\\nEffectively, we provide a novel conceptual answer to a big piece in the puzzle and clearly demonstrate its benefits in a specific regime -- we think this will be valuable to the community and therefore worth publishing. Furthermore, our conceptual answer is quite general [i.e., Thm 3.1 is a general framework] and might inspire researchers to think about ways in which the multitudes of existing bounds on modified networks can be extended to their original networks.\\n\\n\\n\\n\\n\\n[1] Arora et al., Stronger generalization bounds for deep nets via a compression approach\\n[2] Zhou et al., Compressibility and Generalization in Large-Scale Deep Learning\\n[3] Dziugate and Roy, Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data\"}",
"{\"title\": \"Part 1/2: Factual concerns\", \"comment\": \"Thank you for the detailed response.\\n\\nBelow we first address the factual concerns you have..\\n\\n===================\\nIn your review you say \\\"you do not think the derived generalization bound is tighter than existing ones (e.g., [1,2])\\\", we suppose this is a typo and you mean [2,3]? We've compared our results only with [2,3]; as we said a comparison with [1] is extremely unfair. \\n=====================\", \"on_your_factual_concerns_about_our_plots\": \"While it may not be visually apparent, in Figure 2 (a), the maximum - minimum y value of the blue line is 11.66 - 8.9 = 2.28 while for the line corresponding to [3] is 7.58-3.75 = 3.83. (Note that the y value corresponds to the log of the bound). The amount by which our bound increases with depth is definitely smaller than the amount by which [2,3] increase; even a seemingly small difference in the rates of the increase results in an exponential difference of the actual bound. For these two lines (not the hypothetical versions!), the rates translate to 1.57^D vs 2.15^D specifically and we have mentioned this in the paper. Furthermore, Fig 2 (b) clearly demonstrates the tipping point where ours improves over [2,3]. We hope this clears up any question about the vagueness/validity of our claim that for large D and small H our bound does better. \\n\\nNext, the hypothetical versions of our bound are plotted for the sake of comparison with our own bound to demonstrate that the pre-activation values are indeed the limiting factor in our bound. In the discussion in the paper which begins \\\"We also plot hypothetical variations of our bound...\\\" we clearly state\\n\\n \\\".... perform orders of magnitude better than our actual bound (note that these two hypothetical bounds do not actually hold good) ... 
This indicates that the only bottleneck in our bound comes from the dependence on the smallest pre-activation magnitudes, and if this particular dependence is addressed, our bound has the **potential** to achieve tighter guarantees for even smaller D such as D = 8.\\\" \\n\\nWe have been careful and transparent in presenting these hypothetical variations and made sure not to draw any explicit comparisons with [2,3] here. \\n\\nIn short, we have NOT made any unfair comparisons!\\n\\n============\\n\\nThe point about the effectiveness of constrained spectral norm sounds quite interesting! Thanks for sharing it.\\n\\nHowever, we *strongly disagree* that it makes our result seem any less interesting: the fact that such a constrained-spectral-norm scenario works in practice, does not void the theoretical question of \\n\\\"What is a generalization bound on deep networks where the spectral norm each matrix has not been constrained to be 1 and typically lies around 2.1-3?\\\". The fact that our bound might show no improvements in your scenario does not invalidate whatever claim we make about (small H, large D, unconstrained spectral norm) \\n\\nWe understand that it is a worthwhile exercise to compare the polynomial dependence on depth/width and we agree that our bound has worse polynomial dependence on depth if we ignore the spectral norm terms. But it is not clear to us, from a theoretical point of view, why one would choose to ignore the existence of an exponential depth factor, and any possible improvement over that factor at the cost of extra polynomial dependence. \\n\\n===============\"}",
"{\"title\": \"Still not clear why the proposed bound is better than existing ones\", \"comment\": \"Thanks for the authors\\u2019 update and clarification. I do agree that the result that states the bound in terms of the original network is important (unlike [1]), and the derived deterministic PAC-Bayesian type of generalization bound may be of independent interest. But since the major claim of this paper is about the generalization bound for neural nets rather than the deterministic PAC-Bayesian bound, I tend to judge from the viewpoint of the former rather than the latter. I do not think the derived generalization bound is tighter than existing ones (e.g., [1,2]) in interesting/practical settings.\\n\\n1. The network with a small width is not an interesting setting in general. Both practice and recent theoretical efforts show that over-parameterization is more interesting in general, which can help both optimization and generalization.\\n\\n2. The claim that the derived result performs better with increasing depth is too vague to verify from the experimental results (e.g., Fig 2). It is ok to have the 5% and median plots as a way to see how the bound performs in the non-worst-case scenarios, but it is not fair to compare with [1,2]. I think only looking at the general bound (e.g., blue line in Fig. 2) is a fair game. There is no significant trend that the derived bound increases more slowly for larger values of depth compared with [1,2]. \\n\\nOn the other hand, if I understand it correctly, the numerical results are obtained when there are no explicit constraints on the weight matrices. The product of norms is indeed an issue in this case. However, it has been shown that using unit spectral norm weight matrices has as good empirical performance as those without such constraints in real tasks [4,5] (they have orthogonal weights). 
In the latter case, the product of spectral norms is simply 1, where I believe the bounds of [2,3] can be significantly lower without sacrificing performance. It is not clear how the pre-act values will differ then, but it seems they will still be significantly larger than 1. In addition, when we only compare the polynomial dependence of the bound on the depth and width, the derived bound has a universally worse dependence on depth, and the dependence on width is better only when the pre-act values are large over the entire parameter space and data (which seems highly unlikely in practice). \\n\\n[1] Arora et al. Stronger generalization bounds for deep nets via a compression approach, 2018. \\n[2] Bartlett et al. Spectrally-normalized margin bounds for neural networks, 2017.\\n[3] Neyshabur et al. A PAC-bayesian approach to spectrally-normalized margin bounds for neural networks, 2018.\\n[4] Xie et al. All you need is beyond a good init: Exploring better solution for training extremely deep convolutional neural networks with orthonormality and modulation, 2017.\\n[5] Huang et al. Orthogonal weight normalization: Solution to optimization over multiple dependent Stiefel manifolds in deep neural networks, 2017.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for increasing your score and for taking into consideration our discussion with Reviewer 4! We thank Reviewer 4 too for their constructive feedback and active interest in helping us improve the quality of the paper.\"}",
"{\"title\": \"a) Unfair to compare with compressed network bound and b) despite pre-activations, our bound performs better for large D, small H\", \"comment\": \"Dear Reviewer,\\n\\nThanks for the response. We understand your concern is about i) a lack of comparison with Arora et al., and ii) how big the numerical value of our bound can be in comparison with Arora et al., and about the larger explicit polynomial dependence on depth. We have two concrete points to address your concern and we hope it helps you appreciate our result better:\\n\\n1. First, we would like to remind you, as we have stated at multiple points throughout our paper and in our earlier responses here, the bound of Arora et al., is NOT on the original network but on a compressed network (as has been noted by them in Remark (1) under page 4 of their arxiv version https://arxiv.org/pdf/1802.05296.pdf) \\nWhile they introduce a lot of interesting noise-resilience properties and show how a network can be compressed using those properties, their final bound which is small, holds only for the compressed network. Extending the benefits of noise-resilience in the form of a generalization bound on the original network is another non-trivial part of the puzzle, which is what we accomplish. It would be quite unfair to compare our bound with their bound on a compressed network because -- as we have stated everywhere -- our goal is to be able to say something about the original network. \\n\\nThe reason we care about a bound on the original network and not on the compressed network is that a bound on the compressed network could potentially tell us very, very little about the original network. For example, one can provide a compression bound by simply getting rid of all the parameters in the original network, and training a much smaller network from scratch on the given training dataset. 
Of course, a generalization bound on the smaller network will be small; but does it say anything at all about the original network? \\n\\n\\n2. We have been careful in stating everywhere in our paper that the 5% bound and median bound are just hypothetical quantities. Our main claim through the experiments is that our bound has better asymptotic behavior w.r.t depth -- at least for networks of small width -- as is evident from the reported slope of our actual bound named \\\"Ours\\\" vs Neyshabur+ '17 and Bartlett+ '17 Figure 2 a). In fact, we report the actual value of our bound vs these bounds for a really deep network and show that across multiple runs, the distribution of our bound concentrates over smaller quantities (Figure 2 b). Essentially, we have identified a regime (large D, small H) where, DESPITE the dependence on pre-activation values (which needs to be improved) our bound -- not just the hypothetical variations -- does better than existing bounds on the original network in practice. We hope that the existence of this regime in practice helps you appreciate the usefulness of our approach of generalizing noise-resilience, and the promise it holds in terms of providing bounds on the original network. \\n\\nThe reason our bound does better in this regime is that when H is small, the pre-activations tend to be large enough, and more importantly, when D is large, the product of spectral norms is exponentially larger than our terms including the extra D^2 in our bound. \\n\\nWe'd love to know if this addresses your concerns, or if you have further questions.\"}",
"{\"title\": \"how to compare this bound with previous results\", \"comment\": \"Thanks for the authors' update.\\n\\nI still do not quite understand what benefit this new result provides compared with existing ones. For example, Neyshabur\\u201918 and Bartlett\\u201917 have a bound of the order (spectral norm product)*sqrt(D^3 H rank/m), ignoring log factors, and Arora'18 has a bound of the order (max function output)*sqrt(D^3 H^2 /m). This paper (Theorem 4.1) has a bound of the order sqrt(D^7 H max(1/(H pre-act^2), max Jacobian norm)/m). It seems that the order 1/pre-act can be even significantly larger than the spectral norm product and the max function output, which leads to an overall larger bound than existing ones. The empirical result of Arora'18 is not provided; it should be a lot better than Neyshabur\\u201918 and Bartlett\\u201917, and hence than the proposed bound as well. Moreover, the poly(depth, width) dependence is also stronger than in the existing ones. I do not think using the 5% and median pre-act values is a fair comparison with other bounds, which could have been tighter as well if they also used analogous worst-case-exempted results. \\n\\nThe analysis of the PAC-Bayes result in terms of the original function (Theorem 3.1) might be of independent interest here. But since the derived result for network functions is worse than existing ones (in the dependence on depth/width and pre-act parameters), I do not see its significance in better understanding the generalization performance of neural nets here.\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you for your quick responses, for your useful suggestions, and for updating your score!\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for updating the figure. At this point, all my concerns are addressed properly and hence I updated the score.\"}",
"{\"title\": \"Updated Figure 2 (b)\", \"comment\": \"Hi! We replaced the table reporting a single value with a distribution of values from 12 different runs instead of just reporting averages which we think can be misleading here. Note that we have done this for D=28 instead of 26 as before. We hope we have addressed your above concerns through our previous response below and with the updated figure!\"}",
"{\"title\": \"We will update Table b\", \"comment\": \"Thanks for engaging in a discussion with us and for providing prompt responses -- we really appreciate it!\\n\\nWe are glad you agree with the asymptotic benefits of our bound. \\n\\nYour concern about Table b is understandable. The change in values is likely due to the fact that we used different training hyperparameters for D=26 (we will be sure to highlight the difference in the main text in the next revision, if the table persists). Training the networks beyond D=12 or 13 using vanilla SGD was tricky, and we realized we had to experiment with larger depths to convince the readers of the asymptotic benefits, so we had to pick a different D and resort to tuning the hyperparameters differently.\\n\\nWe appreciate your different suggestions about Table b and we will work on it. \\n\\nAs for H=1000, as we said we show plots for H=1280 in Figure 4 including the individual terms in the bound and the overall bound. The goal of the experiments in the main paper was to identify and showcase the specific regime where we can hope the pre-activation values to not spoil the benefits of generalizing noise resilience. Improving the dependence on the pre-activation is crucial to achieve reasonable bounds for larger widths.\"}",
"{\"title\": \"Thanks for adding the plot - some suggestions\", \"comment\": \"Thanks for adding the plot. I think it is very helpful and improves the quality of the paper. I understand that revisions take time and energy but I think there are two issues with the current Figure 2:\", \"more_important\": \"I agree with your conclusion that for sufficiently large D, your bound becomes lower than others. However, I find table (b) in Figure 2 a bit misleading. The main reason is that your bound is very sensitive to the value of pre-activations and hence if you train the same model with different random seeds, your bound gives very different values on each of the trained models. As a result, one cannot rely on reporting a single number here. Another thing that is a bit mysterious is that the slopes in figure (a) suggest that other bounds should be around 10^11 at depth 26 if they increase at the same rate, but then their value is around 10^14 in table (b). So what happens between depth 13 and depth 26?\", \"i_can_think_of_three_solutions_here\": \"1) remove the table, 2) report the average of 10 runs in table (b), or 3) remove the table but extend plot (a) to depth 30.\", \"less_important\": \"I requested evaluating the bound for a network with 1K hidden units in each layer because that is the number which is typically used in practice. I still believe 40 hidden units is too low and it would be better to have at least 256 hidden units but this is not very important and I'm not going to insist on this.\"}",
"{\"title\": \"Added plots of the bounds -- our bound works better for large D, small H\", \"comment\": \"We added experiments in the paper demonstrating the actual values of the bound in comparison with existing product-of-spectral-norm based bounds. We want to emphasize that our bound shows weaker dependence on depth, and performs asymptotically better with depth. Specifically, we show an improvement over two popular, existing bounds for $D=28$ and $H=40$. We argue that for larger depth, our bound promises greater improvements over product-of-spectral-norm-based bounds.\", \"note\": \"The paper was originally within 8 pages, but is now 8.5 pages because of the additional plots & their accompanying discussion.\"}",
"{\"title\": \"Added plots of the bounds -- our bound works better for larger D, small H\", \"comment\": \"Dear Reviewer,\\n\\nWe want to let you know that, like you've suggested, we've added Figure 2 in the main paper, demonstrating the value of our bound for different values of D, for H=40. We want to highlight that our bound has weaker dependence on depth and does better than other product-of-spectral-norm-based bounds for sufficiently deep, not-so-wide networks. We hope this helps you better appreciate the contribution and significance of our work.\"}",
"{\"title\": \"Incorporated all suggestions + demonstrated that our bound works better for sufficiently large D, small H\", \"comment\": \"Hi again!\\n\\nWe want to let you know that we've incorporated all your suggestions and presented some additional experiments too. \\n\\nSpecifically, in the main paper, we have demonstrated the value of our bound for H=40 and varying depth, and compared it with the spectral-norm bounds of Neyshabur et al. '18 and Bartlett et al. '17. We argue that for this H, our bound should perform asymptotically better and show that our bound does better for D=25. \\n\\nDue to space constraints, we had to present some of the plots in the appendix. \\n\\n>>>>>>>>> Please fix the number of layers and plot the quantities vs \\\"#hidden units per layer\\\" as well (up to at least 2K hidden units per layer).\\n\\nThe plots in Appendix Figure 5 show the quantities and the overall bound (including existing bounds) for H ranging from 40 to 2000, for depth D=8.\\nAdditionally, Figure 6 shows a similar plot for depth D=14, for H ranging from 40 to 1280.\\n\\n\\n>>>>>>>> Please also report the numerical value of the generalization bound on a network with 1K hidden units and 10 layers.\\n\\nYou can find the plots in Appendix Figure 4 for no. of units H=1280, where we show both the individual quantities and the actual bound for different depths up to D=14.\\n\\n\\n>>>>>>> If you have time, compare it to at least one of the other generalization bounds.\\nWe compared our bound with both Neyshabur et al. '18 and Bartlett et al. '17, which have pretty similar orders of magnitude to each other. Please refer to Figure 2 in the main paper.\\n\\nWe are eager to hear back from you if you have any feedback or further questions, and would love to know your updated review.\"}",
"{\"title\": \"Incorporated suggestions 1,2 and 3(a)\", \"comment\": \"Hi! We wanted to let you know that we've uploaded a revision with suggestions 1, 2 and 3(a) incorporated. We are still working on 3b.\\n\\n1. We're glad you find the theorem interesting. Indeed, we believe that the generality and the novelty in this theorem leave a lot of opportunity for exploration by both the deep learning theory community and the learning theory community.\\n\\n2. We moved the network-related notations to Section 4. In Section 3, we completely rephrased the description of \\\"INPUT-DEPENDENT PROPERTIES OF WEIGHTS\\\" and the description following Constraint 2, without using neural network notations. We also modified it to read better. We hope that the rewritten version of this discussion, and the additional text we've squeezed into Theorem 3.1 can help parse the notation more easily. However, we think it's hard to get rid of the other notations involving T, r, \\rho etc., which are integral to describing the abstract setup. Having said that, we are happy to consider further suggestions here! We really appreciate your above suggestions in this context and believe they help reduce the burden on the reader.\\n\\n3 (a) Again, this is a good point and we have incorporated it as follows: \\nIn the last paragraph of \\\"Our Contributions\\\" we say:\\n\\\"Intuitively, we make this assumption to ensure that under sufficiently small parameter perturbations, the activation states of the units are guaranteed not to flip.\\\"\\nand again after Thm 4.1, we modified the paragraph at the end of page 7, and added the line:\\n\\\"Specifically, using the assumed lower bound on the pre-activation magnitudes we can ensure that, under noise, the activation states of the units do not flip; then the noise propagates through the network in a tractable, \\u201clinear\\u201d manner. Improving this analysis is an important direction for future work.\\\"\"}",
"{\"title\": \"Thank you for the suggestions!\", \"comment\": \"Dear Reviewer,\\n\\nThanks for considering our clarification and accepting it. Also, thanks for studying the paper more carefully and providing concrete, valuable feedback. We will work on it! \\n\\nCurrently, there are plots for dependence on width, up to 1280 hidden units, present in Figures 3 and 4. We will present more plots as soon as possible.\"}",
"{\"title\": \"Thanks for clarification + some feedback\", \"comment\": \"Thanks a lot for clarifying constraint 2. I think my confusion was because you have not mentioned the constraints in the Theorem 3.1 statement but used them in the proof of the theorem (and of course because I did not read the proof of Theorem 4.1 carefully). I have spent more time reading your paper and here is some feedback:\\n\\n1- I find Theorem 3.1 interesting and useful. First of all, please clearly mention the assumptions in the statement of Theorem 3.1, i.e. Constraints 1 and 2. \\n\\n2- There is too much notation in the paper. I understand that there is no easy way to figure out how to reduce the notation but this complexity hides the result of the paper and not many readers are willing to spend hours figuring out the notation. I suggest putting the neural net notation after Theorem 3.1. With very simple notation, you should be able to write the assumptions and Theorem 3.1. I think this is the most interesting part of the paper and it is worth spending time to present it properly.\\n\\n3- I believe Theorem 4.1 is needed to demonstrate how Theorem 3.1 can be useful but the limitations of Theorem 4.1 (which are not related to Theorem 3.1) should be discussed clearly. You already mentioned the main limitation which is the dependence of the bounds on the inverse of the smallest pre-activation. I have two suggestions:\\na) Even though it is mentioned indirectly in the discussion, I think you should clearly mention early in the discussion that this limitation is due to the fact that the proof does not allow activations to flip. This helps the reader to have a better understanding of this limitation and potentially build on your work.\\n\\nb) Most plots show the quantities vs depth. Please fix the number of layers and plot the quantities vs \\\"#hidden units per layer\\\" as well (up to at least 2K hidden units per layer). 
Please also report the numerical value of the generalization bound on a network with 1K hidden units and 10 layers. If you have time, compare it to at least one of the other generalization bounds. To be clear, I am not going to evaluate your generalization bound based on these plots but what matters is that these plots help the reader to have a clearer picture.\\n\\nI am looking forward to the revision and then I will decide about the final score (up to 8 if all the suggestions are applied).\"}",
"{\"title\": \"Uploaded revision\", \"comment\": \"Hi! Based on Reviewer 1's feedback, we uploaded a revision with Appendix G that now describes and compares the noise-resilience conditions assumed in our work vs. the ones assumed in prior work. We believe that in addition to our earlier responses to your review, this section might better highlight how noise-resilience is studied in our paper.\\n\\nOverall, we hope our comments\\ni) clarify the main contribution of this paper, which lies in showing how noise-resilience of the network generalizes from training data to test data. \\nii) convince you that our analysis is not a standard application of PAC-Bayes theorems (and is on the contrary, quite nuanced and novel)\\niii) justify the title.\\n\\nWe are eager to know if you have any questions remaining; if your concerns have been clarified, we sincerely hope it helps you re-evaluate our paper and update your score.\"}",
"{\"title\": \"Uploaded revision\", \"comment\": \"Hi again!\\n\\nFirst of all, a quick note: we updated the label of Theorem F.1 to 4.1. Thanks for your note!\\n\\nNext, we'd like to get in touch with you again to know if we clarified your concern regarding Constraint 2. (By the way, please let us know in case we misunderstood your concern.) \\n\\nWe'd like to reiterate, like we state throughout the text of the main paper, we do not make any assumption that holds on all input datapoints. The lack of such an assumption is the main strength/contribution of the paper. We'd also like to point out that the mathematical statement of Constraint 2 and the text following it, and the mathematical statements of Theorem 3.1 and 4.1, all reflect this fact!\\n\\nIn the light of this discussion, we respectfully encourage you to reevaluate the paper & update your score. Thank you!\"}",
"{\"title\": \"Revision uploaded\", \"comment\": \"As you suggested, we have recalled some of the notation in the text preceding Theorem F.1 (which by the way is now Theorem 4.1 as it should be, thanks to Reviewer 4).\\nThanks for your suggestion!\"}",
"{\"title\": \"Thank you! Added detailed discussion on the conditions from prior work\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your positive feedback!\\n\\nWe have uploaded a revised version with Appendix G where we have added a one-page discussion relating our noise-resilience conditions and the conditions in prior work. We hope this provides you better context to understand our assumptions. Happy to provide more details if needed.\"}",
"{\"title\": \"Constraint 2 is NOT a shortcoming, and provably holds!\", \"comment\": \"Dear Reviewer, thanks for your precise summary of the paper's approach and your thoughts about it!\\n\\nWe strongly disagree with your remark that Constraint 2 is \\\"a major shortcoming of the paper\\\". Here's why:\\n\\nConstraint 2 is not restrictive and is in fact a very natural/intuitive constraint on the properties of the network -- and it provably holds. At a high level, all the constraint says is the following:\\n\\n**For a given point x** for which the first r-1 sets of properties are bounded (say the first 3 layers have small l2 norm), the r-th property is noise-resilient (i.e., under noise injected into the parameters, the 4th layer's l2 norm does not suffer much change under parameter perturbation).\\n\\nThis is a pretty natural constraint **which provably holds** for networks because of how the output of a particular layer depends only on the output of the preceding layers.\\n\\nWe make NO assumption of the form that something about the network holds good for ALL inputs in the domain. As you can see in Theorem 3.1, we say \\\"if W satisfies T_r(W, x, y) > Delta_r^* ... for all (x,y) in S\\\" which means that these properties are bounded only for the training data. \\n\\nWe hope this clears the misunderstanding surrounding the constraint and convinces you that this is not a drawback at all!\\n\\nThe drawback that we acknowledge is regarding the dependence on the pre-activations, which we hope to improve upon in the future. But as it is, we believe the paper makes a conceptual contribution in terms of a new methodology of generalizing noise-resilience, and accomplishes a PAC-Bayes based product-of-spectral-norm independent bound in specific settings where it wasn't possible. \\n\\nAs you've suggested, we will improve the discussion of the constraints; thanks for your comment!\"}",
"{\"title\": \"Interesting paper - can be improved significantly\", \"review\": \"This paper presents a PAC-Bayesian framework that bounds the generalization error of the learned model. While PAC-Bayesian bounds have been studied before, the focus of this paper is to study how different conditions in the network (e.g. behavior of activations) generalize from the training set to the distribution. This is important since prior work has not been able to handle this issue properly and, as a consequence, previous bounds are either on networks with perturbed weights or rely on unrealistic assumptions on the behavior of the network for any input in the domain.\\n\\nI think the paper could have been written more clearly. I had a hard time following the arguments in the paper. For example, I had to start reading from the Appendix to understand what is going on and found the appendix more helpful than the main text. Moreover, the constraints should be discussed more clearly and verified through experiments.\\n\\nI see Constraint 2 as a major shortcoming of the paper. The promise of the paper was to avoid making assumptions on the input domain (one of the drawbacks in Neyshabur et al 2018) but Constraint 2 is on any input in the domain. In my view, this makes the result less interesting.\\n\\nFinally, as the authors mention themselves, I think the conditions in Theorem F.1 (the label should be 4.1 since it is in Section 4) could be improved with more work. More specifically, it seems that the condition on the pre-activation value can be improved by rebalancing using the positive homogeneity of ReLU activations.\\n\\nOverall, while I find the motivation and the approach interesting, I think this is not a complete piece of work and it can be improved significantly.\\n\\n===========\", \"update\": \"Authors have addressed my main concern, improved the presentation and added extra experiments that improve the quality of the paper. 
I recommend accepting this paper.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Yes, it is important to derive bounds on the original network!\", \"comment\": \"Thank you for your positive response! We are glad you agree that many of the current generalization bounds for deep networks apply only to a compressed/stochastic network; indeed, even though these bounds provide valuable intuition about generalization, we believe that an extremely important and non-trivial piece of the puzzle is to extend the benefits of these bounds (or at least some of their benefits -- in this case the lack of a product-of-spectral-norm dependence) over to the original network. And we achieve this through an approach that \\\"generalizes noise resilience\\\".\\n\\nWith regards to your suspicion about the proposed \\\"conditions\\\", the only pesky condition in our result is the one involving the pre-activation values. The bounds on the other quantities certainly hold favorably in practice as seen in our plots. We must also note that these conditions themselves are not the main contribution of our paper (and we have stated this point in \\\"Our Contribution\\\" in Page 3); the main contribution lies in how we generalize these conditions assumed about the network on the training data, to test data (without ever incurring a product-of-spectral-norms dependence). The conditions themselves are in fact philosophically similar to conditions examined and verified in prior work [1,2]; in essence, they dictate how the parts of the weight matrices activated by a particular datapoint interact with each other. 
\\n\\nEven as far as the condition involving the pre-activation values is concerned, it appears in our analysis to ensure that the hidden units don't jump their non-linearity under parameter perturbations; the assumption that all but a small proportion of the hidden units do not jump the non-linearity under perturbations has been made in prior works, although in a more relaxed form e.g., \\\"Interlayer Smoothness\\\" in [1] or condition C2 in [2], and *these have been verified in practice*. Intuitively, we believe that this assumption allows one to argue that the network is \\\"linear\\\" in a small local neighborhood in the parameter space, and this local linearity helps imply that the network has lower complexity. \\n \\nAgain, we thank the reviewer for appreciating our contributions. We hope that the community finds our approach of generalizing noise-resilience useful. Our framework is general in that one could think of designing different sets of conditions that imply noise-resilience of the network, and argue how these conditions would generalize; with a better understanding of the source of noise-resilience in deep networks, we might identify better sets of conditions which can be generalized this way to obtain tighter bounds on the original network.\\n\\nWe will take note of the reviewer's comment about Theorem F.1!\\n\\n[1] Arora et al., \\\"Stronger generalization bounds for deep nets via a compression approach.\\\" \\n[2] Neyshabur et al., \\\"Exploring generalization in deep learning.\\\"\"}",
"{\"title\": \"Dependence on pre-activation values is necessary to some extent\", \"comment\": \"We provide some context as to why the dependence on pre-activation values is not outrageous, and is to some extent necessary:\\n\\ta) Here's our intuition: the larger the pre-activation values, the less likely is it that, under parameter perturbations, the hidden units jump the non-linearity in the ReLU; in other words, the network is more likely to behave \\\"linearly\\\" under small perturbations. Roughly speaking, the more locally linear the network is, the simpler is the fit that the network has found, and hence the better the generalization. \\n\\tb) The assumption that all but a small proportion of the hidden units do not jump the non-linearity under perturbations has been made in prior works e.g., \\\"Interlayer Smoothness\\\" in [1] or condition C2 in [2], and *these have been verified in practice*. Overall, it is intuitively reasonable that a generalization bound depends on a quantity that characterizes this behavior. Currently, for our bound to be small, one would need that none of the hidden units jump the non-linearity, which, as we admitted in the paper, does not reflect reality completely. Since our framework is quite general, with an even more careful analysis, in the future, one might be able to apply our framework for the case where this assumption is relaxed to better reflect reality (i.e., all but a small proportion of hidden units have a sufficiently large pre-activation value).\"}",
"{\"title\": \"Our contribution is a new refined/structured way to \\\"generalize noise-resilience\\\", not to explain noise-resilience\", \"comment\": \"Thanks for your comments! In this response, we'll address the second half of your comment and explain the contributions of the paper, which we believe have been misunderstood.\\n\\nWe first note that our contribution is not just about getting rid of the dependence on the products of the spectral norms of the weight matrices; our contribution is also that we arrive at such a bound on the *original network* and not just a compressed network/stochastic network. While compression-based bounds like [1] or other PAC-Bayes based bounds like [2,3] numerically evaluate to smaller values, and provide a partial answer for why deep networks generalize well, these bounds are not on the original network learned by SGD. An extremely important and **non-trivial** piece of the puzzle is to extend the benefits of these bounds (or at least some of their benefits -- in this case the lack of a product-of-spectral-norm dependence) over to the original network. \\n\\n\\nWe do this by presenting a structured and novel technique which \\\"generalizes noise-resilience\\\", presented in Section 3. Thus we disagree with the observation that our bound does not \\\"strictly tighten the error bound from a more refined/structured way.\\\" Below we describe what we mean by \\\"generalizing noise-resilience\\\", in effect justifying our title, and also clarifying what exactly our contribution is.\\n\\nLike in [1,2], we model noise-resilience in terms of certain \\\"conditions\\\". For example, [1] assume conditions like \\\"the interlayer smoothness of the network is sufficiently large on training data\\\". We assume similar conditions (e.g., \\\"the output of each layer has small l2 norm on the training data\\\"); this allows us to bound the output perturbation of the network without incurring a product-of-spectral-norm dependence. 
Crucially, our theory and the theory in [1,2] assume these conditions to hold only **on training data**.\", \"with_reference_to_your_comment\": \"\\\"The difference with ... previous result due to the different way of bounding such a gap... But this does not explain how well a network can tolerate the noise\\\":\\n\\n While there are technical differences in how these conditions are formulated in [1,2] vs. our work, and how the perturbation in the output is bounded in terms of these conditions, the exact formulation of the conditions is NOT our key contribution. As mentioned in Page 3 under our contributions, our conditions are in fact philosophically similar to those in [1] and [2] and at a high level essentially characterize how the activated parts of the weight matrices in the network interact with each other. We strongly emphasize the following points:\\n\\n\\n=====> The novelty in our paper is NOT primarily about explaining why a network is noise-resilient (on training data). \\n\\n=====> Our main contribution, when compared to [1] or [2], is that we take a step beyond these existing approaches and present an approach to how conditions assumed about the network on the training data *can be generalized to test data*. This step is crucial and allows us to claim that the network is noise-resilient on test data as well. \\n\\n\\n The key reason [1,2] were not able to present product-of-spectral-norm independent bounds on the original network (but only on a modified network) was that they did not generalize these conditions about the behavior of the network from the training data to test data. \\n\\nTo achieve this, we present a structured approach that iterates through the layers and generalizes these conditions one after the other, in a specific order. It requires a lot of care to not incur product-of-spectral-norm dependency (or other extra dependencies on the width) while generalizing any of these multiple O(depth^2) conditions. 
Besides, to generalize each condition, we require a particular style of reducing PAC-Bayesian bounds to deterministic bounds. Overall, we hope you understand that our analysis is quite far from \\\"standard as in the PAC-Bayesian analysis, which is based on bounding the difference of the network before and after injecting randomness into the parameters\\\".\\n\\nThe idea of generalizing these conditions is novel and is an important step to explain the noise-resilience of these networks on testing data. Besides being refined and structured, most importantly, our approach is general and leaves scope for future work to use it as a hammer on different sets of conditions (hopefully one that doesn't assume large preactivation values on all units!).\\n\\n\\nWe hope our detailed response better explains the contribution of our work to answering the generalization puzzle, in the context of the results in [1,2].\\n\\n[1] Arora et al., \\\"Stronger generalization bounds for deep nets via a compression approach.\\\" \\n[2] Neyshabur et al., \\\"Exploring generalization in deep learning.\\\"\\n[3] Dziugaite et al., \\\"Computing nonvacuous generalization bounds ... than training data.\\\"\"}",
"{\"title\": \"Review\", \"review\": \"This paper provides new generalization bounds for deep neural networks using the PAC-Bayesian framework. Recent efforts along these lines have proved bounds that either apply to a classifier drawn from a distribution or to a compressed form of the trained classifier. In contrast, the paper uses PAC-Bayesian bounds to provide generalization bounds for the original trained network. At the same time, the goal is to provide bounds that do not scale exponentially in the depth of the network and depend on more nuanced parameters such as the noise-stability of the network. In order to do that, the paper formalizes properties that a classifier must satisfy on the training data. While these are a little difficult to understand in general, in the context of ReLU networks these boil down to bounding the l2-norms of the Jacobian and the hidden layer outputs on each data point. Additionally, the paper also requires the pre-activations to be sufficiently large, which, as the authors acknowledge, is an unrealistic assumption that is not true in practice. Despite that, the paper makes an important contribution towards our current understanding of generalization of deep nets. It would have been helpful if the authors had a more detailed discussion on how their assumptions relate to the specific assumptions in the papers of Arora et al. and Neyshabur et al. This would help when comparing the results of the paper with existing ones.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"An honest work\", \"review\": \"The fact that a number of current generalization bounds for (deep) neural networks are not expressed on the deterministic predictor at stake is arguably an issue. This is notably the case of many recent PAC-Bayesian studies of neural networks' stochastic surrogates (typically, a Gaussian noise is applied to the network weight parameters). The paper proposes to make these PAC-Bayesian bounds deterministic by studying their \\\"noise-resilience\\\" properties. The proposed generalization result bounds the margin of a (ReLU) neural network classifier from the empirical margin and a complexity term relying on conditions on the values of each layer (e.g., via the layer Jacobian norm, the layer output norm, and the smallest pre-activation value).\\n\\nI have difficulty attesting whether the proposed conditions are sound. Namely, the authors genuinely admit that the empirically observed pre-activation values are not large enough to make the bound informative (I must say that I truly appreciate the authors' candor when it comes to analyzing their result). That being said, the fact that the bound does not scale with the spectral norm of the weight matrices, unlike previous PAC-Bayesian results for neural networks, is an asset of the current analysis.\\n\\nI must say that I had only a quick look at the proofs, all of them being in the supplementary material along with most of the technical details. Nevertheless, it appears to me as an honest, original and rigorous theoretical study, and I think it deserves to be presented to the community. It can bring interesting discussion and suggest new paths to explore to explain the generalization properties of neural networks.\", \"minor_comment\": \"For the reader's benefit, Theorem F.1 in page 7 should quickly recall the meaning of some notation, even if it's the \\\"short version\\\" of the theorem statement.\\n\\n====\", \"update\": \"The bound comparison added value to the paper. 
It strengthens my opinion that this work deserves to be published. I therefore increase my score to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"PAC-Bayesian generalization bounds of deep neural networks based on the noise-resilience analysis\", \"review\": \"The authors demonstrate the generalization bound for deep neural networks using the PAC-Bayesian approach. They adopt the idea of noise resilience in the analysis and obtain a result that has improved dependence in terms of the network dimensions, but involves parameters (e.g., pre-activation) that may potentially be large.\\n\\nMy major concern is also regarding the dependence on the pre-activation, which can be very large in practice. This is also shown in the numerical experiments. Therefore, the overall generalization bound can be larger than existing results, though the latter have stronger dependence on the network sizes. By examining the analysis for the main result, it seems to me that the reason the authors can induce weaker dependence on network sizes is essentially that they involve the pre-activation parameters. This can be viewed as a trade-off in how strongly the generalization bound depends on the network sizes versus other related parameters (like the pre-activation here), rather than strictly tightening the error bound in a more refined/structured way. I also suggest that the authors provide a comparison of their bound and existing ones to see the quantitative difference in the results. \\n\\nRegarding the noise resilience, it is not clear where the noise resilience shows up in the analysis or the result. From the proof of the main result, the analysis seems to be standard as in the PAC-Bayesian analysis, which is based on bounding the difference of the network before and after injecting randomness into the parameters. The difference with respect to the previous results is due to the different way of bounding such a gap, where the Jacobian, the pre-activation and the function output pop up. But this does not explain how well a network can tolerate the noise, either in the parameter space or the data space. 
This is different with the previous analysis based on the noise resilience, such as [1]. So, the title and the way the authors explain as noise resilience is somewhat misleading. More detailed explanation will help.\\n\\n[1] Arora et al. Stronger generalization bounds for deep nets via a compression approach.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJlh2jR9FX | Learning with Reflective Likelihoods | [
"Adji B. Dieng",
"Kyunghyun Cho",
"David M. Blei",
"Yann LeCun"
] | Models parameterized by deep neural networks have achieved state-of-the-art results in many domains. These models are usually trained using the maximum likelihood principle with a finite set of observations. However, training deep probabilistic models with maximum likelihood can lead to the issue we refer to as input forgetting. In deep generative latent-variable models, input forgetting corresponds to posterior collapse---a phenomenon in which the latent variables are driven independent from the observations. However input forgetting can happen even in the absence of latent variables. We attribute input forgetting in deep probabilistic models to the finite sample dilemma of maximum likelihood. We formalize this problem and propose a learning criterion---termed reflective likelihood---that explicitly prevents input forgetting. We empirically observe that the proposed criterion significantly outperforms the maximum likelihood objective when used in classification under a skewed class distribution. Furthermore, the reflective likelihood objective prevents posterior collapse when used to train stochastic auto-encoders with amortized inference. For example in a neural topic modeling experiment, the reflective likelihood objective leads to better quantitative and qualitative results than the variational auto-encoder and the importance-weighted auto-encoder. | [
"new learning criterion",
"penalized maximum likelihood",
"posterior inference in deep generative models",
"input forgetting issue",
"latent variable collapse issue"
] | https://openreview.net/pdf?id=SJlh2jR9FX | https://openreview.net/forum?id=SJlh2jR9FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyxTj-FlxE",
"HkebgVQXk4",
"rJxOVW7714",
"BkeaOAeg1E",
"BJlhz2qAAQ",
"SJlHDE5ARQ",
"S1xoTX5ARQ",
"Hyg41uYRRQ",
"r1lJnt8AAQ",
"S1eOz28sCm",
"BkeqBtejAm",
"Sye3Xh1sR7",
"SJlhsHJsAX",
"rJeJhKMcRX",
"rJlp3F0d2Q",
"rJl5OkRLnm",
"HklxkbeU3X",
"H1e43-v127",
"r1xoJGzy3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1544749477261,
1543873513028,
1543872816317,
1543667317083,
1543576596061,
1543574620713,
1543574467104,
1543571420488,
1543559591411,
1543363600185,
1543338305831,
1543334947556,
1543333283586,
1543281063065,
1541102005359,
1540968306447,
1540911319608,
1540481452304,
1540461026856
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper745/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper745/Authors"
],
[
"ICLR.cc/2019/Conference/Paper745/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The proposed \\u201cinput forgetting\\u201d problem is interesting, and the reflective likelihood can come to be seen as a natural solution, however the reviewers overall are concerned about the rigor of the paper. Reviewer 2 pointed out a technical flaw and this was addressed, however the reviewers remain unconvinced about the theoretical justification for the approach. One suggestion made by reviewer 1 is to focus on simpler models that can be studied more rigorously. Alternatively, it could be useful to focus on stronger empirical results. The method works in the experiments given, but for example in the imbalanced data experiments, only MLE is compared to as a baseline. I think it would be more convincing to compare against stronger baselines from the literature. If they are orthogonal to the choice of estimator, then it would be even better to show that these baselines + RLL outperforms the baselines + MLE. Alternatively, you mention some challenging tasks like seq2seq, where a convincing demonstration would greatly strengthen the paper. While the paper is not yet ready in its current form, it seems like a promising approach that is worth further exploration.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting problem and approach, but not quite rigorous enough.\"}",
"{\"title\": \"Have we addressed your concerns?\", \"comment\": \"Dear Reviewer 3,\\n\\nThanks again for your review. We were wondering if we have addressed all your concerns and if you have further comments.\"}",
"{\"title\": \"Thank you for your insightful feedback.\", \"comment\": \"Thank you Reviewer 1 for your insightful comments. We answer your questions below.\\n\\n1--\\\"I don't find the motivations and logic behind the derivations to be rigorous\\\".\\n\\nWe explain the motivation behind the derivations as follows. Consider supervised learning with input/output pairs (x, y). Input forgetting happens--by definition--when the input x is not being taken into account by the neural network to predict the output y. This led us to consider looking at the maximum likelihood objective for a conditional model p_{\\\\theta}(y | x) but with an independence assumption on x and y. This leads to Eq. 5 in the draft. However fitting Eq. 5 corresponds to fitting a marginal distribution over y. This is input forgetting. We then relate the objective in Eq. 5 to the maximum likelihood objective of interest. This leads to Eq. 6, which shows that performing maximum likelihood corresponds to a tradeoff between fitting the marginal (the first term in Eq. 6) and minimizing the second term (the second term is the only term that contains information relating x and y). However a learning algorithm might find it easier to fit the first term (the marginal over y) than to minimize the second term. We make sure to prevent fitting the marginal by regularizing maximum likelihood.\\n\\n2--\\\"Overall, it's very hard for me to swallow that there is a deficiency in maximum likelihood learning and that Equation 7 fixes this deficiency in just two pages of exposition.\\\"\\n\\nWe are proposing to regularize maximum likelihood learning to impose a stronger dependence between variables. We are not throwing away maximizing the log likelihood completely. The RLL objective is the difference between the log likelihood and the log reflective probability of the outputs y. Our work is about noticing a common problem when fitting deep models with log likelihood and proposing a potential solution to fix the problem. 
There have been many methods proposed to regularize maximum likelihood. These \\\"penalized maximum likelihood\\\" objectives include Lasso (which regularizes maximum likelihood by penalizing the L1 norm of the parameters) and Ridge also known as weight decay in the deep learning literature (which regularizes maximum likelihood by penalizing the L2 norm of the parameters). The method we propose is a data-dependent regularization method that regularizes maximum likelihood by minimizing the log reflective probability of the outputs. There are even more alternatives to maximum likelihood in the statistics literature. See for example Generalized Estimating Equations (GEE) for longitudinal data. \\n\\nWe now provide further evidence that the proposed RLL objective promotes a stronger dependence between inputs and outputs. Consider a fixed \\\\alpha schedule \\\\alpha_n = \\\\alpha_0 for all n and 0 < \\\\alpha_0 < 1, we can show the RLL objective in Eq. 8 can be rewritten as:\\n\\nL_{RLL} = 1/N \\\\sum_{1}^{N} [ (1 - \\\\alpha_0) * \\\\log p_{\\\\theta}(y_n | x_n) + \\\\alpha_0 * PMI(x_n, y_n) ]\\n\\nwhere PMI(x_n, y_n) = \\\\log p_{\\\\theta}(y_n | x_n) - \\\\log p_{\\\\theta}^{refl}(y_n)\\n\\nEssentially this shows that RLL regularizes maximum likelihood by maximizing the pointwise mutual information between individual pairs (x_n, y_n). This is the reason why it promotes a stronger dependence between inputs and outputs.\\n\\n3--\\\"Moreover, there are still no theorems, clear mathematical definitions, or some simulations.\\\"\\n\\nWe did not think a theorem was needed. For example the consistency of the objective in Eq. 8 is a simple consequence of applying the law of large numbers and invoking the continuity of logarithm. \\n\\nHowever we would like to hear about what type of theorem you were expecting. \\n\\n4--\\\"For instance, can you show how the reflective likelihood changes analytically tractable / closed-form solutions? 
What would the 'reflective OLS estimator' be? These simpler, classical cases need addressed before I could be convinced that it fixes the problem.\\\"\\n\\nWe looked into this as well. In fact there is no closed form formula for the RLL objective on the usual linear model. One has to solve an estimating equation to find the RLL solution. This is expected because RLL uses a data-dependent regularizer.\\n\\nPlease let us know if our response answers your comments above. We are looking forward to your reply.\"}",
"{\"title\": \"Re: Response to Reviewer 1\", \"comment\": \"Thanks for your responses and revisions, authors. I do find the finite sample derivation in Equations 3-6 of the new draft to be clearer. It is an interesting observation.\\n\\nUnfortunately, my \\\"biggest issue\\\" of the lack of rigor still stands. While the derivations are rigorous in the new draft (i.e. I follow the algebra), I don't find the motivations and logic behind them to be. I don't find them to have a clear flaw per se, but I find them hand-wavy. Overall, it's very hard for me to swallow that there is a deficiency in maximum likelihood learning and that Equation 7 fixes this deficiency in just ~two pages of exposition. Moreover, there are still no theorems, clear mathematical definitions, or simulations. For instance, can you show how the reflective likelihood changes analytically tractable / closed-form solutions? What would the 'reflective OLS estimator' be? These simpler, classical cases need to be addressed before I could be convinced that it fixes the problem.\"}"
"{\"title\": \"We are addressing the same problem.\", \"comment\": \"Dear Reviewer 2,\\n\\nWe are glad that you want to get to the bottom of the problem we are trying to address in the paper.\\n\\nWe are still addressing a peculiarity in maximum likelihood that causes \\\"input forgetting\\\". This peculiarity was wrongly stated in subsection 2.1 of version 1 of the paper. We are now highlighting this peculiarity in the introduction in the paragraph titled \\\"the finite sample dilemma of maximum likelihood\\\". \\n\\nThe main story of the paper is to say \\\"look there is this problem that keeps happening when fitting deep models with the maximum likelihood objective. This problem manifests itself in deep latent variable models as posterior collapse. This problem manifests itself in Seq2Seq conversation models as production of generic responses by the decoder. This problem manifests itself as failure to predict rare classes in classification under imbalance. All these problems can be nailed down to one common issue: the variables being conditioned upon are not taken into account by the neural network. In deep latent variable models the conditioning variable is the latent variable. In Seq2Seq it is the output of the encoder. In classification, it is the input covariate x. We propose this objective that ties outputs to conditioning variables by subtracting a marginal from the original maximum likelihood objective. We think this objective should fix the problem. How does this objective fix the problem? We suggest looking at the KL divergence formulation of maximum likelihood (the problematic subsection 2.1 of version 1). We define the marginal in the proposed objective in supervised learning. We define the marginal in unsupervised learning as well. We now look at how this objective compares to maximum likelihood in a supervised learning problem and an unsupervised learning problem. 
We see that the objective outperforms maximum likelihood.\\\"\\n\\nThis is the same story both in version 1 and in the revision. What changed is the answer to the question \\\"How does this objective fix the problem?\\\". The response we provided in subsection 2.1 of version 1 was not correct. We fix this by using the \\\"finite sample dilemma of maximum likelihood\\\" argument in the introduction. \\n\\nFurther evidence of why the RLL objective works is its relationship to pointwise mutual information, KL divergences, and ranking.\"}",
"{\"title\": \"typo\", \"comment\": \"But you are now addressing something else.\"}",
"{\"title\": \"your original text\", \"comment\": \"In the original version, you wrote: \\\"Contributions. We identify a peculiarity in maximum likelihood learning that causes the input forgetting problem in Section 2.1. We then propose a new learning criterion to mitigate this issue.\\\" The intent was to address \\\"a peculiarity in ML learning\\\". But we are addressing something else. This is a change of the main theory (story) to me.\"}",
"{\"title\": \"You complained about subsection 2.1 and we addressed it. The main theory of the paper was not subsection 2.1 of version 1. The main theory of the paper has not changed.\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you for replying to the rebuttal.\\n\\n1- Your concern---your whole review---was only about subsection 2.1 of the first version of the paper titled \\\"A peculiarity of maximum likelihood learning\\\". We would like to point out that this subsection was not the \\\"main theory of the paper\\\" as you suggest. The main theory of the paper was and is still to propose a new objective function that mitigates the \\\"input forgetting\\\" issue of maximum likelihood. We proposed this objective for supervised learning and for unsupervised learning with deep latent variable models. We then ran an empirical study which showed the RLL objective outperforms maximum likelihood in a classification under imbalance study and in a neural topic modeling experiment. \\n\\nSubsection 2.1 of the first version was about justifying why we propose the objective as a potential fix for the \\\"input forgetting\\\" issue of maximum likelihood. \\n\\n2- We addressed your concern by providing another justification for why the objective we propose makes sense. We do this in the revision in the paragraph of the introduction titled \\\"the finite sample dilemma of maximum likelihood\\\".\\n\\nIf there is one thing to get out of that paragraph, it is this: Eq. 6 shows that a learning algorithm may find it easier to increase the left hand side of the equality---the maximum likelihood objective---by increasing the first term rather than decreasing the second term. The problem is that the second term is the only term with information regarding how outputs y relate to inputs x. A natural thing to do to avoid this is to use the proposed RLL objective which penalizes maximization of the first term in the right hand side of Eq. 
"6.\\n\\n3- The reason why we restructured the paper is that the other reviewers complained that our paper, as written, was confusing. However this restructuring did not change \\\"the main theory of the paper\\\", which is (1) the new objective, (2) what it is for supervised learning, (3) what it is for unsupervised learning, and (4) its comparison to maximum likelihood in an empirical study.\\n\\nIf you are concerned that it is not clear why the RLL objective works, we explain this in the introduction of the revision. We also point you to our message above regarding the connection of RLL to pointwise mutual information (PMI). \\n\\nThanks for your reply.\"}",
"{\"title\": \"question not answered\", \"comment\": \"The authors modified the paper, but did not explain how my concern is addressed. Although the theory is changed, the objective function remains the same. I am not convinced that the issue is addressed.\\n\\nIn addition, the purpose of the rebuttal process is to provide authors with the opportunity to clarify misunderstandings, NOT to change the main theory. If the main theory is flawed, the paper cannot be accepted in this round.\"}",
"{\"title\": \"New insights on the RLL objective: relationship to pointwise mutual information and KL divergences\", \"comment\": \"We extended section 4 with new findings. We unfortunately cannot post a new revision at the moment. We will add the new revision once it is allowed to upload revisions again.\\n\\nWe found the RLL objective in Eq. 8 can be rewritten as a convex combination between the log likelihood \\\\log p_{\\\\theta}(y_n | x_n) and the pointwise mutual information between x_n and y_n for each data pair (x_n, y_n) when the schedule for alpha is \\\\alpha_n = \\\\alpha_0 < 1. This new perspective is yet another proof that RLL promotes a stronger dependence between inputs and outputs. The pointwise mutual information term forces each output y_n to strongly depend on its corresponding input x_n. This justifies the huge gain in performance in classification under imbalance. Note pointwise mutual information is stronger than mutual information when it comes to dependence. This is because pointwise mutual information acts at the datapoint level whereas mutual information is an average. \\n\\nWe also found that for a fixed \\\\alpha schedule the RLL objective can be written as a difference of KL divergences: \\\\alpha_0 KL(p_{data}(y_n | x_n) || p_{\\\\theta}^{refl}(y_n)) - KL(p_{data}(y_n | x_n) || p_{\\\\theta}(y_n | x_n)). Maximizing the RLL is equivalent to fitting the conditional model p_{\\\\theta}(y_n | x_n) on the data while \\\"unfitting\\\" the unconditional model defined by the reflective probability p_{\\\\theta}^{refl}(y_n) on the data.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your in-depth review. We are very happy that you enjoyed the connections we made to ranking losses. We think this was a cool finding as well! In fact there might be more connections: the coefficient \\alpha can be data-dependent and in that sense it induces a family of regularizers. The step-schedule for alpha (as defined in Eq. 11 of the paper) corresponds to ranking loss. We conjecture that there might be other connections when carefully choosing other schedules for \\alpha.\\n\\nWe think the \u201cfinite sample dilemma\u201d derivations address your remark about formalizing the fact that subtracting the marginal will make the optimization focus on capturing the dependencies. Please let us know if you think otherwise. \\n\\nWe agree with you on the calibration remark. We also found classification under imbalance to be a natural fit for testing the RLL objective. The results suggest RLL is doing what it is supposed to be doing, i.e. strengthening the dependence between x and y.\\n\\nRegarding your remarks on terminology and lack of rigor in the discussion: we clarified the exposition of the paper and formalized the explanation as to why the proposed objective works. Please see the general rebuttal above for details on what changed in this new version. We also hope you will find the time to read the revision.\\n\\nRegarding your comment on connections: thank you for bringing this up! Although it might seem that there is a connection to maximum entropy methods, there is no such connection unfortunately. We would like to point out that in your derivation, the quantity you define as entropy is not the entropy of q(y). This is because the expectation is not taken under q(y) but under p*(y | x)---the conditional distribution of y given x under the population distribution---which is different from q(y). This is why we did not mention connections to maximum entropy methods. 
Thank you for the reference! We have not looked into connections to Bayesian loss calibration.\\n\\nWe hope we have addressed your concerns. Please let us know if you have other remarks. We would appreciate it if you could read the revision and let us know if we have addressed all your concerns.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for reviewing our paper. We are very glad you find the idea potentially interesting and that the experimental results are encouraging.\\n\\nIn the first version of our paper, we made a wrong statement when justifying our proposed objective. This made the paper very confusing as you mentioned in your review. In light of this feedback, we refactored the paper with a new perspective on why regularizing maximum likelihood with the log reflective probability is a good thing to do. The results did not change because the method is the same. It is the explanation behind why our proposed method works that changed. We replaced section 2 of version 1 with the paragraph titled \u201cthe finite sample dilemma of maximum likelihood\u201d in the introduction. We chose to add this paragraph in the introduction for the sake of clarity and because we want the reader to grasp the intuition behind the proposed objective early on. \\n\\nWe hope these changes address your earlier concerns.\\n\\nRegarding your question on connections to mutual information: our objective increases the dependence between inputs and outputs as evidenced in the empirical study section and as motivated by the finite sample dilemma of maximum likelihood. In this sense the RLL objective is implicitly related to mutual information, which is a measure of dependence. However, there is no mathematical equation that directly relates RLL and mutual information.\\n\\nRegarding your question on asymptotics: for infinite data the criterion in Eq. 8 (which is the RLL...the one we use) converges in probability to the difference between the true maximum likelihood objective (the one using expectations under the population distribution) and the log reflective probability (we define this in the paper...it is some marginal over the output y). In this sense our finite-data objective in Eq. 8 is a consistent estimator of the true objective. 
You get this result using the law of large numbers and the continuity of the logarithm.\\n\\nWe hope we have addressed your concerns. Please let us know if you have further remarks.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for taking the time to review the paper. We corrected the statement and refactored the paper to reflect this. We hope you will be able to read the revision and verify that your concerns have been addressed. Please let us know if you have any further questions.\"}",
"{\"title\": \"Rebuttal: In response to reviewer feedback we refactored the paper. The idea is the same, the results are the same, the exposition is different.\", \"comment\": \"We thank all the reviewers for taking time to review our paper. Your feedback has greatly helped us revise the paper. The two main concerns from all three reviewers were that (1) the equivalence statement made in section 2 of the first version of the paper was incorrect and (2) the paper is very confusing. We rewrote the paper to address these two issues. We corrected the equivalence statement and refactored the paper to further clarify the intuitions and technical details behind our proposed idea. We hope the reviewers will read the revision. We apologize for taking time to post the revision. We wanted to make sure we addressed all the concerns.\\n\\nWe want to draw attention to the importance of the issue tackled by the paper. The most ubiquitous learning objective for deep models is maximum likelihood. It works very well in practice in most cases. However there are cases where maximum likelihood leads to poor behavior. For example it leads to posterior collapse in deep latent-variable models. Furthermore, it causes lack of diversity in generated responses in Seq2Seq conversation models. Finally, it struggles to learn useful features for rare classes in classification when the class distribution is highly skewed. All these issues can be summarized into one main behavior: the variables being conditioned upon are not taken into consideration by the deep network. We call this problem \\\"input forgetting\\\". We identify a potential cause of this issue and propose the RLL objective to alleviate it.\\n\\nThe new structure of the paper is as follows:\\n\\n1- We state the \\\"finite sample dilemma\\\" of maximum likelihood in the introduction. This replaces section 2 of the first version of the paper. 
We mention our contributions and related work also in the introduction.\\n\\n2- In section 2 of this current version we derive the RLL objective for supervised learning and propose a practical stochastic approximation of it.\\n\\n3- In section 3 we extend RLL to unsupervised learning using the auto-encoding framework...this leads us to proposing reflective auto-encoders (RAEs)---a new family of stochastic auto-encoders that do not suffer from posterior collapse.\\n\\n4- We discuss connections to ranking losses in a new section 4.\\n\\n5- We finally present empirical findings in section 5. To save space, we added the table listing the learned topics to the appendix.\"}",
"{\"title\": \"Interesting ideas that need further refinement\", \"review\": \"Summary:\\n\\nThis paper proposes maximizing the \\u201creflective likelihood,\\u201d which the authors define as: E_x E_y [log q(y|x) - \\\\alpha log q(y)] where the expectations are taken over the data, q is the classifier, and \\\\alpha is a weight on the log q(y) term. The paper derives the reflective likelihood for classification models and unsupervised latent variable models. Choices for \\\\alpha are also discussed, and connections are made to ranking losses. Results show superior F1 and perplexity in MNIST classification and 20NewsGroups modeling.\", \"pros\": \"I like how the paper frames the reflective likelihood as a ranking loss. It does seem like subtracting off the marginal probability of y from the conditional likelihood should indeed \\u2018focus\\u2019 the model on the dependent relationship y|x. Can this be further formalized? I would be very interested in seeing a derivation of this kind. \\n\\nI like that the authors test under class imbalance and report F1 metrics in the experiments as it does seem the proposed method operates through better calibration.\", \"cons\": \"My biggest issue with the paper is that I find much of the discussion lacks rigor. I followed the argument through to Equation 3, but then I became confused when the discussion turned to \\u2018dependence paths\\u2019: \\u201cwe want our learning procedure to follow the dependence path\\u2014the subspace in \\u0398 for which inputs and outputs are dependent. However this dependence path is unknown to us; there is nothing in Eq. 1 that guides learning to follow this dependence path instead of following Eq. 3\\u2014the independence path\\u201d (p 3). What are these dependence paths? Can they be defined mathematically in a way that is more direct than switching around the KLD directions in Equations 1-3? 
Surely any conditional model x-->y has a \u2018dependence path\u2019 flowing from y to x, so it seems the paper is trying to make some stronger statement about the conditional structure?\\n\\nMoving on to the proposed reflective likelihood in Equation 4, I could see some connections to Equations 1-3, but I\u2019m not sure how exactly that final form was settled upon. There seems to be a connection to maximum entropy methods? That is, E_x E_y [log q(y|x) - \\alpha log q(y)] = E_x E_y [log q(y|x)] + \\alpha E_y [ -log q(y)] \\approx E_x E_y [log q(y|x)] + \\alpha H[y], if we assume q(y) approximates the empirical distribution of y well. Thus, the objective can be thought of as maximizing the traditional log model probability plus an estimate of the entropy. As there is a long history of maximum entropy methods / classifiers, I\u2019m surprised there were no mentions or references to this literature. Also, I believe there might be some connections to Bayesian loss calibration / risk by viewing \\alpha as a utility function (which is easy to do when it is defined to be data dependent). I\u2019m less sure about this connection though; see Cobb et al. (2018) (https://arxiv.org/abs/1805.03901) and its citations for references. \\n\\nThe data sets used in the experiments are also somewhat dissatisfying as MNIST and 20NewsGroups are fairly easy to get high-performing models for. I would have liked to have seen more direct analysis / simulation of what we expect from the reflective likelihood. As I mentioned above, I suspect it's really providing gains through better calibration---which the authors may recognize, as F1 scores are reported and class imbalance tested---but the word \u2018calibration\u2019 is never mentioned. More direct comparison against calibration methods such as Platt scaling would make the experiments have better focus. 
It would be great to show that this method provides good calibration directly during optimization and doesn\\u2019t need the post-hoc calibration steps that most methods require.\", \"evaluation\": \"While the paper has some interesting ideas, they are not well defined, making the paper unready for publication. Discussion of the connections to calibration and maximum entropy seems like a large piece missing from the paper\\u2019s argument.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"the paper is technically flawed\", \"review\": \"This paper is technically flawed. Here are three key equations from Section 2. The notations are simplified for textual presentation: d \u2013 p_data; d(y|x) \u2013 p_d(y|x); m(y|x) \u2013 p_theta(y|x)\\n\\nmax E_x~d E_y~d(y|x) [ log m(y|x) ]    (1)\\nmax E_x~d { E_y~d(y|x) [ log d(y|x) ] - E_y~d(y|x) [ log m(y|x) ] }    (2)\\nmax { E_y~d [ log d(y) ] - E_y~d log E_x~d(x|y) [ m(y|x) ] }    (3)\\n\\nThe first error is that the \u201cmax\u201d in (2) and (3) should be \u201cmin\u201d. I will assume this minor error is corrected in the following.\\nThe equivalence between (1) and (2) is correct and well-known. The reason is that the first entropy term in (2) does not depend on the model. The MAJOR ERROR is that (1) is NOT equivalent to (3). Instead, it is equivalent to the following:\\n\\nmin { E_y~d [ log d(y) ] - E_y~d E_x~d(x|y) [ log m(y|x) ] }    (3\u2019)\\n\\nNotice the swap of \u201cE_x\u201d and \u201clog\u201d. By Jensen\u2019s inequality, we have\\n\\nlog E_x~d(x|y) [ m(y|x) ] > E_x~d(x|y) [ log m(y|x) ]\\n- E_y~d log E_x~d(x|y) [ m(y|x) ] < - E_y~d E_x~d(x|y) [ log m(y|x) ]\\n\\nSo, minimizing (3) amounts to minimizing a lower bound of the correct objective (3\u2019). It does not make sense at all.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"potentially interesting idea, but very confusing in current form\", \"review\": \"The paper proposes a modification of maximum likelihood estimation that encourages estimated predictive/conditional models p(z|x) to have low entropy and/or to maximize mutual information between z and x under the model p_{data}(x)p_{model}(z|x).\\n\\nThere is pre-existing literature on encouraging low entropy / high mutual information in predictive models, suggesting this can indeed be a good idea. The experiments in the current paper are preliminary but encouraging. However, the setting in which the paper presents this approach (section 2.1) does not make any sense. Also see my previous comments.\\n\\n- Please reconsider your motivation for the proposed method. Why does it work? Also please try to make the connection with the existing literature on minimum entropy priors and maximizing mutual information.\\n\\n- Please try to provide some guarantees for the method. Maximum likelihood estimation is consistent: given enough data and a powerful model it will eventually do the right thing. What will your estimator converge to for infinite data and infinitely powerful models?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Thank you AnonReviewer3 for your feedback and for giving us the opportunity to clarify things before you make your decision! There are several points raised in your comment that we would like to provide an answer to.\\n\\n1. \\\"... this statement is false. It's true that the true conditional model p(y|x) is a solution to eq 1,2 and 3, but the converse does not hold...\\\"\\n\\nWe agree with you that statement is misleading and wrong as formulated. It was not meant as a statement on the maxima (clearly Eq. 1 and Eq. 3 may not have the same global maxima; there are simple counterexamples to show this), but as a way to distinguish which paths are followed to maximize Eq. 1. Let us clarify what we mean by this as follows: \\n\\nConsider a supervised learning setting. We have observations (x, y) and our goal is to fit a conditional model p_{\\\\theta}(y | x). We can fit this model by maximizing either (1) or (2) as they are equivalent; they correspond to maximum likelihood estimation. However, in practice, when we learn a conditional model p_{\\\\theta}(y | x) that is arbitrarily flexible (e.g., parameterized by a deep neural network) by optimizing (1) (or equivalently (2)) using data, the resulting model often has the issue that it ignores the inputs x. In this case, the resulting value of the parameters is analogous as if we had optimized (3) instead. In other words, when optimizing (1) with data, nothing guarantees that (3) is not being optimized. This behavior is undesirable. To prevent that, we want to promote a strong dependency between y and x. That is, we propose to avoid the \\\"marginal path\\\", as induced by (3).\\n\\nWe will edit out the statement in the revision and make that part of the paper more clear. \\n\\n2. 
\\\"You are basically claiming that maximum likelihood is not a consistent estimation method, contradicting all of the statistical literature.\\\"\\n\\nIf by \\\"consistent estimation method\\\" you mean consistency in the statistical sense (i.e. convergence in probability of the estimator to the true parameter as sample size goes to infinity) then no we are not studying consistency/inconsistency of maximum likelihood in the paper. \\n\\nHowever we would like to point out that maximum likelihood does not always lead to consistent estimators. Consider the counterexample of Bahadur, 1958 (see [1] for the reference) showing an example where maximum likelihood is inconsistent. \\n\\n3. \\\"Please clarify your motivation for the proposed method, and let me know if I'm misunderstanding.\\\"\\n\\nThank you for giving us the opportunity to make things more clear. Our motivation for the paper is this: there is this common behavior we call \\u201cinput forgetting\\u201d in the paper that happens quite often with models parameterized by deep neural networks. This is manifested in deep latent variable models as the phenomenon known as \\u201cposterior collapse\\u201d or \\u201clatent variable collapse\\u201d in the literature. This also happens in RBMs (see [2]). However the problem can happen even without latent variables. Some other examples we have not mentioned in the paper include Seq2Seq models where the decoder does not account for the input. A good manifestation of this is in neural conversation models where the decoder provides very generic responses such as \\u201cI don\\u2019t know\\u201d or \\u201cok\\u201d no matter what the query/input is. See for example [3] for more details on generic answers in conversation models. \\n\\nOne common denominator of all these examples is that the variable being conditioned upon is ignored by the deep network. Our paper proposes a regularization approach to mitigate this problem. 
We add a regularizer termed the \\u201creflective likelihood\\u201d that is basically a marginal distribution over the output variable. We define this marginal in the paper for both supervised and unsupervised learning. The resulting objective is the difference between the usual maximum likelihood objective and this reflective likelihood. Subtracting the reflective likelihood forces the optimization to favor parameter settings that promote usage of the variable being conditioned upon. We validate this hypothesis through our empirical studies where we notice an improvement in terms of latent variable collapse and classification performance for rare classes.\", \"in_summary\": \"for applications where you care about promoting a stronger dependence between inputs and outputs (e.g. in deep latent variable models or in classification under imbalance) then we propose to use the objective proposed in this paper instead of vanilla MLE.\\n\\nWe hope our answer clarifies things. Thank you for bringing these points up. We will add these clarifications in the revision.\\n\\n[1] R. R. Bahadur. Examples of Inconsistency of Maximum Likelihood Estimates. The Indian Journal of Statistics, 1958.\\n[2] K. Cho et al. Enhanced Gradient and Adaptive Learning Rate for Training Restricted Boltzmann Machines. In ICML,2011.\\n[3] J. Li et al. A Diversity-Promoting Objective Function for Neural Conversation Models. In NAACL, 2016.\"}",
"{\"title\": \"please clarify\", \"comment\": \"You write \\\"Maximizing Eq. 1 can be achieved by maximizing either Eq. 2 or Eq. 3 or both\\\", and this seems to be crucial to your motivation for the proposed method. However this statement is false. It's true that the true conditional model p(y|x) is a solution to eq 1,2 and 3, but the converse does not hold: There are many solutions to equation 3 that do not maximize equation 1. You are basically claiming that maximum likelihood is not a consistent estimation method, contradicting all of the statistical literature. Please clarify your motivation for the proposed method, and let me know if I'm misunderstanding.\"}"
]
} |
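The Jensen's-inequality step at the heart of the "technically flawed" review above is easy to verify numerically. The sketch below is our own illustration (not from the paper or the review): it draws a random discrete conditional model m(y|x), with d(x|y) taken uniform over x for simplicity, and checks that log E_x[m(y|x)] >= E_x[log m(y|x)] for every y, i.e., that the term appearing in objective (3) only bounds the corresponding term of (3') from one side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete setup (our assumption): 5 values of x, 3 values of y.
n_x, n_y = 5, 3
m = rng.dirichlet(np.ones(n_y), size=n_x)  # m[x, y] = m(y|x); each row sums to 1

# Jensen's inequality: log E_x[m(y|x)] >= E_x[log m(y|x)], for every y.
lhs = np.log(m.mean(axis=0))   # log of an average -- the form appearing in (3)
rhs = np.log(m).mean(axis=0)   # average of a log  -- the form appearing in (3')

assert np.all(lhs >= rhs)
print(np.round(lhs - rhs, 4))  # nonnegative Jensen gaps, one per y
```

With a concave function such as log, the gap is strictly positive whenever m(y|x) actually varies with x, which is exactly why swapping E_x and log changes the objective.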
|
HJej3s09Km | On the effect of the activation function on the distribution of hidden nodes in a deep network | [
"Philip M. Long and Hanie Sedghi"
] | We analyze the joint probability distribution on the lengths of the
vectors of hidden variables in different layers of a fully connected
deep network, when the weights and biases are chosen randomly according to
Gaussian distributions, and the input is binary-valued. We show
that, if the activation function satisfies a minimal set of
assumptions, satisfied by every activation function that we know to
be used in practice, then, as the width of the network gets large,
the ``length process'' converges in probability to a length map
that is determined as a simple function of the variances of the
random weights and biases, and the activation function.
We also show that this convergence may fail for activation functions
that violate our assumptions. | [
"theory",
"length map",
"initialization"
] | https://openreview.net/pdf?id=HJej3s09Km | https://openreview.net/forum?id=HJej3s09Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xrFvOTk4",
"BJxmsF5kp7",
"HJlETOcJTX",
"rygxqdckaX",
"BklpNmRoh7",
"S1gSYsX52Q",
"HyeYsYKPh7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544550269209,
1541544347214,
1541544124350,
1541544072243,
1541296949318,
1541188477422,
1541015969277
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper744/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper744/Authors"
],
[
"ICLR.cc/2019/Conference/Paper744/Authors"
],
[
"ICLR.cc/2019/Conference/Paper744/Authors"
],
[
"ICLR.cc/2019/Conference/Paper744/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper744/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper744/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"I appreciate that the authors are refuting a technical claim in Poole et al., however the paper has garnered zero enthusiasm the way it is written. I suggest to the authors that they rewrite the paper as a refutation of Poole et al., and name it as such.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Too narrow\"}",
"{\"title\": \"significance, and independence vs. near-independence\", \"comment\": \"We thank this reviewer for the valuable detailed suggestions regarding presentation. Thanks also for pointing out the need to define q_0 before the statement of Theorem 2; q_0 = 1.\\n\\nRegarding motivation, as we pointed out to Reviewer 1, the length map studied in our paper, itself published in NIPS\\u201916, has been repeatedly applied in a long series of papers published in NIPS, ICLR and ICML; we provide a list in the fourth paragraph of our paper. It is therefore noteworthy that the logic supporting this length map published in [14] is not valid, and the claim made about it in that paper is incorrect. This then motivates the question of what similar statement is correct. \\n\\nWe have spoken to two of the authors of [14] about our findings, and neither of them claimed that they meant \\u201cconditionally independent\\u201d when they wrote \\u201cindependent\\u201d. While, on a first reading, this claim in their paper struck us as highly implausible, we felt that it was necessary to prove our claim that their claim was incorrect. The experiments illustrate the strength of some of the effects analyzed in our paper.\\n\\nYou are correct that, as N gets large, the dependence between pairs of preactivation values becomes weaker. But then, when analyzing the next layer, there are competing effects: as N gets larger, the dependence between individual pairs of hidden nodes approaches zero, the number of such interferences approaches infinity, and the stability obtained by averaging more cases improves. A rigorous analysis must take account of all of these.\"}",
"{\"title\": \"motivation, and discussion of normalization\", \"comment\": \"Theorem 10 demonstrates a flaw in the logic provided in [14], motivating a new analysis. As discussed in some earlier papers in this line of research, if sigma_w and sigma_b are chosen so that q_{ell} = q_{ell-1}, very deep networks can be trained.\\n\\nWe agree that extending a rigorous analysis to concern batch normalization and weight normalization is an interesting direction for further research. It seems that the most interesting effects would occur during training. Framing this for tractable rigorous analysis looks like an interesting and important challenge.\"}",
"{\"title\": \"motivation for our analysis\", \"comment\": \"The length map studied in our paper, itself published in NIPS\\u201916, has been repeatedly applied in a long series of papers published in NIPS, ICLR and ICML; we provide a list in the fourth paragraph of our paper. It is therefore noteworthy that the logic supporting this length map published in [14] is not valid, and the claim made about it in that paper is incorrect. This in turn motivates the question of what similar statement is correct.\"}",
"{\"title\": \"an abstract analysis that does not aim to derive any conclusions\", \"review\": \"This paper performs an analysis of the length scale of activations for deep fully-connected neural networks with respect to the activation function in neural networks. The authors show that for a very large class of activation functions, the length process converges in probability.\\n\\nI am listing my main concerns about this manuscript below.\\n\\n1. The paper is poorly motivated and does not make an attempt to relate its results to observations in practice or the design of new techniques. It is an abstract analysis of the probability distribution of the activations.\\n\\n2. Theorem 2, which is the main theoretical contribution of the paper, hinges on fixing the inputs of the neural network with weights sampled randomly from a Gaussian distribution. It is difficult to connect this with practice. This is not unreasonable and indeed common in mean-field analyses. However such analyses go further in their implications, e.g., https://arxiv.org/abs/1606.05340, https://arxiv.org/abs/1806.05393 etc. This is my main concern about the paper, its lack of concrete implications despite the simplifying assumptions.\\n\\n3. It would be very interesting if the analysis in this manuscript informs new activation functions or new initialization methods for training deep networks.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"In this paper, the authors studied how the activation function affects the behavior of randomized deep networks.\", \"review\": \"* summary\\nIn this paper, the authors studied how the activation function affects the behavior of randomized deep networks. \\nWhen the activation function is permissible and the weights of the DNN are generated from a Gaussian distribution,\\nthe output of each layer was related to the so-called length process. When permissibility is violated,\\nthe convergence property may not hold. Some numerical experiments confirm the theoretical findings. \\n\\n\\n* comments\\nHowever, it is not clear whether the theoretical results in this paper, which concern randomized DNNs, are related to DNNs used in practice. \\nThe authors provide extensive proofs of the theorems.\\nI think that the relation between DNNs in practice and the results in this paper should be pursued more. \\n\\n* The meaning of Theorem 10 is not clear. What does the theorem reveal about the ReLU function in practical usage?\\n\\n* In this paper, a limit theorem in terms of the dimension N is considered.\\n However, the limit theorem in terms of the depth D is also important for the DNN.\\n Some comments on that would be helpful for readers. \\n\\n* Is there any relation between the analysis in this paper and batch normalization or weight normalization?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Technically correct, not well-written\", \"review\": \"Summary: the paper proves the convergence of empirical length map (length process) in NN to the length map for a permissible activation functions in a wide-network limit. The authors also show why the assumptions on the permissible functions can not be relaxed.\", \"quality\": \"the paper seems to be technically correct. However, the authors do not discuss any consequence of their result. Why was it important to prove it? What does it tell us about the networks? While the proof may be of interest to the authors of [14] to correct their (possible) mistakes, I think the paper will go under the radar for most people and thus encourage the authors to heavily revise the paper.\", \"clarity\": \"the writing is clear in general. The proofs sometimes jump over non-trivial things and explain easy steps, but that maybe subjective. The paper spends no effort explaining the contribution and its consequences.\", \"originality\": \"the proven statements are novel and extend/fix the claims of [14]\", \"significance\": \"as said above, I believe that in the current form the paper will have little to no impact. The importance of proving the main statement under more general conditions on activation functions is doubtful and the authors do not comment on that.\", \"minor_comments\": [\"when introducing T{i,:,:} the <> notation is not clear. I could guess it from the later usage of the symbol, but these brackets can mean a lot of things, e.g. bracket mean (Section 2.1)\", \"it would be beneficial to define the main objects, wide-network limits, in a more formal way (Section 2.3)\", \"how the wide regime (large N) is interesting for studying deep NNs? 
[14] discusses that to some extent, but this should be explained here as well\", \"q_0 is never defined\", \"it's good practice to add numbers to all equations\", \"I believe the claim in the appendix of [14] was meant to be conditionally independent (see also the reviews of [14]). It's clear that preactivations should not be independent and, while technically interesting, spending a page of theorem 10 and on plots seems unnecessary. Even in the paper's example preactivations are uncorrelated in the limit of large N.\", \"I don't see the point of having experiments in this paper. The authors have already proven the fact. Also, it is not clear how to read the plots (no axes, little description) and come to the statements from page 9.\", \"********************\", \"After the authors' response:\", \"If the main motivation of the paper is to fix the mistakes in [14], then the paper should clearly state so, in addition to explaining why fixing is necessary. While I believe that pointing out other paper's mistakes and correcting them is important, the current state of the paper leads me to keeping my initial score and recommending to reject the paper.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
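The "length process" and length map discussed in the record above can be illustrated with a small simulation. The sketch below is our own (tanh, sigma_w = 1.5, sigma_b = 0.1, and the width/depth are assumptions, not values from the paper): it compares the empirical normalized squared lengths of preactivations in a wide random network with binary input against the deterministic length map q_l = sigma_w^2 E_z[phi(sqrt(q_{l-1}) z)^2] + sigma_b^2, with q_0 = 1 as clarified in the authors' response.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh                   # a permissible activation (our assumption)
sigma_w, sigma_b = 1.5, 0.1     # weight/bias standard deviations (assumptions)
N, depth = 4000, 5              # network width and number of layers

# Binary-valued input, so the normalized squared length is q_0 = 1.
x = rng.choice([-1.0, 1.0], size=N)

# Empirical "length process": normalized squared length of each layer's
# preactivations, for one random draw of the weights and biases.
h, q_emp = x, []
for _ in range(depth):
    W = rng.normal(0.0, sigma_w / np.sqrt(N), size=(N, N))
    b = rng.normal(0.0, sigma_b, size=N)
    z = W @ h + b
    q_emp.append(float(np.mean(z ** 2)))
    h = phi(z)

# Deterministic length map, with the expectation over z ~ N(0, 1)
# approximated by Monte Carlo.
g = rng.normal(size=200_000)
q = sigma_w ** 2 * 1.0 + sigma_b ** 2     # q_1, obtained from q_0 = 1
q_map = [q]
for _ in range(depth - 1):
    q = sigma_w ** 2 * float(np.mean(phi(np.sqrt(q) * g) ** 2)) + sigma_b ** 2
    q_map.append(q)

print([round(v, 3) for v in q_emp])   # fluctuates around the length map
print([round(v, 3) for v in q_map])
```

At this width the empirical lengths track the map closely; the paper's point is that this convergence holds in probability as N grows, for permissible activations, and can fail otherwise.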
|
r1lohoCqY7 | Learning-Based Frequency Estimation Algorithms | [
"Chen-Yu Hsu",
"Piotr Indyk",
"Dina Katabi",
"Ali Vakilian"
] | Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning. The problem is typically addressed using streaming algorithms which can process very large data using limited storage. Today's streaming algorithms, however, cannot exploit patterns in their input to improve performance. We propose a new class of algorithms that automatically learn relevant patterns in the input data and use them to improve its frequency estimates. The proposed algorithms combine the benefits of machine learning with the formal guarantees available through algorithm theory. We prove that our learning-based algorithms have lower estimation errors than their non-learning counterparts. We also evaluate our algorithms on two real-world datasets and demonstrate empirically their performance gains. | [
"streaming algorithms",
"heavy-hitters",
"Count-Min",
"Count-Sketch"
] | https://openreview.net/pdf?id=r1lohoCqY7 | https://openreview.net/forum?id=r1lohoCqY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkxo9bEkgE",
"HkeXupr4Am",
"BylBDAB5aQ",
"H1gsmAr9aX",
"Skl8UhSqaX",
"SkgtrSEZ67",
"B1g6Ld0gaX",
"BkeVYzOCnm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544663442957,
1542901098770,
1542245980541,
1542245923068,
1542245453856,
1541649728665,
1541625940643,
1541468795593
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper743/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper743/Authors"
],
[
"ICLR.cc/2019/Conference/Paper743/Authors"
],
[
"ICLR.cc/2019/Conference/Paper743/Authors"
],
[
"ICLR.cc/2019/Conference/Paper743/Authors"
],
[
"ICLR.cc/2019/Conference/Paper743/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper743/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper743/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper conveys interesting ideas, but reviewers are concerned about the incremental nature of the results, the choice of comparators, and, in general, the empirical and analytical novelty.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Metareview\"}",
"{\"title\": \"Minor updates to the paper\", \"comment\": \"Dear reviewers,\\n\\nThank you again for the thoughtful comments. We made minor updates in the paper (labeled in blue) to address some of the notation issues. We also included more explanation of our problem in the introduction. We hope that this helps clarify any misunderstandings. Please let us know if you have any other comments.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for the thoughtful comments. We are glad that you found our problem interesting, and problem formulation/applications of this research well explained.\", \"regarding_the_competing_algorithms\": \"Both algorithms that we compare to, Count-Sketch and Count-Min, are state-of-the-art hashing-based algorithms (see e.g., Cormode & Hadjieleftheriou (2008)). Further, they are widely used in practice for processing internet traffic, large databases, query logs, web document repositories, etc.\\n\\nTo the best of our knowledge, our paper is the first to use machine learning to design better sketches for any streaming problem. We tried to cover related work thoroughly in section 2.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the thoughtful comments. We are glad that you found our algorithmic approach original, and our experiments promising.\\n\\nRegarding the notation, given that the topic of our paper is inherently interdisciplinary -- spanning machine learning and algorithm theory -- we need to use notions and notation from both communities. This can lead to misunderstandings, but there is no easy way around it. In the paper we tried to follow the notation used in heavy-hitter analysis in algorithm theory to make it easy to compare the analysis to past work. But since there is no standard notation across both fields, it is difficult to find a notation that is easily accessible to both communities. \\n \\nIn addition, there are indeed a few places in the paper where our phrasing could have been better, thank you for pointing this out. We discuss this in more detail below, and hope this should clarify any misunderstandings. \\n\\nRegarding our proofs, they are all self-contained.\\n\\n- The problem setting description is neither formal nor intuitive which made it very hard for me to understand exactly the problem you are trying to solve. Starting with S and i: I guess S and i are both simply varying-length sequences in U.\\n\\nTo clarify, the input S is a sequence *of elements* from some universe U. To give an example, we could have U={0...65535}, in which case the sequence S would consist of integers in the range 0...65535. For example, S = 10101, 21222, 10222, 1, 10, 1, 52233, 62223 is an example sequence of length 8 whose items belong to U.\", \"the_remainder_of_the_problem_definition_is_as_described_in_the_introduction\": \"a frequency estimation algorithm reads the sequence S in one pass, and after that, for any element i from U, reports an estimate of f_i, the number of times element i occurs in S. 
In the above example, we have, e.g., f_1=2.\\n\\n- In general the intro should focus more on an intuitive (and/or formal) explanation of the problem setting, with some equations that explain the problem you want to work on. Right now it is too heavy on 'related work' (this is just my opinion).\\n\\nThanks for the suggestions. We will include more explanation in the introduction and condense related work while keeping it thorough.\\n\\n- In describing Eqn 3 there are some weird remarks, e.g. \\\"N is the sum of all frequencies\\\". Do you mean that N is the total number of available frequencies? i.e. should it be |D|? It's not clear to me that the sum of frequencies would be bounded if D is not discrete.\\n\\n N is the sum of all frequencies; i.e., N = \\\\sum_{ i \\\\in U } f_i.\\n \\n- Your F and \\\\tilde{f} are introduced as infinite series. Maybe they should be {f1, f2,..., fN}, i.e. N queries, each of which you are trying to be estimate.\\n\\nThe series are indeed finite, we skipped the last index for simplicity. Formally, it should be F = {f_1, \\u2026, f_|U|} and ~F = {~f_1, \\u2026, ~f_|U|}\\n\\n- In general, you have to introduce the notation much more carefully. Your audience should not be expected to be experts in hashing for this venue!! 'C[1,...,B]' is informal abusive notation. You should clearly state using both mathematical notation AND using sentences what each symbol means. \\n\\nAs stated, C[1...B] is a one-dimensional array. Equivalently, it is a B-dimensional vector. We refer to C as an \\u201carray\\u201d as opposed to \\u201cvector\\u201d for the sake of consistency with prior work on frequency estimation, and to avoid nested subscripts. \\n\\nC[b] indeed denotes the b-th element/bin of C. Regarding the notation h: U -> [B] : we use [B] to denote the set {1...B}. We define it in Section 7, but we should have defined it earlier. 
The formula h: U->[B] indeed denotes a function h that maps elements of U to {1...B}.\\n\\n- Still it is unclear where 'fj' comes from. You need to state in words eg \\\"C[b] contains the accumulation of all fj's such that h(j)=b; i.e. for each sequence j \\\\in U, if the hash function h maps the sequence to bin b (ie $h(j)=b$), then we include the *corresponding frequency* in the sum.\\\"\\n\\nWe hope that after the earlier clarifications, the equation C[b] = sum_{j:h(j)=b} f_j is more clear now. \\n \\n- What I don't understand is how fj is dependent on h. When you say \\\"at the end of the stream\\\", you mean that given S, we are analyzing the frequency of a series of sequences {i_1,...,i_N}?\\n\\nf_j does not depend on h, only on the input sequence S. Since an element j can occur anywhere in S, the equation C[b] = sum_{j:h(j)=b} f_j holds only after the algorithm scans the whole sequence S. \\n\\n- The term \\\"sketch\\\" is used in Algorithm1, like 10, before 'sketch' is defined!!\\n\\nAs explained in the description, items not stored in unique buckets \\u201care fed to the remaining B \\u2212 Br buckets using a conventional frequency estimation algorithm SketchAlg\\u201d. The word \\u201csketch\\u201d in Algorithm 1 refers to the storage used by SketchAlg. To avoid confusion, we will shorten line 10 to \\u201cfeed i to SketchAlg\\u201d.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for the thoughtful comments. We are glad that you found our topic interesting and appreciated our theoretical analysis and experimental results. We address other comments below:\\n\\n[Results are only given for the Zipfian distribution]\\nMany real-world data naturally follow the Zipf\\u2019s Law, as we showed in Figure 5.1 and Figure 5.3 for internet traffic and search query data. Thus, our theoretical analysis assumes item frequencies follow the Zipfian distribution. While our analysis makes this assumption, our algorithm does not have any assumption on the frequency distribution.\\n\\n[Assuming query distribution is the same as data distribution]\\nAs the reviewer pointed out, the query distribution we use is a natural choice. There might be other types of query distributions, such as the one pointed out by the reviewer. Intuitively, our overall approach that separates heavy hitters from the rest should still be beneficial to such query distribution.\\n\\n[Algorithm design]\\nWe agree that our algorithms are relatively simple. We believe this is a feature not a bug: as we showed in Sec. 4.1, our algorithm does not need to be more complex. Specifically, our Learned Count-Min algorithm achieves the same asymptotic error as the \\u201cIdeal Count-Min\\u201d, which is allowed to optimize the whole hash function for the specific given input (Theorem 7.14 and Theorem 8.4 in Table 4.1). The proof of this statement demonstrates that identifying heavy hitters and placing them in unique bins is an (asymptotically) optimal strategy. (In fact, our first attempt at solving the problem was a much more complex algorithm which optimized the allocation of elements to the buckets (i.e., the whole hash function h) to minimize the error. 
This turned out to be unnecessary, as per the above argument.)\\n\\n[Novelty compared to Mitzenmacher\\u2019 18]\\nOur paper, as well as the works of Kraska et al \\u201918, Mitzenmacher \\u201918, Lykouris &\\nVassilvitskii \\u201918, Purohit et al, NIPS\\u201918, belong to a growing class of studies that use a machine learning oracle to improve the performance of algorithms. All such papers use a learned oracle of some form. The key differences are in what the oracle does, how it is used, and what can be proved about it. In Kraska\\u201918 and Mitzenmacher\\u201918, the oracle tries to directly solve the main problem, which is: \\u201cis the element in the set?\\u201d An analogous approach in our case would be to train an oracle that directly outputs the frequency of each element. However, instead of trying to directly solve the main problem (estimate the frequency of each element), our oracle is a subroutine that tries to predict the best resource allocation --i.e., it tries to answer the question of which elements should be given their own buckets and which should share with others. \\n\\nThere are other differences. For example, the main goal of our algorithm is to reduce collisions between heavy items, as such collisions greatly increase errors. This motivates our design to split heavy and light items using the learned model, and apply separate algorithms for each type. In contrast, in existence indices, all collisions count equally. \\n\\nFinally, our theoretical analysis is different from M'18 due to the intrinsic differences between the two problems, as outlined in the previous paragraph. \\n\\n[The analysis is relatively straightforward]\", \"there_are_three_main_theorems_in_our_paper\": \"Theorem 8.4, Theorem 7.11 and 7.14. Our proofs of Theorem 7.11 and 7.14 are technically involved, even if the techniques are relatively standard. On the other hand, the proof of Theorem 8.4 uses entirely different techniques. 
In particular, it provides a characterization of the hash function optimized for a particular input.\\n\\n[The machine learned Oracle is assumed to be flawless at identifying the Heavy Hitters]\\nActually, this is not the case. The analysis in the paper already takes into account errors in the machine learning oracle. Please see the 2nd paragraph of Sec. 4.1 and Lemma 7.15. In summary, our results hold even if the learned oracle makes prediction errors with probability O(1/ln(n)). We will revise the text to make it clearer.\"}",
"{\"title\": \"A good problem discussed and the proposed ML approach seems reasonable.\", \"review\": \"The authors are proposing an end-to-end learning-based framework that can be incorporated into all classical frequency estimation algorithms in order to learn the underlying nature of the data in terms of frequency in data streaming settings, and which does not require labeling. According to my understanding, the other classical streaming algorithms also do not require labeling, but the novelty here I guess lies in learning the oracle (HH), which feels like a logical thing to do, as such learning using neural networks has worked well for many other problems.\\n\\nThe problem formulation and applications of this research are well explained and the paper is well written for readers to understand. The experiments show that the learning-based approach performs better than all of its unlearned counterparts. \\n\\nBut the only negative aspect is that the baseline competitor algorithms are very simple in nature, involve no form of learning, and are quite old. So, I am not sure if there are any newer machine-learning-based frequency estimation algorithms.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Unclear problem setting\", \"review\": \"Quality/clarity:\\n- The problem setting description is neither formal nor intuitive which made it very hard for me to understand exactly the problem you are trying to solve. Starting with S and i: I guess S and i are both simply varying-length sequences in U.\\n- In general the intro should focus more on an intuitive (and/or formal) explanation of the problem setting, with some equations that explain the problem you want to work on. Right now it is too heavy on 'related work' (this is just my opinion).\\n\\nOriginality/Significance:\\nI have certainly never seen a ML-based paper on this topic. The idea of 'learning' prior information about the heavy hitters seems original.\", \"pros\": \"It seems like a creative and interesting place to use machine learning. the plots in Figure 5.2 seem promising.\", \"cons\": [\"The formalization in Paragraph 3 of the Intro is not very formal. I guess S and i are both simply varying-length sequences in U.\", \"In general the intro should focus more on an intuitive (and/or formal) explanation of the problem setting, with some equations that explain the problem you want to work on. Right now it is too heavy on 'related work' (this is just my opinion).\", \"-In describing Eqn 3 there are some weird remarks, e.g. \\\"N is the sum of all frequencies\\\". Do you mean that N is the total number of available frequencies? i.e. should it be |D|? It's not clear to me that the sum of frequencies would be bounded if D is not discrete.\", \"Your F and \\\\tilde{f} are introduced as infinite series. Maybe they should be {f1, f2,..., fN}, i.e. N queries, each of which you are trying to be estimate.\", \"In general, you have to introduce the notation much more carefully. Your audience should not be expected to be experts in hashing for this venue!! 'C[1,...,B]' is informal abusive notation. You should clearly state using both mathematical notation AND using sentences what each symbol means. 
My understanding is that h:U->b is a function from universe U to natural number b, where b is an element from the discrete set {1,...,B}, to be used as an index for vector C. The algorithm maintains this vector C\\\\in N^B (ie C is a B-length vector of natural numbers). In other words, h is mapping a varying-length sequence from U to an *index* of the vector C (a.k.a: a bin). Thus C[b] denotes the b-th element/bin of C, and C[h(i)] denotes the h(i)-th element.\", \"Still it is unclear where 'fj' comes from. You need to state in words eg \\\"C[b] contains the accumulation of all fj's such that h(j)=b; i.e. for each sequence j \\\\in U, if the hash function h maps the sequence to bin b (ie $h(j)=b$), then we include the *corresponding frequency* in the sum.\\\"\", \"What I don't understand is how fj is dependent on h. When you say \\\"at the end of the stream\\\", you mean that given S, we are analyzing the frequency of a series of sequences {i_1,...,i_N}?\", \"Sorry, it's just confusing and I didn't really understand \\\"Single Hash Function\\\" from Sec 3.2 until I started typing this out.\", \"The term \\\"sketch\\\" is used in Algorithm 1, line 10, before 'sketch' is defined!!\", \"-I'm not going to trudge through the proofs, because I don't think this is self-contained (and I'm clearly not an expert in the area).\"], \"conclusion\": \"Honestly, this paper is very difficult to follow. However, to sum up the idea: you want to use deep learning techniques to learn some prior on the hash-estimation problem, in the form of a heavy-hitter oracle. It seems interesting and shows promising results, but the presentation has to be cleaned up for publication in a top ML venue.\\n\\n\\n\\n******\", \"update_after_response\": \"The authors have provided improvements to the introduction of the problem setting, satisfying most of my complaints from before. 
I am raising my score accordingly, since the paper does present some novel results.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Interesting topic, somewhat trivial algorithms and somewhat narrow results\", \"review\": \"This paper introduces the study of the problem of frequency estimation algorithms with machine learning advice. The problem considered is the standard frequency estimation problem in data streams where the goal is to estimate the frequency of the i-th item up to an additive error, i.e. the |\\\\tilde f_i - f_i| should be minimized where \\\\tilde f_i is the estimate of the true frequency f_i.\", \"pros\": \"-- Interesting topic of using machine learned advice to speed up frequency estimation is considered\\n-- New rigorous bounds are given on the complexity of frequency estimation under Zipfian distribution using machine learned advice\\n-- Experiments are given to justify claimed improvements in performance\", \"cons\": \"-- While the overall claim of the paper in the introduction seems to be to speed up frequency estimation using machine learned advice, results are only given for the Zipfian distribution.\\n\\n-- The overall error model in this paper, which is borrowed from Roy et al. is quite restrictive as at it assumes that the queries to the frequency estimation data structure are coming from the same distribution as that given by f_i\\u2019s themselves. 
While in some applications this might be natural, this is certainly very restrictive in situations where f_i\\u2019s are updated not just by +/-1 increments but through arbitrary +/-Delta updates, as in this case it might be more natural to assume that the distribution of the queries might be proportional to the frequency that the corresponding coordinate is being updated, for example.\\n\\n-- The algorithm proposed in the paper is very straightforward and just removes heavy hitters using oracle advice and then hashes everything else using the standard CountMin sketch.\\n\\n-- Since CountMin is closely related to Bloom filters, the idea of using machine learning to speed it up appears to be noticeably less novel given that for Bloom filters this has already been done by Mitzenmacher\\u201918.\\n\\n-- The analysis is relatively straightforward and boils down to bucketing the error and integration over the buckets.\", \"other_comments\": \"-- The machine learned advice is assumed to be flawless at identifying the Heavy Hitters, authors might want to consider incorporating errors in the analysis.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1esnoAqt7 | Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning | [
"Daniel C. Castro",
"Jeremy Tan",
"Bernhard Kainz",
"Ender Konukoglu",
"Ben Glocker"
] | Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue we introduce Morpho-MNIST, a framework that aims to answer: "to what extent has my model learned to represent specific factors of variation in the data?" We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. | [
"quantitative evaluation",
"diagnostics",
"generative models",
"representation learning",
"morphometrics",
"image perturbations"
] | https://openreview.net/pdf?id=r1esnoAqt7 | https://openreview.net/forum?id=r1esnoAqt7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkego4wwkV",
"SygMazOVR7",
"S1eAb9jpTX",
"rylzwEUXa7",
"SJghAtFWaX",
"SkgGgAJZTm",
"HkehOTkW6X",
"Hygjmwnqhm",
"S1etazq92Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544152215892,
1542910650105,
1542466054005,
1541788761992,
1541671380358,
1541631466367,
1541631348135,
1541224227436,
1541214913481
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper742/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper742/Authors"
],
[
"ICLR.cc/2019/Conference/Paper742/Authors"
],
[
"ICLR.cc/2019/Conference/Paper742/Authors"
],
[
"ICLR.cc/2019/Conference/Paper742/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper742/Authors"
],
[
"ICLR.cc/2019/Conference/Paper742/Authors"
],
[
"ICLR.cc/2019/Conference/Paper742/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper742/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a dataset for measuring disentanglement in learned representations. It consists of MNIST digits, sometimes transformed in various ways, and labeled with a variety of attributes. This dataset is used to measure statistics of various learned models.\\n\\nMeasuring disentanglement is certainly an important problem in our field. This dataset seems to be well designed, and I would recommend its use for papers studying disentanglement. The experiments are well-designed. While the reviewers seem bothered by the fact that it's limited to MNIST, this doesn't strike me as a problem. We continue to learn a lot from MNIST, even today.\\n\\nBut producing a useful dataset isn't by itself a significant enough research contribution for an ICLR paper. I'd recommend publication if (a) it were very different from currently existing datasets, (b) constructing it required overcoming significant technical obstacles, or (c) the dataset led to particularly interesting findings.\\n\\nRegarding (a), there are already datasets of similar complexity which have ground-truth attributes useful for measuring disentanglement, such as dSprites and 3D Faces. Regarding (b), the construction seems technically straightforward. Regarding (c), the experimental findings are plausible and consistent with past findings (which is a good validation of the dataset) but not obviously interesting in their own right.\\n\\nSo overall, this seems like a useful dataset, but I cannot recommend publication at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"a useful dataset, but not enough of a contribution for an ICLR paper\"}",
"{\"title\": \"Revision uploaded with clarifications\", \"comment\": \"Dear Reviewers,\\n\\nWe have uploaded a revision of our paper, taking into account your feedback and attempting to clarify any misunderstandings, as outlined in our responses below.\\n\\nPlease consider reevaluating the new version, and thank you once again.\"}",
"{\"title\": \"asking for reviewer feedback\", \"comment\": \"Dear Reviewers,\\n\\nWe appreciate your time on assessing our paper, and we would highly value to hear back whether our responses to the criticism and our additional argumentation may change your recommendation. We believe that our work could be of great interest to the ICLR community and may initiate a very much needed discussion about objective and quantitative evaluation in representation learning.\\n\\nAs we believe most of the main criticism was based on misunderstanding that can be easily addressed in an updated version of the paper, we would love to hear what you think. Is there anything you believe we missed in our responses, or any other points you would like us to address in order to improve our paper?\\n\\nMany thanks again for your time and valuable feedback.\"}",
"{\"title\": \"Incorrect assumptions\", \"comment\": \"We thank the reviewer for acknowledging the importance of the problem we aimed to address, however, we very much disagree with the statements made regarding our assumptions.\\n\\nRegarding the reviewer\\u2019s first point, we believe there is a misunderstanding. We absolutely agree that a generative model needs to learn about colour, texture, and low-level pixel relations to be able to extract its representations and to produce reasonable samples. Regarding the reviewer\\u2019s statement that \\u201cthe authors have assumed that the latent space of the generative models are influenced only by the morphological properties of the image\\u201d, we would like to stress that we never made such assumptions nor have we claimed that the latent space of models trained on MNIST capture exclusively shape variations. What the Morpho-MNIST methodology aims to answer is: \\u201cto what extent has my model learned to represent these specific factors of variation in the data?\\u201d If colour and texture are important factors for a given application or dataset, it suffices to design the relevant scalar metrics and include them in the very same framework.\\n\\nThis brings us to the second point. As far as we are aware, this is the first attempt in *any* context to quantitatively characterise inferential and generative behaviour of learned representations. We propose to do it in terms of measurable features: here we exploit shape attributes, and in the conclusion we point to various possible extensions involving colours or object properties. In our view, it just makes sense that the first step in that direction builds on a simple dataset with well understood and easily measurable factors of variation.\\n\\nFinally, although it is correct that there are no generalisability guarantees, that is the case for any model evaluated on MNIST, CIFAR-10, or even ImageNet (cf. https://arxiv.org/abs/1806.00451, for example). 
As argued above, we are proposing a toolset to inspect and diagnose trained generative models that works with any collection of measurable attributes. Evidently conclusions may not be transferable if the datasets have different relevant attributes.\"}",
"{\"title\": \"Review: Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning\", \"review\": \"This paper discusses the problem of evaluating and diagnosing the representations learnt using a generative model. This is a very important and necessary problem.\\n\\nHowever, this paper lacks in terms of experimental evaluation and has some technical flaws.\\n1. Morphological properties deal with only the \\\"shape\\\" properties of the image object. However, when the entire image is subject to the generative model, it learns multiple properties from the image apart from shape too - such as texture and color. Additionally, there are a lot of low level pixel relations that the model learns to fit the distribution of the given images. However, here the authors have assumed that the latent space of the generative models is influenced only by the morphological properties of the image - which is wrong. Latent space features could be affected by the color or texture of the image as well.\\n\\n2. Extracting morphological properties of the image is straightforward for MNIST kind of objects. However, it becomes really difficult for other datasets such as CIFAR or some real world images. Studying the properties of a generative model on such datasets is very challenging and the authors have not added a discussion around that. \\n\\n3. Now assuming that my GAN model has learnt good representations on the Morpho-MNIST dataset, is it guaranteed to learn good representations on other datasets as well? There is no guarantee on generalizability or extensibility of the work.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Inadequate review, main contributions ignored\", \"comment\": \"With all due respect, the reviewer\\u2019s summary of our work is inaccurate, incomplete and ignores the key part of our contributions. This suggests that the reviewer may not have carefully read our paper. The review consists of three lines, which we will address below. We strongly believe that the overall recommendation based on the reviewer\\u2019s comments is unjustified.\\n\\nRegarding the first statement \\u201cI am not sure generating data is sufficient contribution for a paper\\u201d:\\n\\nGenerating a dataset is a relatively minor portion of this work; please refer to Sec. 1.1 for a clear outline of the main points. The bulk of our paper is about how morphometrics enable quantitative evaluation of learned representations (Section 4), and how the proposed image perturbations can enrich this evaluation and also open up a variety of new supervised tasks (explored in Appendix D). \\n\\nRegarding the second statement \\u201cI am not sure what conclusion I should draw from Fig 5 and Fig 6 about the data\\u201d:\\n\\nAs described clearly in the text, Fig. 5 shows quantitative results for _inferential_ disentanglement (i.e. from real MNIST test data to latent codes) of two different InfoGANs, and Fig. 6 for _generative_ disentanglement (from latent codes to generated samples). In the paper we state \\u201cas the tables are mostly indistinguishable, we may argue that in this case the inference and generator networks have learned to consistently encode and decode the digit shape attributes.\\u201d To the best of our knowledge, this is the first time that partial correlations have been used to illustrate and quantitatively characterise the performance of representation learning, thanks to extracted morphometric attributes as proposed in the paper.\\n\\nThe final statement \\u201cEventually this data can become a benchmark data when it is paired with a method. 
Then that method/data are a benchmark\\u201d is not very clear to us. Our paper introduces both a novel quantitative assessment _methodology_ and new _datasets_ for experimentation and benchmarking of representation learning methods. We provide baseline results for recent approaches, including different variants of GANs and VAEs.\\n\\nIn this light, we would like to reiterate that we strongly believe the reviewer\\u2019s assessment of our paper is inadequate.\"}",
"{\"title\": \"Representation learning vs. sample quality\", \"comment\": \"We appreciate the thoughtful review and suggestions for adding clarifications, in particular with respect to the aspect of natural image generation.\\n\\nWe completely agree that MNIST generation is not to be mistaken as a surrogate for the generation of natural images, and nowhere in our paper did we intend to suggest otherwise. We thank the reviewer for bringing up this potential confusion, and we will add further clarification to make sure there is no ambiguity about this in our paper.\\n\\nAs a matter of fact, we believe that the current focus of research towards generating natural images can be misleading (or at least gives an incomplete picture) in the context of representation learning, as the quality of sampled images generally tells us little about how well the learned representations capture the known factors of variation in the training distribution.\\n\\nWith Morpho-MNIST, we aimed to address this issue by providing an objective methodology for evaluating representation learning, i.e. quantitatively measuring how expressive a trained generative model is and how well it covers the variability in the data, in our case defined by morphometry of shapes represented as grayscale images. In fact, we make no statements about measuring sample _quality_, only the _diversity_ of shape attributes. Our conclusion does point to possible extensions of this framework to other measurable content attributes for different types of images.\\n\\nThe fact that VAE and GAN performed similarly under our metrics shows only that, for this data and similar model capacities, the representations they learned are comparably expressive. We actually believe this is an important message to convey, as a large body of work focusing on the crispness of generated images might incorrectly lead to a conclusion that VAEs are generally inferior to GANs with respect to representation learning. 
Here, we can demonstrate quantitatively that for the considered type of distribution (morphometry of rasterised shapes) this is not the case.\\n\\nOn your point of \\u201cwhether evaluating on MNIST is a good proxy for the performance of the model on colored images with backgrounds or not\\u201d we would say the answer is clearly no. But, as mentioned above and hopefully made more clear in our revision, it was never our intention to imply otherwise. We also couldn\\u2019t agree more with your statement \\u201cI'm not convinced that [the] ability of a model in disentangling thickness correlates to their ability in natural image generation.\\u201d These are very different problems.\\n\\nHowever, we would like to reiterate some of our reasons for focusing on MNIST, as presented in the introduction: few and simple factors of variation, sufficient size for its complexity, low computational requirements, and availability. Importantly, MNIST is a standard baseline on which a great number of generative models proposed in the literature are evaluated. This means that our framework can also be applied to these models retrospectively, adding novel insights about their performance in a more objective and quantitative manner. We do believe this is an important contribution to the area of representation learning.\\n\\nHere are a few such prominent works, in addition to the ones cited in our paper:\\n- Goodfellow et al. (NIPS 2014). Generative Adversarial Nets.\\n- Nowozin et al. (NIPS 2016). f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization.\\n- Radford et al. (ICLR 2016). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.\\n- Salimans et al. (NIPS 2016). Improved Techniques for Training GANs.\\n- Rezende & Mohamed (ICML 2015). Variational Inference with Normalizing Flows.\\n- Mescheder et al. (ICML 2017). 
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks.\"}",
"{\"title\": \"Interesting characterisation and extension of MNIST\", \"review\": \"Authors present a set of criteria to categorize MNISt digists (e.g. slant, stroke length, ..) and a set of interesting perturbations (swelling, fractures, ...) to modify MNIST dataset. They suggest analysing performance of generative models based on these tools. By extracting this kind of features, they effectively decrease the dimmension of data. Therefore, statistically comparing the distribution of generated vs test data and binning the generated data is now possible. They perform a thorough study regarding MNIST. Their tools are a handy addition to the analytical surveys in several applications (e.g. how classification fails), but not convincingly for generation.\\n\\nSince their method is manually designed for MNIST, the manuscript would benefit from a justification or discussion on the common pitfalls and the correlation between MNIST generation and more complex natural image generation tasks. Since the presented metrics do not show a significant difference between the VAE and Vanilla GAN model, the question remains whether evaluating on MNIST is a good proxy for the performance of the model on colored images with backgrounds or not. For example sharpness and attending to details is not typically a challenge in MNIST generation where in other datasets this is usually the first challenge to be addressed. I'm not convinced that ability of a model in disentangling thickness correlates to their ability in natural image generation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"not enough contribution\", \"review\": [\"The author proposed an extended version of MNIS where they introduced thickening/thinning/swelling/fracture. The operation is done using binary morphological operations.\", \"Providing benchmark data for tasks such disentanglement is important but I am not sure generating data is sufficient contribution for a paper.\", \"I am not sure what conclusion I should draw from Fig 5 and Fig 6 about the data.\", \"Eventually this data can become a benchmark data when it is paired with a method. Then that method/data are a benchmark.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1lo3sC9KX | Asynchronous SGD without gradient delay for efficient distributed training | [
"Roman Talyansky",
"Pavel Kisilev",
"Zach Melamed",
"Natan Peterfreund",
"Uri Verner"
] | Asynchronous distributed gradient descent algorithms for training of deep neural
networks are usually considered as inefficient, mainly because of the Gradient delay
problem. In this paper, we propose a novel asynchronous distributed algorithm
that tackles this limitation by well-thought-out averaging of model updates, computed
by workers. The algorithm allows computing gradients along the process
of gradient merge, thus, reducing or even completely eliminating worker idle time
due to communication overhead, which is a pitfall of existing asynchronous methods.
We provide theoretical analysis of the proposed asynchronous algorithm,
and show its regret bounds. According to our analysis, the crucial parameter for
keeping high convergence rate is the maximal discrepancy between local parameter
vectors of any pair of workers. As long as it is kept relatively small, the
convergence rate of the algorithm is shown to be the same as the one of a sequential
online learning. Furthermore, in our algorithm, this discrepancy is bounded
by an expression that involves the staleness parameter of the algorithm, and is
independent on the number of workers. This is the main differentiator between
our approach and other solutions, such as Elastic Asynchronous SGD or Downpour
SGD, in which that maximal discrepancy is bounded by an expression that
depends on the number of workers, due to gradient delay problem. To demonstrate
effectiveness of our approach, we conduct a series of experiments on image
classification task on a cluster with 4 machines, equipped with a commodity communication
switch and with a single GPU card per machine. Our experiments
show a linear scaling on 4-machine cluster without sacrificing the test accuracy,
while eliminating almost completely worker idle time. Since our method allows
using commodity communication switch, it paves a way for large scale distributed
training performed on commodity clusters. | [
"SGD",
"distributed asynchronous training",
"deep learning",
"optimisation"
] | https://openreview.net/pdf?id=H1lo3sC9KX | https://openreview.net/forum?id=H1lo3sC9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgxb5znkE",
"SJeYoULs2m",
"ByeWt-9S2Q",
"H1lgeOmEo7"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544460792329,
1541265057018,
1540886904783,
1539745767796
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper740/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper740/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper740/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper740/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Improving the staleness of asynchronous SGD is an important topic. This paper proposed an algorithm to restrict the staleness and provided theoretical analysis. However, the reviewers did not consider the proposed algorithm a significant contribution. The paper still did not solve the staleness problem, and it was lack of discussion or experimental comparison with the state of the art ASGD algorithms. Reviewer 3 also found the explanation of the algorithm hard to follow.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not significant contribution and not sufficient experiments\"}",
"{\"title\": \"Interesting paper but the contribution seems not be good enough\", \"review\": \"Overall, this paper is well written and clearly present their main contribution.\\nHowever, the novel asynchronous distributed algorithm seems not be significant enough.\\nThe delayed gradient condition has been widely discussed, but there are not enough comparison between these variants.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"missing references, theory is not novel, experiments are not sufficient\", \"review\": [\"The paper proposes an algorithm to restrict the staleness in ASGD (asynchronous SGD), and also provides theoretical analysis. This is an interesting and important topic. However, I do not feel that this paper solves the fundamental issue - the staleness will be still very larger or some workers need to stay idle for a long time in the proposed algorithm if there exists some extremely slow worker. To me, the proposed algorithm is more or less just one implementation of ASGD, rather than a new algorithm. The key trick in the algorithm is collecting all workers' gradients in the master machine and update them at once, while hard limiting the number of updates in each worker. The theoretical analysis is not brand new. The\", \"line 6 in Algorithm 1 makes the delay a random variable related to the speed of a worker. The faster a worker is, the larger the tau is, which invalidates the assumption implicitly used in the theoretical analysis.\", \"The experiment is done with up to 4 workers, which is not sufficient to validate the advantages of the proposed algorithm compared to state of the art ASGD algorithms. The comparison to other ASGD implementations is also missing, such as Hogwild! and Allreduce.\", \"In addition, I am so surprised that this paper only have 10 references (the last one is duplicated). 
The literature review is quite shallow and many important work about ASGD are missing, e.g.,\", \"Parallel and distributed computation: numerical methods, 1989.\", \"Distributed delayed stochastic optimization, NIPS 2011.\", \"Hogwild!, NIPS 2011\", \"Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization, NIPS 2015\", \"An asynchronous mini-batch algorithm for regularized stochastic optimization, 2016.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"I don't understand why the proposed method is an asynchronous method\", \"review\": \"This paper tries to propose a so-called hybrid algorithm to eliminate the gradient delay of asynchronous methods. The authors propose algorithm 1 and a simplified version algorithm 2 and prove the convergence of algorithm 2 in the paper. The paper is very hard to follow, especially the algorithm description part. What I can understand is that the authors want to let the fast workers do more local updates until the computation in the slowest worker is done. The idea is similar to EASGD except that it forces the workers to communicate the server once the slowest one has completed their job.\", \"the_following_are_my_concerns\": \"1. Do you consider the overhead in constructing the communication between machines? in your method, workers are keeping notifying servers that they are done with the computation. \\n2. In Algorithm 1 line 9 and line 23, there are two assignments: x_init =x and x_init=ps.x, is there any conflict? \\n3. In Algorithm 2, at line 6 workers wait to receive ps.x, at line 20 server wait for updates. I think there is a bug, and nothing can be received at both ends.\\n4. The experiments are too weak. There is no comparison between other related methods, such as downpour, easgd.\\n5. The authors test resnet50 on cifar10, however, there is no accuracy result. They show the result by using googlenet, why not resnet50? I am curious about the experimental settings.\\n\\nAbove all, the paper is hard to follow and the idea is very trivial. Experiments in the paper are also very weak.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
rkxjnjA5KQ | Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation | [
"Shani Gamrian",
"Yoav Goldberg"
] | Deep Reinforcement Learning has managed to achieve state-of-the-art results in learning control policies directly from raw pixels. However, despite its remarkable success, it fails to generalize, a fundamental component required in a stable Artificial Intelligence system. Using the Atari game Breakout, we demonstrate the difficulty of a trained agent in adjusting to simple modifications in the raw image, ones that a human could adapt to trivially. In transfer learning, the goal is to use the knowledge gained from the source task to make the training of the target task faster and better. We show that using various forms of fine-tuning, a common method for transfer learning, is not effective for adapting to such small visual changes. In fact, it is often easier to re-train the agent from scratch than to fine-tune a trained agent. We suggest that in some cases transfer learning can be improved by adding a dedicated component whose goal is to learn to visually map between the known domain and the new one. Concretely, we use Unaligned Generative Adversarial Networks (GANs) to create a mapping function to translate images in the target task to corresponding images in the source task. These mapping functions allow us to transform between various variations of the Breakout game, as well as between different levels of a Nintendo game, Road Fighter. We show that learning this mapping is substantially more efficient than re-training. A visualization of a trained agent playing Breakout and Road Fighter, with and without the GAN transfer, can be seen in \url{https://streamable.com/msgtm} and \url{https://streamable.com/5e2ka}. | [
"Transfer Learning",
"Reinforcement Learning",
"Generative Adversarial Networks",
"Video Games"
] | https://openreview.net/pdf?id=rkxjnjA5KQ | https://openreview.net/forum?id=rkxjnjA5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xASVnSxE",
"rkxuiMsdAQ",
"Bkg8xwAHT7",
"ryx5jL0STm",
"rJEql0HT7",
"HyerEyRB6m",
"H1xIHS96hQ",
"rkeqDrUTnQ",
"HJx6ByVYhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545090118477,
1543185056083,
1541953261878,
1541953185849,
1541951627515,
1541951277205,
1541412157760,
1541395810175,
1541123909495
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper739/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper739/Authors"
],
[
"ICLR.cc/2019/Conference/Paper739/Authors"
],
[
"ICLR.cc/2019/Conference/Paper739/Authors"
],
[
"ICLR.cc/2019/Conference/Paper739/Authors"
],
[
"ICLR.cc/2019/Conference/Paper739/Authors"
],
[
"ICLR.cc/2019/Conference/Paper739/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper739/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper739/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an transfer learning approach to reinforcement learning, where observations from a target domain are mapped to a source domain in which the algorithm was originally trained. Using unsupervised GAN models to learn this mapping from unaligned samples, the authors show that such a mapping allows the RL agent to successfully interact with the target domain without further training (apart from training the GAN models). The approach is empirically validated on modified versions of the Atari game breakout, as well as subsequent levels of Road Fighter, showing good performance on the transfer domain with a fraction of the samples that would be required for retraining the RL algorithm from scratch.\\n\\nThe reviewers and AC note the strong motivation for this work and emphasize that they find the idea interesting and novel. Reviewer 3 emphasizes the detailed analysis and results. Reviewer 2 notes the innovative idea to evaluate GANs in this application domain. Reviewer 1 identifies a key contribution in the thorough empirical analysis of the generalization issues that plaque current RL algorithms, as well as the comparison between different GAN models and finding their performance to be task-specific.\", \"the_reviewers_and_ac_noted_several_potential_weaknesses\": \"The proposed training based on images collected by an untrained agent focus the data on experience that agents would see very early on in the game, and may lead to generalization issues in more advanced parts of the game. Indeed these generalization issues are one possible explanation for the discrepancies between qualitative and quantitative results noted by reviewer 1. While the quantitative results indicate good performance on the target task, the image to image translation makes substantial errors, e.g., hallucinating blocks in breakout and erasing cars in Road Fighter. To the AC, the current paper does not provide enough insight into why the translation approach works even in cases where key elements are added or removed from the scene. The paper would benefit from a revision that thoroughly analyses such cases as well as the reason why the trained RL policy is able to generalize to them.\\n\\nR1 further notes that the paper does not address the RL generalization issue, but rather presents an empirical study that shows that in specific cases it is easier to translate from a target to a source domain, than to learn a policy for the target domain. The AC shares this concern, especially given the limited error analysis and conceptual insights derived from the empirical study. There are further concerns about the experimental protocol and hyper-parameter selection on the target tasks. Finally reviewer 1 questions the claim of whether data efficiency matters more than training efficiency in the proposed setting.\\n\\nThere is disagreement about this paper. Reviewers 2 and 3 gave high scores and positive reviews, but did not provide sufficient feedback to the concerns raised by reviewer 1, who put forward significant concerns. \\n\\nThe AC is particularly concerned about the experimental protocol and hyper-parameter tuning directly on the test tasks. The authors counter this point by noting that \\\"We agree that selecting configurations based on the test set is far from ideal, but we also note that this is the de-facto standard in video game-playing RL works, so we do not believe our work is any worse than others in the literature in this regard.\\\" The AC worries about the lack of motivation to identify a strong empirical setup to arrive at the strongest possible contribution. A key concern here is that the results seem to vary substantially by task, GAN model used, etc. and substantial tuning on the target domain seems to be required. This makes it hard to draw any generalizable conclusions. This concern can be alleviated by including additional analysis, e.g., error analysis of where a proposed approach fails, or additional experiments designed to isolate the factors that contribute to a particular performance level. However, the current paper does not go to this detail of empirical exploration. Given these concerns, I recommend not accepting the paper at the current stage.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting result, too little conceptual contribution\"}",
"{\"title\": \"Re: Post-rebuttal Recommendation\", \"comment\": \"Thank you for taking our comments seriously and bumping your score.\\n\\nWe updated the paper's related work section with a discussion of Bousmalis et al and similar work, and how it differs from our learning setup and from our proposal (last paragraph of that section).\\n\\nWe are sorry that you feel that we dismissed your concerns about the evaluation protocol, but it is indeed a de-facto standard in RL works, and we are not sure how to resolve this issue. If we misunderstood your concern, or if you have a suggestion on how to address this issue, we'd be very grateful to hear them.\\n\\nRegarding the generalization issue, let us attempt to clarify our argument. RL agents trained on pixels are very sensitive to the specifics of the data, and fail to generalize even to small variations. As a result, a trained agent is restricted to the data it was trained on. One strategy for attacking this problem is to adapt the RL training process to make the agent less overfitted to nuances of the training domain. This area is explored by others. We propose a different strategy, which we have not seen before, and show it is effective: rather than changing the training procedure, we use the existing agent but change how it perceives the environment by training and unaligned GAN to map between the target and source environments. The generalization across domains is performed by the unaligned GAN training. \\nWe show that this strategy is effective for the RL setup, both on the somewhat artificial modified-breakout setup, and on the realistic and challenging game-level transfer scenario. \\n\\nFinally, we indeed do not propose modified GAN or RL methods, but instead use well established techniques, which we combine in a novel way, to produce a novel result. We do not see why this is seen as a deficiency of our work: there is already a very large number of GAN variants and RL variants, most of them do not seem to improve in any substantial way over the variants that we used, or to generalize beyond the paper that introduced them. We do not see a reason to add another small-and-quickly-forgotten variant to the literature just to seem more \\\"technically strong\\\" while a simple approach that re-uses existing components suffices. Our aim in this work is to propose a new method for transferring knowledge across related RL domains. We show that our proposal is vastly superior to existing approaches (fine-tuning), and also substantially more sample efficient than training from scratch, and despite the current limitations of the state-of-the-art components that we used. We believe this is sufficiently interesting without \\\"beefing it up\\\" with a new GAN architecture.\"}",
"{\"title\": \"Authors' response to Reviewer1 - part 1\", \"comment\": \"Thank you for your time in reviewing our paper.\", \"discrepancy_between_quantitative_and_qualitative_results\": \"While we agree that the transferred agent does not perform perfectly, we disagree that it succeeds \\u201conly due to biases in the data\\u201d. We note that (a) staying in the middle of the road is, in fact, a learned policy that took the RL agent many (a few million) game interaction frames to acquire; and (b) the agent does more than just driving at the middle of the road: it also occasionally steers to avoid obstacles, and, when hitting an obstacle such as a car, it does not immediately crash but actually succeeds to recover from the hit and keep driving in many cases. This latter behavior (recovering from hits) is also a learned skill that was discovered by the RL policy when training on the first level of the game, and which again took the agent millions of frames (and tens of thousands of crashes) to acquire. We do not consider these behaviors as \\u201cbiases in the data\\u201d but rather behaviors needed to play the game. We note that, as stated in the paper, RL training requires 10M-15M iterations to achieve 5000 points in levels 2 and 4, and substantially more than that in level 3. Using our method the agent is able to apply the abilities it learned during training on level 1 (achieving scores of 5350, 5350 and 2050) after only hundred of thousands of GAN iterations and very few additional game interactions -- a significant improvement to the millions or tens of millions interaction frames needed when using RL training. Moreover, as can be seen in the videos that when using fine-tuning the agent completely fails - it crashes when hitting the sideways, even when the road is wide. Therefore, a method that overcomes this obstacle and others is considered a success. We also note that visually translating a narrow road to a wide one is, in fact, a very good strategy, provided that the position of the car on the road is correctly kept. Dealing with curve roads is more challenging, as the original agent indeed hardly observed any in its training.\", \"regarding_the_deficiencies_in_gan_training\": \"in Breakout, we train the model to translate images where most bricks exist and it successfully accomplishes that for similar images during testing. The problem occurs when the generator is introduced with images where many bricks are missing, images that are different from the ones it has seen during training. More generally, we indeed observed that the CycleGAN is much harder to train on seemingly easy tasks such as the breakout transfer compared to results on natural images (or even for the road-fighter game). We consistently find that such tasks are harder for GANs. The lack of diversity in our game-based datasets makes it easier for the model to overfit comparing to the datasets used in the original papers. Moreover, the fine-grained details are both much more important in the game-transfer setting *and* are easier to notice for a human observer, compared to the natural \\u201chorse to zebra\\u201d transfer where deficiencies in the background are easier to miss and easier to forgive.\", \"does_not_address_the_rl_generalization_issues\": \"Our paper demonstrates how the generalization problem exists in model-free deep RL algorithms due to an extreme reliance on visual details, and proposes a way to decouple the visual reliance to some extent by performing a visual mapping between related tasks. This helps to transfer the obtained non-visual knowledge across tasks. Our approach isn\\u2019t meant to solve the generalization problem, but given an overfitted model it\\u2019s a novel way to still be able to benefit from previous learning when learning a new task. We pointed out the issue of generalization and overfitting to explain the motivation for transfer approaches in this field.\\n\\nWith regards to works such as Bousmalis et al (https://arxiv.org/abs/1612.05424) - we note that the RL setting is different than the static classification one. We did try to improve the GAN generation by adding a loss component comparing the A3C classifier results on the original and translated image, and several other approaches. These additions did not improve the results and in some cases, the results were better without them. In RL tasks, in comparison to the tasks mentioned in the Bousmalis et al, the examples are revealed only as the agent improves and goes further in the task. For this reason, we can only collect data from the early stages of the game where the optimal actions are, for example, to stay in the middle of the road and drive fast (in Road Fighter) - the exact abilities we achieved without it. The obstacles only appear in further stages, which is why this and similar domain adaptation works would not benefit to the zero-shot transfer we aim to gain. We will mention Bousmalis et al in the paper.\"}",
"{\"title\": \"Authors' response to Reviewer1 - part 2\", \"comment\": \"Experimental protocol:\\nWe agree that selecting configurations based on the test set is far from ideal, but we also note that this is the de-facto standard in video game-playing RL works, so we do not believe our work is any worse than others in the literature in this regard. \\nRegarding \\u201coptimal methods (e.g., choice of GAN, number of iterations) vary significantly depending on the task\\u201d - indeed, in current GANs works, configurations (and even pre-processing in many cases) change between tasks. Given the amount of interest in GAN research, we expect this aspect to improve over time. However, we stress that in the Experiments section (Section 4) we test our approach using UNIT-GAN *only*. We test different GANs only in Section 5 as we evaluate them by comparing their results on our tasks.\", \"data_efficiency_vs_actual_training_efficiency\": \"The training process of the actor-critic algorithms mainly depends on CPU where the GAN is trained using GPU therefore, you cannot compare the time on the same hardware. We agree that GANs today still suffers from many issues and are not stable enough, we also mention some of their limitation in Section 4. As noted above, we expect these aspects of GANs to improve. Despite the limitations, this method still manages to succeed in most tasks and clearly shows how transfer can be achieved by a visual mapping.\\n\\nMore generally, we presented a novel transfer approach for model-free RL that decouples the visual transfer from the policy transfer, by using unaligned GANs. While the approach is not perfect (and we explicitly discuss many of the points raised by rev1 in the paper), the method is effective, and, to our knowledge, has not been proposed or demonstrated to work before in the context of RL. Rev1 seems to dislike the fact that we did not address the \\u201ccore\\u201d problems of model-free RL directly, but rather proposed a \\u201cworkaround\\u201d in the form of GAN-based mapping which is external to the RL process. In contrast, we see precisely this separation as the main idea and strength of our proposal: we let the agent re-use its learned policy by helping it map the new environment to its \\u201cprevious experiences\\u201d. Furthermore, rev1 dismisses our reported quantitative results because, in their opinion, they are not reflected in the videos. However, we argue that both videos reflect the success of our approach -- the Breakout video clearly shows how the agent follows the learned policy perfectly and the Road Fighter video demonstrate how the agent applies the learned techniques from level 1 in each of the successive levels. It is possible that future iterations of the idea will improve on our method with more complex machinery, and we look forward to seeing others expand on our research.\"}",
"{\"title\": \"Authors' response to Reviewer2\", \"comment\": [\"Thank you for your time in reviewing our paper.\", \"Although the results between the runs are different, in all cases the fine-tuning results don\\u2019t outperform training from scratch. We would expect a generalized model to adjust quickly to small modifications in the images since most pixels are the same as in the original games and the dynamics didn\\u2019t change. Instead, the model behaves as if the modified games are completely new tasks.\", \"Thank you for the corrections, will change the paragraph accordingly.\"]}",
"{\"title\": \"Authors' response to Reviewer3\", \"comment\": \"Thank you for your time in reviewing our paper.\\n\\n1. Our paper discusses tasks in which the inputs of the model in each step are raw pixels. The A3C model learned the detail and noise in the training images to the extent that small and insignificant modifications create data that is unrecognizable to the model, which prevent the model from following the policy it learned and making optimal decisions.\\n\\n2. Figure 2 presents the results of the RL agents during training. In that matter, our approach is a zero-shot transfer approach in which there is no need for any additional RL training. The number of GAN iterations needed for the maximum scores and # of iterations for each task is presented in Table 1, 2.\\n\\n3. Yes, if the agent is allowed to collect images from later stages of the game it needs to transfer to, results improve a little. However, we did not consider this a realistic scenario, because the untrained agent cannot reach this stages. The idea of human demonstration is an interesting one, and we expect it could work well.\\n\\n4. We will share source code upon publication.\"}",
"{\"title\": \"An interesting method to improve transfer learning between related tasks. The motivation is strong, explanations are intuitive, technical parts are solid, experiments are sufficient.\", \"review\": \"This paper propose an intermediate stage before transfer learning on playing new games that is with slight visual change. The intermediate stage is basically a mapping function to translate the images in the new game to old game with certain correspondence. The paper claims that the adding of intermediate stage can be much more efficient than re-train the model instead.\\nThen the paper compares different baselines without the mapping model. The baselines are either re-trained from scratch or (partially) initialized with trained model. The learning curves show that fine-tuning fails to transfer knowledge from the source domain to target domain. The mapping model is constructed based on unaligned GAN. And the experiments are setup and results are shown.\", \"pros\": [\"The paper makes a very good start from analogizing human being adjusting himself between similar tasks.\", \"The paper demonstrates strong motivation on improving the existing transfer learnings that are either fail or take too much time to train from scratch.\", \"The paper clearly illustrate the learning curve of multiple approaches for transferring knowledge across tasks.\", \"The paper proves detailed analysis why using unaligned GAN to learn the mapping model, and gives\", \"I also like the experiment section. It is well written, especially the discussions section answer all my questions.\"], \"questions\": \"1.\\tWhy fine-tuning from a model that is trained from related task does not help, even decelerate the learning process? Could you explain it more?\\n2.\\tCould you please also include in figure 2 the proposed transfer learning curve with the mapping model G? I\\u2019m curious how much faster it will converge than the Full-FT. And I suppose the retrain from scratch can be extremely slow and will exceed the training epoch scope in the figure.\\n3.\\tIn dataset collection, you use untrained agent to collect source domain image. Will it improve the results if you use well trained agent, or even human agent, instead? \\n4.\\tI hope, if possible, you can share the source code in the near future.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"TRANSFER LEARNING FOR RELATED REINFORCEMENT LEARNING TASKS VIA IMAGE-TO-IMAGE TRANSLATION\", \"review\": \"The paper seeks to generalize the reinforcement learning agents to related tasks. The authors first show the failure of conventional transfer learning techniques, then use GANs to translate the images in the target task to those in the source task. It is an interesting attempt to use the style-transferred images for generalization of RL agents. The paper is well written and easy to follow.\", \"pros\": \"1.\\tIt is a novel attempt to use GANs to generate pictures that help RL agents transfer the policies to other related environments.\\n2.\\tIt is an interesting viewpoint to use the performance of RL agent to evaluate the quality of images generated by GANS.\", \"cons\": \"1.\\tThe pictures generated by GANs can be hardly controlled, and extra noise or unseen objects might be generated, and may fool the RL agent during training.\", \"other_feedback\": \"In Figure 2, it seems the fine-tuning methods also achieve comparable results (Full-FT and Partial-FT), such as Figure 2(b) and Figure 2(c). Besides, the plot is only averaged over 3 runs, whereas the areas of standard deviation still overlap with each other. It may not be convincing enough to claim the failure of fine-tuning methods.\", \"minor_typos\": \"1.\\tIn 2.1, second paragraph: 80x80 -> $80 \\\\times 80$\\n2.\\tIn 2.1, second paragraph: chose -> choose\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Problematic qualitative results, generic unsupervised domain adaptation\", \"review\": \"# Summary\\n\\nThis paper proposes to improve the sample efficiency of transfer learning for Deep RL by mapping a new visual domain (target) onto the training one (source) using GANs. First, a deep RL policy is trained on a source domain (e.g., level 1 of the Atari Road Fighter game). Second, a GAN (e.g. UNIT or CycleGAN) is trained for unsupervised domain adaptation from target images (e.g., level 2 of Road Fighter) to source ones. Third, the policy learned in the source domain is applied directly on the GAN-translated target domain. The experimental evaluation uses two Atari games: i) transfer from Breakout to Breakout with static visual distractors inpainted on the screen, ii) from one Road Fighter level to others. Results suggest that this transfer learning approach requires less images than retraining from scratch in the new domain, including when fine-tuning does not work.\\n\\n\\n# Strengths\", \"controlled_toy_experiments_of_deep_rl_generalization_issues\": \"The experiments on Breakout quantify how badly A3C overfits in this case, as it shows catastrophic performance degradation even with trivial static visual input perturbations (which are not even adversarial attacks). The fine-tuning experiments also quantify well how brittle the initial policy is, motivating further the importance of the problem studied by the paper.\", \"investigating_the_impact_of_different_gans_on_the_end_task\": \"\", \"the_experiments_evaluate_two_different_image_translation_algorithms\": \"one based on UNIT, the other based on CycleGAN. The results suggest that this choice is key and depends on the target domain. This suggests that the adaptation is in fact task dependent, confirming the direction pursued by others in task-specific unsupervised domain adaptation (cf. below).\\n\\n\\n# Weaknesses\", \"discrepancy_between_quantitative_and_qualitative_results\": \"The good quantitative results (accumulated rewards) reported in the experiments are not reflected in the qualitative results. As can be seen from the videos, these results seem more to be representative of a bias in the data. For instance, in the Road Fighter videos, one can clearly see that the geometry of the road (width, curves) and dynamic obstacles are almost completely erased in the image translation process. The main reasons the quantitative results are good seem to be i) in the non-translated case the agent crashes immediately, ii) the \\\"translated\\\" image is a wide straight road identical to level 1 where the policy just keeps the car in the middle (thus crashing as soon as there is a turn or a collision with an obstacle). Even in the breakout case, there are catastrophic translation failures for some of the studied variations although the domain gap is static and small. The image translation results look underwhelming compared to state of the art GANs used for much more complex tasks and environments (e.g., the original CycleGAN paper and follow-up works, or the ICLR'18 progressive growing of GANs paper). This might be due to a hyper-parameter tuning issue, but it is unclear why the adaptation results seem not on par with previous results although the paper is in a visually simpler domain (Atari games).\", \"does_not_address_the_rl_generalization_issues\": \"Although it is the main goal of the paper, the method is fundamentally side-stepping the problem as it does not improve in any way the policy or the Deep RL algorithm (they are left untouched). It is mapping the target environment to the source one, without consideration for the end task besides tuning GAN hyper-parameters. If the initial policy is very brittle (as convincingly shown in section 2), then just mapping to the source domain does not improve the generalization capabilities of the Deep RL algorithm, or even improves transfer learning: it just enables the policy to be used in other contexts that can be reduced to the training one (which is independent of the learning algorithm, RL or otherwise). So it is unclear whether the main contribution is the one claimed. The contribution seems instead an experimental observation that it might be easier to reduce related domains to the training one instead of retraining a new (specialised and brittle) policy. Existing works have actually gone further, learning jointly the image translation and task network, including for very challenging problems, e.g. in unsupervised sim-to-real visual domain adaptation (e.g., Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks from Bousmalis et al at CVPR'17, which is not cited here).\", \"experimental_protocol\": \"The experimental conclusions are not clear and lack generality, because the optimal methods (e.g., choice of GAN, number of iterations) vary significantly depending on the task (cf. Table 3 for instance). Furthermore, the best configurations seem selected on the test set for every experiment.\", \"data_efficiency_vs_actual_training_efficiency\": \"The main claim is that it is better to do image translation instead of fine-tuning or full re-training. The basis of that argument is the experimentally observed need for less frames to do the image translation (Table 2). However, it is not clear that training GANs for unsupervised image translation is actually any easier / faster. What about training instability, mode collapse, hyper-parameter tuning, and actual training time comparisons on the same hardware?\\n\\n\\n\\n# First Recommendation\\n\\nUsing image translation via GANs for unsupervised domain adaptation is a popular idea, used in the context of RL for Atari games here. Although the experiments show that mapping a target visual domain to a source one can enable reusing a deep RL policy as is, the qualitative results suggest this is in fact due to a bias in the data used here and the experimental protocol does not yield general insights. Furthermore, this approach is not specific to RL and its observed generalization issues. It does not improve the learning of the policy or improve its transferability, thus having only limited new insights compared to existing approaches that jointly learn image translation and target task-specific networks in much more challenging conditions.\\n\\nI believe this submission is at the start of an interesting direction, and requires further work on more challenging tasks, bigger domain gaps, and towards more joint training or actual policy transfer to go beyond this first set of encouraging but preliminary results.\\n\\n\\n# Post-rebuttal Recommendation\\n\\nThanks to the authors for their detailed reply. The clarifications around overfitting, UNIT-GAN in Section 4, and the paper claims are helpful. I also agree that the quantitative experiments are serious. I have bumped my score by +1 as a result.\\n\\nNonetheless, the results still seem preliminary and limited in scope for the aforementioned reasons. The discussion in the comments about the learned policies and transfer are ad-hoc. A lot of the shortcomings mentioned in the review are outright dismissed (e.g., \\\"de facto standard in RL\\\"), downplayed (esp. generalization, which is puzzling for a transfer learning paper), or left for future work.\\n\\nAs there is no strong technical contribution beyond the experimental observations in the current submission, I suggest the authors try to address the GAN shortcomings both mentioned in reviews and their reply, instead of just observing / reporting them. As this paper's main focus is to use image translation in the proposed RL setting (with standard GAN and RL methods), I do not think it is just someone else's problem to improve the image translation part. Proposing a technical contribution there would make the paper much stronger and appealing to a broader ICLR audience. This might also require adding a third game to ensure more generalizable experimental insights.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Bkg93jC5YX | BLISS in Non-Isometric Embedding Spaces | [
"Barun Patra",
"Joel Ruben Antony Moniz",
"Sarthak Garg",
"Matthew R Gormley",
"Graham Neubig"
] | Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and empirically show that this assumption weakens as the languages in question become increasingly etymologically distant. We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) --- a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique. Our proposed method improves over strong baselines for 11 of 14 language pairs on the MUSE dataset, particularly for languages whose embedding spaces do not appear to be isometric. In addition, we show that adding supervision stabilizes the learning procedure and is effective even with minimal supervision. | [
"bilingual lexicon induction",
"semi-supervised methods",
"embeddings"
] | https://openreview.net/pdf?id=Bkg93jC5YX | https://openreview.net/forum?id=Bkg93jC5YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HyevvTAW14",
"ByxN1pkJkV",
"HJxnJu56A7",
"HJxo-KHKRQ",
"rJevx4SYR7",
"S1lrtXHt0X",
"HklcW7SKAm",
"SkeL2mEshQ",
"SkxWL2xK2m",
"SJeJN7oHhm"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543789918531,
1543597276274,
1543509988206,
1543227651407,
1543226351251,
1543226237217,
1543226114261,
1541256110340,
1541110856658,
1540891431304
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper738/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper738/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper738/Authors"
],
[
"ICLR.cc/2019/Conference/Paper738/Authors"
],
[
"ICLR.cc/2019/Conference/Paper738/Authors"
],
[
"ICLR.cc/2019/Conference/Paper738/Authors"
],
[
"ICLR.cc/2019/Conference/Paper738/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper738/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper738/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper is very close to the decision boundary and the reviewers were split about whether it should be accepted or not. The authors updated the paper with additional experiments as requested by the reviewers.\\nThe area chair acknowledges that there is some novelty that leads to (moderate) empirical gains but does not see these as sufficient to push the paper over the very competitive acceptance threshold.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}",
"{\"title\": \"Response to question\", \"comment\": \"Thank you for your question.\\n\\nAlthough optimizing over the CSLS loss (BLISS(R)) improves over optimizing over the cosine loss (BLISS(M)), we show that both these instantiations of our proposed semi-supervised framework outperform their supervised and unsupervised counterparts. The SotA results of BLISS(R) thus come from the proposed semi-supervised framework in combination with RCSLS's (Joulin et al. (2018)) supervised objective, as opposed to purely from the CSLS objective. \\n\\nThis is especially visible from Table 2: BLISS(R) substantially outperforms RCSLS on 6 of 9 language pairs, especially when the GH distance between the languages is high (2.4% for en-uk, 3.4% for en-sv, 0.9% for en-el, 0.8% for en-hi, 2.4% for en-ko). Tables 3 and 4 underscore this point, wherein the model performs at least at par with (and often better than) RCSLS on European languages, and performs significantly better on en-zh (2.8%) and zh-en (0.9%); while on the VecMap dataset, an improvement of 0.8% is observed for both languages.\\n\\nSimilarly, BLISS(M) outperforms its supervised counterpart MUSE(S). This can be seen from Tables 3 and 4, where BLISS(M) outperforms MUSE(S) in 10 of 12 language pairs.\\n\\nIn addition, as highlighted by Tables 5 and 7, the semi-supervised framework is able to function even when the model has low levels of supervision, a case where RCSLS does not perform well at all, achieving very low accuracies for 50 and 500 data-points.\\n\\nTo summarize, the semi-supervised framework improves over both its supervised and unsupervised counterparts, and does extremely well when little training data is available.\"}",
"{\"comment\": \"I saw that in the updated version, the performance of BLISS (R) matches the sota results. However, I was wondering if this gain mainly comes from the optimization towards the CSLS metric in your supervised loss, which is the main contribution of Joulin et al.'s (2018) method.\", \"title\": \"Does the performance gain of BLISS(R) mainly come from the optimization towards the CSLS metric?\"}",
"{\"title\": \"Common Response to all reviewers\", \"comment\": \"We thank the reviewers for their detailed feedback.\\n\\nThe general feedback received from all reviewers suggested adding baselines that we had originally missed and comparing our framework against them, as well as adding clarifying information about details we missed in our original submission. Based on the feedback received, we have updated our submission. Our changes can be summarized as follows: \\n\\n1. We added a baseline (S\\u00f8gaard et al.) for measuring the degree of isometry between two embedding spaces, and compare its correlation with accuracies (as done in S\\u00f8gaard et al.) against our proposed GH distance-based metric.\\n2. We included SoTA methods (Joulin et al. (2018), Jawanpuria et al. (2018), Artetxe et al. (2018)), as pointed out by the reviewers, and show that they fit nicely into our proposed semi-supervised framework. We also update our tables to compare against these supervised methods (Table 3 and Table 4).\\n3. We also added the missing baseline accuracies for etymologically distant language pairs (Table 2).\\n4. In addition, we restructured and added clarifying information for improved clarity based on reviewer feedback.\\n\\nWe would again like to thank the reviewers for their positive and helpful insights.\\n\\nReferences\\nS\\u00f8gaard et al. (2018): On the Limitations of Unsupervised Bilingual Dictionary Induction.\\nJoulin et al. (2018): Loss in translation: Learning bilingual word mapping with a retrieval criterion.\\nJawanpuria et al. (2018): Learning multilingual word embeddings in latent metric space: a geometric approach.\\nArtetxe et al. (2018): A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your feedback and insightful comments. Below we try and address your comments individually:\\n\\n>> Beyond the isometry metric, the main innovation as far as I can see seems to be the hubness filtering, which is incremental and not ablated, so it is not clear how much improvement it yields. The weak orthogonality constraint has already been used in [2].\\n\\nIn addition to the Gromov-Hausdorff metric, our joint framework is a novel contribution. We show that it improves over both its corresponding supervised and unsupervised counterparts for two instantiations of our framework (BLISS(M) based on MUSE(S), and BLISS(R) based on RCSLS, incorporating reviewer feedback), which in turn illustrates its efficacy, with BLISS(R) obtaining state-of-the-art results (to the best of our knowledge).\\n\\n>> It is not clear to me what the proposed metric adds beyond the eigenvector similarity metric proposed in [1]. The authors should compare to this metric at least.\\n\\nThank you for pointing this reference out; we have updated our paper to include the metric from [1] (Table 2). From the Table, we observe that our method correlates better than the eigenvector similarity metric.\\n\\n>> The authors might want to add the results of [3] for an up-to-date comparison.\\n\\nWe have incorporated this baseline in our latest draft, along with an accompanying discussion (Section 4.2.4).\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your feedback and insightful comments. Below we try and address your queries individually:\\n\\n1) My understanding is that the proposed method (the one named BLISS in the experiments) only makes use of the proposed semi-supervised framework (Section 3.2) and is not followed by the iterative procrustes refinement (Section 3.3), but this is not clear at all from the paper. Could you clarify this?\\n\\nWe always perform iterative Procrustes refinement. However, using iterative Procrustes refinement does not always yield the best performing model (where we measure performance using the unsupervised CSLS metric, and report the corresponding accuracy). We find that for very low data points, the supervision is noisy, and consequently, iterative Procrustes helps improve performance. However, when more data is available, iterative Procrustes only helps for languages that have low GH distance (see Table 2). Finally, under an improved supervised loss function and with sufficient data, BLISS (R) does not require iterative Procrustes refinement (Table 3 and 4). We have added this description in Appendix Section 8.\\n\\n2) It is well known that the retrieval method can have a big impact in bilingual dictionary induction due to the hubness problem. However, the paper does not detail which retrieval method is used in the experiments. I assume that MUSE uses CSLS and Vecmap uses nearest neighbor over cosine. Is this correct? What retrieval method does BLISS use?\\n\\n\\nBLISS uses CSLS; sorry for the confusion. We have clarified this in Section 4.2.2.\\n\\n\\n3) I assume that when you talk about the \\\"CSLS metric\\\" in page 7 and 13 you refer to the unsupervised validation criterion of Lample et al. (Section 3.5 in their paper), and not to CSLS itself (Section 2.3 in their paper). 
In either case, this needs some clarification.\\n\\n\\nThank you for pointing this ambiguity out, we meant the unsupervised validation criterion while referring to the CSLS metric. We have clarified this in the latest draft.\\n\\n4) Unlike the \\\"train\\\" dictionaries, the \\\"full\\\" dictionaries from MUSE as provided at github also include the test set. Do you preprocess them to exclude the test set? If so, this should be clearly stated in the paper. If not, this would invalidate all these experiments.\\n\\n\\nBy \\u201call\\u201d, we meant using all the data available in the training split (0-5000). However, since a word can translate into multiple words in the target language, 0-5000 effectively contains more than 5000 pairs. By \\u201call\\u201d, we refer to using all the data points in the train split, whereas 5000 refers to using just the first 5000 pairs. In order to reduce ambiguity, we have removed the 5000 row from Table 3 in the latest draft. The performance with 5000 data points can be found in Table 5 (Table 6 in the original draft), where we show this information for a few language pairs due to space constraints, and in Appendix Section 8 for all language pairs.\\n\\n5) The authors use different language pairs in their different result tables, which I find very confusing. For instance, none of the language pairs in table 5 (except for en-ru), are included in the main results (table 3), so we do not know how the different baselines and variants perform in them. Is there any reason for that?\\n6) Could you include all MUSE variants in Table 4?\\n\\nIn the original draft, we had done this to accommodate for space constraints. 
However, based on reviewers\\u2019 feedback, we have added baseline numbers for both distant language pairs as well as on the VecMap dataset in the current version (Tables 2 and 4 respectively).\\n\\n7) While you compare your method to different versions of Vecmap (Artetxe et al., ACL 2017 & AAAI 2018), the last one (Artetxe et al., ACL 2018) (http://aclweb.org/anthology/P18-1073) is missing. That paper reports 48.1% and 48.2% accuracy for en-it and en-de in the unsupervised case, which is substantially better than your results for en-it (45.9%) and at par for en-de (48.3%). This goes against the main motivation of the paper (i.e. unsupervised distribution information and supervision from dictionaries can be combined for best results), as a completely unsupervised method seems to perform better than (or at least at par with) the proposed semi-supervised method. I think that the paper should include some discussion on this. In particular, I would like to know whether you have any argument to believe that both works are complementary.\\n\\nArtetxe et al. (2018) translate in a common embedding space, while our method, similar to RCSLS, translates in the target embedding space. It was shown in Kementchedjhieva et al. (2018) that translating in a common embedding space leads to performance gains, which we believe to be the case here. We have added the numbers of Artetxe et al. (2018) as well as GeoMM (a supervised method which translates in the common embedding space) in Table 4, and also included a discussion stating the same (Section 4.2.4).\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your feedback and comments. Below, we try and address your comments individually:\\n\\n>> From a modeling perspective, the utility of the weak orthogonality constraint in the objective function is unclear. Does it improve generalization performance? Is it for preserving monolingual performance? The cited works (in the above summary) show that removing the strong/weak orthogonality constraint improves the BLI accuracy while preserving the monolingual performance.\\n\\nOur weak orthogonality constraint stabilizes training and reduces reliance on hyperparameters. In particular, as described in Section 7.3 and Table 6 in the appendix, we observe that the performance of MUSE is very sensitive to the choice of the hyperparameter Beta used in their orthogonal projection step. On the other hand, the data-driven weak orthogonality constraint is parameter independent and is more robust than its data-independent counterpart. Hence it is invariant of the language pair, thereby generalizing better.\\n\\nCompared to this framework, Jawanpuria et al. (2018) learn an orthogonal mapping into a common embedding space. This framework of learning an orthogonal mapping was also adopted by S\\u00f8gaard et al. (2018), who do iterative refinement over a small seed dictionary containing identical seed words (similar in spirit to our MUSE(R) baseline).\\n\\nIn the initial work of Joulin et al. (2018), they apply a data-independent orthogonality constraint, either by constraining the weight matrices to have eigenvalues <= 1 (spectral norming) or by constraining the matrices to have unit Frobenius norm. In their final published work, they remove these constraints altogether, and show improvements in performance. However, this final work was concurrent, being published less than a month prior to the ICLR deadline.\\n\\n>> The baselines chosen for experiments are not state-of-the-art. In addition, Artetxe et al. 
(2017, 2018) results are with the NN/ISF retrieval procedure. These baselines should be rerun with the CSLS retrieval procedure (code is available on the authors' website), which is now a standard for the BLI task. Refer to Artetxe et al (2018b), Joulin et al (2018), Jawanpuria et al (2018), Gravel et al (2018) for state-of-the-art (semi-supervised/unsupervised) results on MUSE and Vecmap datasets.\\n\\nWe apologize for missing these baselines. Our submission posited a semi-supervised framework, and we had compared against the corresponding supervised and unsupervised counterparts. We have updated our paper to include more sophisticated supervised baselines (namely, Joulin et al. (2018) and Jawanpuria et al. (2018)). For fair comparison, we include an instantiation of our semi-supervised framework with the supervised CSLS loss of Joulin et al. (2018), and show that this still outperforms its supervised and unsupervised counterparts.\\n\\n>> Experiments with varying data (Table 6) do not provide a clear picture without discussing unsupervised/semi-supervised baselines.\\n\\nWe apologize for being brief in the description of Table 6 (Table 5 in the updated version). Note that BLISS consistently outperforms its unsupervised counterpart MUSE(U), even with minimal amounts of data, as well as MUSE(R), which is a strong semi-supervised method utilizing dictionary expansion via iterative Procrustes refinement. Based on your feedback, we also incorporate a comparison with RCSLS (which uses CSLS, and is a semi-supervised method). We added clarifying details explaining this in Section 4.2.1, as well as a detailed comparison of the performances of the different unsupervised and semi-supervised methods across language pairs in Section 8 in the Appendix.\\n\\n>> The logic behind experiments on GH distance (Table 2) is unclear. Why should a high correlation with *a baseline* suggest that GH distance correlates well with the degree of isometry of the two languages? 
Does GH distance have a high correlation with *any* baseline for BLI?\\n\\nIn order to compute the orthogonality of spaces empirically, we wanted to avoid relying on matched lexicons, and consequently chose an unsupervised method (MUSE(U)). In addition, incorporating comments from Reviewer 3, we also show that our method correlates with accuracies better than S\\u00f8gaard\\u2019s method across several baseline techniques for BLI.\\n\\nReferences\\nJawanpuria et al. (2018): Learning multilingual word embeddings in latent metric space: a geometric approach.\\nS\\u00f8gaard et al. (2018): On the Limitations of Unsupervised Bilingual Dictionary Induction.\\nJoulin et al. (2018): Loss in translation: Learning bilingual word mapping with a retrieval criterion.\"}",
"{\"title\": \"A semi-supervised algorithm for the bilingual lexicon induction problem\", \"review\": \"Summary:\\n\\nThe paper proposes a semi-supervised algorithm for the bilingual lexicon induction (BLI) problem. Prior works on the BLI problem usually impose an orthogonality constraint on the linear transformation in order to obtain a \\\"reversible\\\" mapping and to preserve the monolingual performance. However, from both modeling and generalization perspectives, recent works do not impose this constraint while learning the mapping (Doval et al 2018, Jawanpuria et al 2018, Joulin et al 2018, Sogaard et al 2018, among others). The present work argues for the removal of the orthogonality constraint when language spaces are non-isometric, and proposes to employ the Gromov-Hausdorff (GH) distance to validate this condition. Overall, the paper employs an objective function which is the sum of the (unsupervised) adversarial distribution matching objective (Lample et al 2018b), the (supervised) BLI loss function (typically the square loss), and a consistency loss (Hoshen and Wolf 2018). Empirically, the proposed method shows better results than the unsupervised method of Lample et al (2018b) and the Procrustes solution.\", \"the_pros\": [\"Existing works have shown that some BLI techniques perform better than others on *some* pairs of languages. Hence, it seems that there may not be a \\\"one size fits all\\\" BLI technique. The proposed usage of the GH distance is a step toward quantitatively categorizing pairs of languages. Based on a carefully crafted metric, practical systems may choose to use one BLI algorithm over another for a given pair of languages.\"], \"the_cons\": \"- From a modeling perspective, the utility of the weak orthogonality constraint in the objective function is unclear. Does it improve generalization performance? Is it for preserving monolingual performance? 
The cited works (in the above summary) show that removing the strong/weak orthogonality constraint improves the BLI accuracy while preserving the monolingual performance.\\n- The baselines chosen for experiments are not state-of-the-art. In addition, Artetxe et al. (2017, 2018) results are with the NN/ISF retrieval procedure. These baselines should be rerun with the CSLS retrieval procedure (code is available on the authors' website), which is now a standard for the BLI task. Refer to Artetxe et al (2018b), Joulin et al (2018), Jawanpuria et al (2018), Gravel et al (2018) for state-of-the-art (semi-supervised/unsupervised) results on MUSE and Vecmap datasets.\\n- Experiments with varying data (Table 6) do not provide a clear picture without discussing unsupervised/semi-supervised baselines.\\n- The logic behind experiments on GH distance (Table 2) is unclear. Why should a high correlation with *a baseline* suggest that GH distance correlates well with the degree of isometry of the two languages? Does GH distance have a high correlation with *any* baseline for BLI?\\n\\n\\nArtetxe et al (2018b): A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings.\\nJoulin et al (2018): Loss in translation: Learning bilingual word mapping with a retrieval criterion.\\nJawanpuria et al (2018): Learning multilingual word embeddings in latent metric space: a geometric approach.\\nHoshen and Wolf (2018): Non-adversarial unsupervised word translation.\\nDoval et al (2018): Improving cross-lingual word embeddings by meeting in the middle.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Solid work with some obscure parts that should be clarified\", \"review\": \"This paper presents a new semi-supervised method to learn cross-lingual word embeddings mappings combining unsupervised distribution matching, alignment over a training dictionary, and a weak orthogonality constraint. The paper also shows that the underlying isometry assumption in orthogonal mappings weakens as the languages involved are more distant, proposes a new method to quantify the strength of the said assumption, and argues that the proposed semi-supervised mapping method is particularly suited for these more challenging cases.\\n\\nI think that this is a solid work that explores an interesting research direction within cross-lingual embedding mappings. While the basic ingredients of the proposed method are not new, their combination is certainly original. In that regard, I think that the paper is rather incremental, but still has enough substance to make an interesting contribution. However, I think that some parts of the paper are too obscure, and I am not fully convinced by the experiments. I would appreciate if the authors could address my concerns below, and I would be happy to modify my score accordingly:\\n\\n1) My understanding is that the proposed method (the one named BLISS in the experiments) only makes use of the proposed semi-supervised framework (Section 3.2) and is not followed by the iterative procrustes refinement (Section 3.3), but this is not clear at all from the paper. Could you clarify this?\\n\\n2) It is well known that the retrieval method can have a big impact in bilingual dictionary induction due to the hubness problem. However, the paper does not detail which retrieval method is used in the experiments. I assume that MUSE uses CSLS and Vecmap uses nearest neighbor over cosine. Is this correct? 
What retrieval method does BLISS use?\\n\\n3) I assume that when you talk about the \\\"CSLS metric\\\" in page 7 and 13 you refer to the unsupervised validation criterion of Lample et al. (Section 3.5 in their paper), and not to CSLS itself (Section 2.3 in their paper). In either case, this needs some clarification.\\n\\n4) Unlike the \\\"train\\\" dictionaries, the \\\"full\\\" dictionaries from MUSE as provided at github also include the test set. Do you preprocess them to exclude the test set? If so, this should be clearly stated in the paper. If not, this would invalidate all these experiments.\\n\\n5) The authors use different language pairs in their different result tables, which I find very confusing. For instance, none of the language pairs in table 5 (except for en-ru), are included in the main results (table 3), so we do not know how the different baselines and variants perform in them. Is there any reason for that?\\n\\n6) Could you include all MUSE variants in Table 4?\\n\\n7) While you compare your method to different versions of Vecmap (Artetxe et al., ACL 2017 & AAAI 2018), the last one (Artetxe et al., ACL 2018) (http://aclweb.org/anthology/P18-1073) is missing. That paper reports 48.1% and 48.2% accuracy for en-it and en-de in the unsupervised case, which is substantially better than your results for en-it (45.9%) and at par for en-de (48.3%). This goes against the main motivation of the paper (i.e. unsupervised distribution information and supervision from dictionaries can be combined for best results), as a completely unsupervised method seems to perform better than (or at least at par with) the proposed semi-supervised method. I think that the paper should include some discussion on this. 
In particular, I would like to know whether you have any argument to believe that both works are complementary.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This paper presents a new semi-supervised method for bilingual dictionary induction and proposes a new metric to measure isometry between embedding spaces.\", \"pros\": [\"The paper proposes to use a new metric, the Gromov-Hausdorff distance to measure how isometric two word embedding spaces are.\", \"The toy example is useful for motivating the use case of the method.\", \"The approach achieves convincing results on the dataset.\"], \"cons\": \"- Beyond the isometry metric, the main innovation as far as I can see seems to be the hubness filtering, which is incremental and not ablated, so it is not clear how much improvement it yields. The weak orthogonality constraint has already been used in [2].\\n- It is not clear to me what the proposed metric adds beyond the eigenvector similarity metric proposed in [1]. The authors should compare to this metric at least.\\n- The authors might want to add the results of [3] for an up-to-date comparison.\\n\\n[1] S\\u00f8gaard, A., Ruder, S., & Vuli\\u0107, I. (2018). On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proceedings of ACL 2018.\\n[2] Zhang, M., Liu, Y., Luan, H., & Sun, M. (2017). Adversarial Training for Unsupervised Bilingual Lexicon Induction. In Proceedings of ACL.\\n[3] Artetxe, M., Labaka, G., & Agirre, E. (2018). A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of ACL 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1fcnoR9K7 | Learning with Random Learning Rates. | [
"Léonard Blier",
"Pierre Wolinski",
"Yann Ollivier"
] | Hyperparameter tuning is a bothersome step in the training of deep learning models. One of the most sensitive hyperparameters is the learning rate of the gradient descent. We present the All Learning Rates At Once (Alrao) optimization method for neural networks: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude. This comes at practically no computational cost. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various architectures and problems. Alrao could save time when testing deep learning models: a range of models could be quickly assessed with Alrao, and the most promising models could then be trained more extensively. This text comes with a PyTorch implementation of the method, which can be plugged into an existing PyTorch model. | [
"step size",
"stochastic gradient descent",
"hyperparameter tuning"
] | https://openreview.net/pdf?id=S1fcnoR9K7 | https://openreview.net/forum?id=S1fcnoR9K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Byl2Zatlx4",
"SJe7y-kmyE",
"r1lrajxM07",
"SJgycsgMAQ",
"rklxDjxz07",
"HyleC5eMR7",
"Hyly32Fqhm",
"HyeBGxI52Q",
"rkg4HYxL27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544752387892,
1543856347491,
1542749117398,
1542749062585,
1542749016502,
1542748871802,
1541213350845,
1541197837331,
1540913468102
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper737/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper737/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper737/Authors"
],
[
"ICLR.cc/2019/Conference/Paper737/Authors"
],
[
"ICLR.cc/2019/Conference/Paper737/Authors"
],
[
"ICLR.cc/2019/Conference/Paper737/Authors"
],
[
"ICLR.cc/2019/Conference/Paper737/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper737/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper737/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a new optimization approach for neural nets where, instead\\nof a fixed learning rate (often hard to tune), there is one learning rate per\\nunit, randomly sampled from a distribution. Reviewers think the idea is\\nnovel, original and simple. Overall, reviewers found the experiments\\nnot convincing enough in practice. I found the paper really borderline,\\nand decided to side with the reviewers in rejecting the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}",
"{\"title\": \"update to review score\", \"comment\": \"The authors have put in an effort to answer the earlier questions, and acknowledged the aspects which cannot be directly addressed at this point. The draft/appendix has been updated to include new material and technical results. While the work and ideas are still in somewhat early stages, it will be good to see the ideas (of using random learning rates) get discussed more widely -- so I am updating my score to be above the threshold.\"}",
"{\"title\": \"Specific answers to your comments\", \"comment\": \"Thank you for your comments and for insisting on more substantial\\nexperimental validation. This comment spurred us to perform a series of\\nexperiments on ImageNet with several architectures, all confirming our\\nprevious observations (see general comment above).\\n\\nWe hope these additional experiments may mitigate your opinion that \\\"related\\nto the experimental evaluation of the method. I find the experimental\\nevidence for the effectiveness of Alrao insufficient.\\\"\\n\\nIndeed the text offers no experimental use case related to architecture\\nsearch. As a result, following your comment we have decided to\\nde-emphasize architecture search as a motivation in the text. Instead, we\\nprovide more experiments to insist on the robustness of the method and\\nits ability to provide results in a single run.\", \"about_alrao_versus_setting_per_weight_random_learning_rates\": \"our design\\nis based on a theoretical argument. Indeed, with per-weight learning\\nrates, *every* unit would have some incoming weights with large learning\\nrate. So, every unit would be at risk of divergence or quick saturation\\n(eg for sigmoid activations). With per-unit learning rates, units with\\ntoo-large learning rates may saturate and \\\"die\\\", but units with suitable\\nlearning rates will be able to propagate information to subsequent\\nlayers. Hopefully a sub-network made of units with suitable learning\\nrates will emerge.\", \"about_additional_time_in_practice\": \"the per-iteration or per-epoch\\ncomputational overhead of Alrao (a few seconds) is negligible compared to\\nthe computation time of each epoch (approximately an hour) in our Imagenet experiments.\\nThe number of epochs for convergence may be slightly worse than SGD\\nbut varies depending on the setup, see Figs 2 and 5.\"}",
"{\"title\": \"Specific answers to your comments\", \"comment\": \"Thank you for your compliments on the importance, originality and clarity\\nof the paper, and for pointing out ways to improve the paper (eg\\ntheoretical analysis).\", \"about_possible_gains_from_the_method\": \"It is true that \\\"It is becoming more\\nand more easy to tune the learning rate of deep learning models\\\".\\nHowever, the early stopping method still assumes we can launch several\", \"copies_of_the_model\": \"this is not the case in an online setting, for\\ninstance. Future applications may require on-the-spot fine-tuning or\\ntransfer, for instance to adapt to a specific end-user's characteristics,\\nwithout off-line retraining.\\n\\nThe added ImageNet experiments (see general comment above) provide more\\narguments for the benefits of the method, notably its robustness compared\\nto Adam.\\n\\nThank you for pointing out Orabona-Tommasi 2017. This is a very\\nintriguing paper, but it seems to us that doing away with gradients\\naltogether is a bold step which requires extensive testing,\\nwhile Alrao stays closer to the well-tested SGD principle.\\nOrabona-Tommasi's method has still to be tested for standard\\narchitectures (VGG, GoogleNet, etc.). We have added this reference in the\\nrelated work.\\n\\nAbout Alrao needing \\\"a lot of work\\\": We have provided a generic Alrao\\nimplementation, so the additional work to incorporate Alrao into an\\nexisting model is just a call to that library (for standard\\narchitectures).\", \"about_the_cost_of_the_last_layer_for_nlp\": \"this is true. However, this\\nproblem is already present in many NLP applications even without Alrao,\\nand over the years a number of strategies have been developed to reduce\\nsubstantially the last layer's parameter burden for large vocabulary\\nsizes (see several references in Jozefowicz 2016).\", \"about_a_mathematical_analysis_on_a_simple_problem\": \"see the new Theorem\\n1 in Appendix B. 
Namely, for logistic regression (or any convex\\nfunction with fixed pre-classifier), with eta_min small enough, then\\nAlrao eventually reaches small loss. Going beyond that simplified\\nsituation would already require a general analysis of the precise\\nSGD dynamics of neural networks with a single learning rate.\\n\\nAbout the impact of the learning rate range, and using [10^-5:10] for\\nCIFAR versus [10^-3:10^2] for PennTreeBank: Fig 3 clearly shows that\\nusing [10^-3;10^2] for CIFAR provides very similar results to [10^-5:10].\\nSo we could have used the same interval. Our general prescription is to\\nuse for Alrao the range of learning rates that would have classically\\nbeen tested via grid search. Generally, recurrent networks tend to\", \"require_larger_learning_rates_than_convolutional_networks\": \"we do not\\nforbid the use of such expert knowledge in Alrao, and this is why we\\ninitially tested [10^-3:10^2] for LSTMs. These intervals are just the\\nfirst that came to mind, and we did not tweak them to get better results.\\n(Partly, the difference is just because these experiments were run by two\\ndifferent coauthors).\", \"why_use_the_same_learning_rate_for_all_parts_of_an_lstm_unit\": \"The\\nintuition is similar to fixing the learning rate per unit rather than per\\nweight. We would rather have a mixture of fully-functioning units and\\nnon-functioning units, than the presence of too-large weights inside\\nevery unit, which may screw up all units.\", \"other_minor_comments\": \"thank you for pointing out these mistakes, they have\\nbeen fixed.\"}",
"{\"title\": \"Specific answers to your comments\", \"comment\": \"Thank you for your balanced review with pros and cons. Let us answer each\\ncon and detailed comment in turn.\\n\\n#Cons:\", \"no_theoretical_argument_for_convergence\": \"The newly included theorem in Appendix B\\nstates that for simple cases such as logistic regression, with eta_min\\nsmall enough, Alrao eventually reaches the optimal loss. (Going\\nbeyond that simplified situation would already require a general analysis\\nof the precise dynamics of neural networks with a single SGD learning rate,\\nwhich is an open problem.)\", \"beating_adam_only_once\": \"We have introduced experiments on ImageNet with\\nthree models. They show that Adam's behavior is less consistent than\\nAlrao (including a model where Adam does not even start to learn). Thus,\\nthe Alrao/Adam comparison on CIFAR is not an isolated occurrence.\", \"alrao_working_only_with_sgd\": \"We suspect a bad interaction between Alrao's\\nlearning rate mechanism and the adaptive learning rates used in\\noptimizers fancier than SGD. We acknowledge this limitation; we have\\nedited the text to introduce Alrao on SGD by default, not as a generic\\nidea (though the tests with Alrao+Adam have been kept).\\n\\n#Detailed comments:\\n\\n(1) Efficiency compared to Adam: Please see the general comment above and\\nthe new ImageNet experiments.\\n\\n(2) Influence of neurons with \\\"bad\\\" learning rates, too small or too\\nlarge. 1/ Too small learning rates: they will result in neurons not\\nchanging much from their initialization. Such neurons effectively behave\\nlike fixed random features. Our results in Appendix H show that including random\\nfeatures does not hurt and indeed can improve training. 2/ Too large\", \"learning_rates\": \"indeed we expect Alrao to work only if some mechanism\\nprevents bad neurons from having an unbounded influence on subsequent\\nlayers. 
For instance, with sigmoid activation functions, the worst that\\ncan happen is that a neuron gets stuck at activities 0 or 1 that are\\ndecorrelated from the desired output signal. In that case, the subsequent\\nlayers have to learn to just ignore these 0/1 activities, but the\\nnetwork will not diverge. With ReLU and BatchNorm, activations keep a\\nbounded variance thanks to the BatchNorm, and the worst that can happen\\nis that activities become N(0,1) variables decorrelated from the output.\\nThe effect would be comparable to noisy neurons, which can be ignored by\\nsubsequent layers.\\n\\n(3) Details of architectures, experimental setup, data preprocessing: now\\nadded in Appendix D.\\n\\n(4) Why a different mechanism on the last layer. On the last layer, each\\nneuron corresponds directly to a class/label. If assigning one rate per class,\\nan unlucky class could get a very small learning rate and never learn anything\\nat all, or a large learning rate and immediately diverge. (For inner layers,\\nthis is not a problem as a layer contains many neurons and the subsequent layers\\ncan learn to listen to those neurons which have a good learning rate.)\\nTo give another example, imagine that we are working on a regression problem,\", \"with_a_unique_output_neuron\": \"then, choosing a fixed learning rate at random for\\nthis unique output neuron is clearly a bad idea. Our current method keeps\\nseveral copies of the output, with several learning rates.\\n\\n(5) (a) and (b): fixed.\\n\\n(5) (c) Fig 2a plots a single run, Fig 2b plots an average of 3 runs. The\\nresults are very consistent between runs, as shown by the standard deviations in Table 1.\\nThe curves for VGG19 do look similar; we can add them in an appendix if you deem it necessary.\\n\\n(5) (d) Color code in Fig 3: After deliberation we have decided to keep a\\nper-figure color code, which allows for better comparison of relative\\nperformance inside each figure. 
(The scales for train error and for test\\nerror can be quite different.)\"}",
"{\"title\": \"Main changes in the revision: new experiments, and a theoretical analysis in a simple case\", \"comment\": \"We would like to thank the reviewers for their insightful questions and remarks, which spurred us towards\\nincluding more substantial experimental support.\\n\\nAdditionally, a major concern of the reviewers seems to be that Alrao\\nwould not be that useful in practice. Let us point out that in all our\\nexperiments with Alrao-SGD, not a single run failed to learn. For us,\\nAlrao's tuning-free robustness and its ability to provide results in a\\nsingle run are major arguments in its favor.\", \"the_new_version_uploaded_today_adds_to_the_text\": \"- a series of experiments on ImageNet\\n- a theoretical analysis of Alrao in a simple case (convex function, fixed preclassifier) \\n- answers to the various remarks (detailed below)\\n\\nThe new ImageNet experiments involve three standard models (AlexNet,\\nResNet50, DenseNet121). They largely confirm the robustness of Alrao,\\nwith results comparable to best-adjusted SGD (and better in one\\ninstance). [The table contains \\\"?\\\" and best bounds so far where some\\nexperiments are still running. They will be completed in a few days.]\\n\\nOn the same models, default Adam has mixed results: on one model it\\ndoes not learn anything at all, and on the other two it first reaches\\nvery good performance then diverges shortly thereafter.\\n\\nWe respond below to the more specific comments of each review.\"}"
"{\"title\": \"Interesting idea, does not seem to work consistently, limited theoretical explanation\", \"review\": \"This work proposes an optimization method called All Learning Rate\\nAt Once (Alrao) for hyper-parameter tuning in neural networks.\\nInstead of using a fixed learning rate, Alrao assigns the learning\\nrate for each neuron by randomly sampling from a log-uniform\\ndistribution while training neural networks. The neurons with\\nproper learning rates will be well trained, which makes the whole\\nnetwork eventually converge. The proposed method achieves\\nperformance close to SGD with a well-tuned learning rate in the\\nimage classification and text prediction experiments.\\n\\n\\n#Pros:\\n\\n-- The use of randomly sampled learning rates for deep learning\\nmodels is novel and easy to implement. It can become a good\\napproximation of using SGD with the optimal learning rate.\\n\\n-- The paper is well-written and easy to follow. The proposed\\nmethod is illustrated in a clear way.\\n\\n-- The experiments are solid, and the performance on three\\ndifferent architectures is shown for comparison. According to the\\nexperiments, the proposed method is not sensitive to the\\nhyper-parameters \\\\eta_{min} and \\\\eta_{max}.\\n\\n#Cons:\\n\\n-- The authors have not given any theoretical convergence analysis\\nof the proposed method.\\n\\n-- Out of all four experiments, the proposed method only\\noutperforms Adam once, which does not look like strong support.\\n\\n-- Alrao achieves good performance with SGD, but not with Adam.\\nAlso, there are no experimental results on Alrao with other\\noptimization methods.\\n\\n#Detailed comments:\\n\\n(1) I understand that Alrao will be more efficient compared to\\napplying SGD with different learning rates, but will it be more\\nefficient compared to Adam? 
No clear clarification or experimental\\nresults have been shown in the paper.\\n\\n(2) The units with proper learning rates could learn well and\\nconstruct good subnetworks. I am wondering if the units with \\\"bad\\\"\\n(too small or too large) learning rates might have a bad influence\\non the convergence or performance of the whole network.\\n\\n(3) The experimental setting is not clear, such as, how the input is\\nnormalized, how data augmentation is used in the training phase,\\nand what are the depth, width and other settings for all three\\narchitectures.\\n\\n(4) The explanation of the influence of using random learning rates\\nin the final layer is not clear to me.\\n\\n(5) Several small comments regarding writing:\\n (a) Is the final classifier layer denoted as $C_{\\\\theta^c}$ or $C_{\\\\theta^{cl}}$ in the third paragraph of \\\"Definitions and notations\\\"?\\n (b) In algorithm 1, what is the stopping criterion for the do-while? The \\\"Convergence ?\\\" in the while condition is confusing.\\n (c) Is the learning curve in Figure 2 from one run or is it the average of all runs? Are the results consistent for each run? How about the learning curves for VGG19 and LSTM, do they have similar learning curves to the two architectures in Figure 2?\\n (d) For Figure 3, it will be easier to compare the performance on the training and test set if the color bars for the two figures share the same range.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
"{\"title\": \"not convinced it's worth it\", \"review\": \"POST REBUTTAL: I think the paper is decent, there are some significant downsides to the method but it could constitute a first step towards a more mature learning-rate-free method. However, in its current state the paper is left with some gaping holes in its experiment section. The authors tried to add experiments on Imagenet, but these experiments apparently didn't finish before the end of the rebuttal period. For that reason, the paper probably should not be accepted for publication (even if the authors manage to finish running these experiments, we would not have a chance to review these results).\\n\\n------\\nOriginal review\\n------\\n\\nIn this paper, the authors present a method for training deep networks with randomly sampled feature-wise learning rates, removing the need for fixed learning rates and their tuning. The method is shown to perform comparatively to SGD with a learning rate roughly optimized with regards to validation performance. The method applies to the most popular types of deep learning architectures, which includes fully connected layers, convolutional layers and recurrent cells.\", \"quality\": \"The paper is of a decent quality in general, I noticed no glaring omissions while reading the paper. However, I do worry that the method provides little gain for a lot of work. It is becoming more and more easy to tune the learning rate of deep learning models with strategies such as early stopping, and this method comes at a high cost for models with a big final layer.\", \"clarity\": \"The paper is well written, but the reader is often (too often?) sent to the Appendix, which is itself ordered in a strange way (e.g., the first reference to the Appendix in the paper refers to Appendix F?). If some sections of the Appendix are not needed, I would remove them.\", \"originality\": \"The work is original in the approach, i.e. 
randomization as a way to get rid of learning rates is a novel method. However, there was one work presented last year at NIPS which concerns itself with the same problem, which is getting rid of learning rates:\\n\\n\\u201cTraining Deep Networks without Learning Rates Through Coin Betting\\u201d by Francesco Orabona and Tatiana Tommasi, NIPS, 2017.\\n\\nThey don\\u2019t compare on the same methods and the same datasets, but I think the authors should be aware of this work and perhaps compare themselves with it. The work takes a very different approach to solve the problem so I don\\u2019t think it\\u2019s an issue for this paper.\", \"significance\": \"I think the work is important, in that it adds another tool to solve the learning rate problem. I would not say it is likely to have a very high impact, because it involves a lot of work, for little benefit. Furthermore, the cost of reproducing multiple times the last layer of the network will be prohibitive in many cases for NLP.\\n\\nThe method feels ad-hoc in many respects, and there are no guarantees that it would work any better than Adam does on pathological cases. Perhaps some mathematical analysis on simpler problems would help make the contributions stronger. \\n\\nThe authors state that the learning rate range has little impact on performance, yet it still has enough impact to justify tuning it for different models and datasets (on CIFAR it is 10^-5 to 10^1, on Pennbank it is 10^-3 to 10^2). I would tend to agree that the alrao method is more robust to the choice of learning rate than plain SGD, however the fact of the matter is that there are still parameters to tune. \\n\\nFigure 5. also seems to suggest that the range is important, although the models were not trained until the end, so it is not clear.\", \"some_additional_comments\": \"\", \"nitpicking\": \"In Section 2, most sub-sections (or paragraphs titles?) have the name of the method in them. That\\u2019s redundant. 
Instead of \\u201cAlrao principle\\u201d, \\u201cAlrao update\\u201d, etc., just write \\u201cPrinciple.\\u201d, \\u201cUpdate.\\u201d.\\n\\nIs there a justification for using the same learning rate for all weights in an LSTM unit? \\n\\nI believe there is a mistake in Equation 2. The denominator should be log(\\\\eta_{max}) - log(\\\\eta_{min})\\n\\n[second paragraph on page 4.] Once again nitpicking for the sake of clarity: \\u201cFor each classifier C\\u03b8 cl j, we set a learning rate log eta_j = \\u2026\\u201d this reads as if the learning rate would be set to log eta_j, but you probably mean you will set the learning rate to eta_j = exp(...).\\n\\nFigure 5b in the appendix does not specify which curve has which learning rate interval.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting method, but more experimental results are needed\", \"review\": \"In this paper the authors propose a method called \\u201cAll learning rates at once\\u201d (Alrao) which aims to save the time needed to tune the learning rate when testing DNN models. The method sets an individual learning rate for each feature in each layer of a network, using values sampled from a truncated log-uniform distribution. The only cost of the method is the creation of several branches of the classifier layer. Each of the branches is trained with a predefined learning rate value, and the final predictions are obtained by model averaging. In the presented experiments Alrao demonstrates performance comparable to SGD with an optimal learning rate and more stable results compared to Adam. The authors indicate limitations of Alrao caused by the overhead in the final layer, which complicates the application of the method to models with a large classifier layer.\\n\\nOverall, the paper is written clearly and organized well. However, Equation (2) needs to be corrected. The denominator in the normalizing constant of the log-uniform distribution should be \\\\log\\\\eta_{max} - \\\\log\\\\eta_{min}.\\n\\nMy main concern is related to the experimental evaluation of the method. I find the experimental evidence for the effectiveness of Alrao insufficient. As the authors propose to employ the method to quickly evaluate models and select the best models for further training, it would be beneficial to have more results in order to ensure that the method is reliable in this setting. Other demonstrations, possibly showing that the method enhances the performance of architecture search methods, may emphasize the significance of the proposed method. Also, more experiments comparing Alrao against sampling learning rates per weight are needed. Given the current results, it is still unclear whether the proposed method performs better. 
Finally, I recommend including comments explaining how much more time is needed in practice to train a model with Alrao compared to SGD training.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Byx93sC9tm | Deep Ensemble Bayesian Active Learning : Adressing the Mode Collapse issue in Monte Carlo dropout via Ensembles | [
"Remus Pop",
"Patric Fulop"
] | In image classification tasks, the ability of deep convolutional neural networks (CNNs) to deal with complex image data has proved to be unrivalled. Deep CNNs, however, require large amounts of labeled training data to reach their full potential. In specialised domains such as healthcare, labeled data can be difficult and expensive to obtain. One way to alleviate this problem is to rely on active learning, a learning technique that aims to reduce the amount of labelled data needed for a specific task while still delivering satisfactory performance.
We propose a new active learning strategy designed
for deep neural networks. This method improves upon the current state-of-the-art deep Bayesian active learning method, which suffers from the mode collapse problem. We correct for this deficiency by making use of the expressive power and statistical properties of model ensembles. Our proposed method manages to capture superior data uncertainty, which translates into improved classification performance. We demonstrate empirically that our ensemble method yields faster convergence of CNNs trained on the MNIST and CIFAR-10
datasets. | [
"Active Learning",
"Deep Learning",
"Bayesian Neural Networks",
"Bayesian Deep Learning",
"Ensembles"
] | https://openreview.net/pdf?id=Byx93sC9tm | https://openreview.net/forum?id=Byx93sC9tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkeS6nQglV",
"ByxPYoUPy4",
"rke6sOVz1V",
"HkgHpnJ9Cm",
"SkesRmzXR7",
"H1eWGlaxRX",
"BygRYc8eA7",
"B1xXG3110Q",
"rylwcjykCX",
"rygdEjy1CX",
"BJgzh7Schm",
"B1g0bJk5h7",
"rJgwCU_82Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544727740976,
1544149887138,
1543813284633,
1543269565307,
1542820819347,
1542668297287,
1542642309954,
1542548491450,
1542548367445,
1542548272479,
1541194666091,
1541168901665,
1540945614793
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper735/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper735/Authors"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper735/Authors"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper735/Authors"
],
[
"ICLR.cc/2019/Conference/Paper735/Authors"
],
[
"ICLR.cc/2019/Conference/Paper735/Authors"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper735/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers in general found the paper approachable, well written and clear. They noted that the empirical observation of mode collapse in active learning was an interesting insight. However, all the reviewers had concerns with novelty, particularly in light of Lakshminarayanan et al. who also train ensembles to get a measure of uncertainty. An interesting addition to the paper might be some theoretical insight about what the model corresponds to when one ensembles multiple models from MC Dropout. One reviewer noted that it's not clear that the ensemble is capturing the desired posterior.\\n\\nAs a note, I don't believe there is agreement in the community that MC dropout is state-of-the-art in terms of capturing uncertainty for deep neural networks, as argued in the author response (and the abstract). To the contrary, I believe a variety of papers have improved over the results from that work (e.g. see experiments in Multiplicative Normalizing Flows from over a year ago).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A well written paper with some interesting insights but lacking novelty\"}",
"{\"title\": \"Response\", \"comment\": \"I agree that the stochastic ensemble is better than the plain ensemble empirically, but the gain is incremental and lacks theoretical support.\\n\\nAgain, I think the proposed method does not align with the initial goal. Your initial aim was to correct the mode collapse problem in estimating the posterior, but the proposed method does not even estimate the posterior from a Bayesian view.\\n\\nI will be glad to see the theoretical analysis in the updated version, which I believe will make the paper more convincing.\"}"
"{\"title\": \"Response\", \"comment\": \"I agree that in the context of active learning, computational time is not the main constraint. However, I think if you're going to use more computational power for your method, then you should make the baselines have more computational power. In general, one would expect from a theoretical standpoint that ensembling more models will improve performance. Beluch et al. (2018) state \\\"The performance of the ensemble-based approach is only slightly impacted by the number of members\\\", which implies that performance was impacted, just slightly. One could argue that the impact of the MC-dropout on top of ensembling is only slight as well. I'm still not convinced of the importance or novelty of this method.\"}",
"{\"title\": \"thank you for your comments\", \"comment\": \"1. Again, the main difference between the classical ensembles, where one initializes the networks with random initializers (as done by Lakshminarayanan et al. and us as well), and the dropout approach is that our dropout approach is *not* deterministic: the mask is sampled, so a given mask occurs with some probability.\\n\\\"With dropout, we sample binary variables for every input\\npoint and for every network unit in each layer (apart from\\nthe last one). Each binary variable takes value 1 with probability\\npi for layer i. A unit is dropped (i.e. its value is set\\nto zero) for a given input if its corresponding binary variable\\ntakes value 0. We use the same values in the backward\\npass propagating the derivatives to the parameters.\\\" \\n\\nFurthermore, I think one of the best explanations is given in this relatively new paper: https://arxiv.org/pdf/1805.09208.pdf\\n\\n2. Yes, adversarial training is very much part of our future work, as is comparing with other ensemble methods. We thank you again for your reviews; we hope to include these in a future version of the paper.\"}"
"{\"title\": \"response to clarifications\", \"comment\": \"1. Sorry, I do not follow the argument. Obviously, fixing the seed of the random number generator leads to a deterministic output of the neural network, but this also holds for dropout. As one can fix the seed for sampling mini-batches or initializing the weights, one can also fix the seed for sampling from the Bernoulli distribution, which leads to the exact same dropout mask. Besides that, why does one want to fix the seed for the method by Lakshminarayanan et al. in the first place? So the question of why the method by Lakshminarayanan et al. is to be considered a deterministic ensemble technique remains open.\\n\\n\\n2. The paper only compares an ensemble of neural networks where each network is initialized with different random weights to an ensemble of neural networks where each network is trained with Dropout MC. It is true that the first method resembles the work by Lakshminarayanan et al. (even though it is not clear whether they use the same proper scoring rule). However, one of Lakshminarayanan et al.'s main contributions was also to show that one gets better ensembles if one uses adversarial training rather than just random initializations. That should also be possible for the setting here, right?\\n\\nFurthermore, there are also other baseline ensemble techniques, such as bootstrapping (e.g. https://pdfs.semanticscholar.org/dde4/b95be20a160253a6cc9ecd75492a13d60c10.pdf) or the snapshot ensembles already mentioned above. Again, due to the limited novelty, I believe at least a thorough comparison to existing ensemble methods would make the paper stronger.\"}"
"{\"title\": \"clarifications\", \"comment\": \"1.\\nThe reason the ensembling technique he describes is deterministic is because the method he describes \\\"We treat the ensemble as a uniformly-weighted mixture model and combine the predictions..\\\" is \\nusing the average of M models each trained with different initializations. \\nThe random shuffling he talks about I believe has to do with how the classifier is fed the data, which in the case of active learning this is done progressively via the the acquisition function, i.e. BALD or Entropy. \\n\\nInitialising a NN with random weights (say w ~ N(0,1)) will always yield the same result, given we set the same seed for our random number generator. \\nThen averaging over 3 such models (our ensemble number M), will always yield the same result given we know what our seeds are. \\n\\nNow compare this to MC-dropout, where you are sampling a binary vector from a Bernoulli distribution (your dropout mask) and applying this to each layer of your NN. This is stochastic because you\\u2019re **sampling** a binary vector from a Bernoulli distribution with parameter p_i = 0.3 for example. (dropout probability of 30%). And by sampling we mean, we construct a Bernoulli distribution with a known seed, from which we sample binary variables for the hidden units, corresponding to the probability of that unit being 'on' or 'off'. \\n\\nWhat Lakshminarayanan et al. did was to compare their ensembling method to MC-dropout for regression and classification, but not for an active learning scenario and not by combining both methods and finally not for a small dataset problem (and as stated before, by that we mean, starting with very little labelled examples)\\n\\n\\n2. \\nI would say that Beluch et al. showed that uncertainty can be better quantified via ensembles *in the active learning case* whereas what Lakshminarayanan et al. showed was that ensembles can give better uncertainty than dropout. 
\\n\\nWe proposed to combine the two for the active learning scenario and the small dataset problem, and showed superiority in both accuracy and uncertainty in comparison to either method evaluated independently, ensembles OR MC-dropout. We were inspired in our analysis by the approach of Lakshminarayanan et al. of using the Brier score for assessing the uncertainty quality (and found that our stochastic ensemble does have a better Brier score than the deterministic ensemble), as well as looking at the classification accuracy on an unseen dataset/distribution (NotMNIST).\\n\\nSo we did compare to both approaches (Lakshminarayanan et al. and Beluch et al.); I'm not sure which method you would like us to compare against.\"}
"{\"title\": \"response to novelty, related work and experiments\", \"comment\": \"* About stochastic vs deterministic ensembles *\\n\\nI do not understand why the work by Lakshminarayanan et al. is considered to be a deterministic ensemble technique. It uses random initialization of the neural network parameters as well as random shuffling of the data points to obtain diverse models (see Section 2.4 in the corresponding paper). Furthermore, Lakshminarayanan et al. already mentioned that one can also use other common ensemble techniques such as bagging; however, it might lead to suboptimal behavior.\\n\\n\\n* About the novelty * \\n\\nThe paper only shows that the mode collapse happens for Dropout MC, which apparently leads to overconfident predictions, and that ensembles help to cure this. As already said above, this has been investigated by others before (Lakshminarayanan et al.).\\n\\nThe novel part of the paper, compared to Beluch et al., which showed that ensembles of neural networks perform better than a Bayesian neural network trained with Dropout MC, is to also train the individual ensemble components with Dropout MC. I still think the novelty is not sufficient for acceptance and that the paper would be much more convincing if the authors could present a comparison to these existing methods.\\n\\n\\n* About related work *\\n\\nSorry, I was not very clear about that. What I mean is that there are other active learning settings such as Bayesian optimization or Bandits, where one learns a model while collecting data. Previous work has also explored using Bayesian neural networks in these settings, but, in hindsight, this is arguably only loosely connected to this approach.\"}"
"{\"title\": \"Ensemble and Bayesian dropout as posterior approximation\", \"comment\": \"We thank the reviewer for their valuable and insightful comments.\\n\\nWe are reviewing our work from a theoretical point of view and will update the paper very soon to reflect this. \\n \\nEven though we have not yet proved the above, we have empirically shown that the benefit of DEBAL over plain ensemble methods consists of a better representation of uncertainty, which is paramount in active learning. By better we mean \\n1) more meaningful and closer to what one would expect (Fig 4 & Fig 6 (right)) \\n2) better calibrated (Fig 6 (left)). \\nOur initial aim was not to compare stochastic ensembles with deterministic ensembles or single MC-dropout, but to correct for the mode collapse issue in estimating posteriors with MC-dropout. We have empirically shown that adding ensembles to this greatly improves the MC-dropout technique and outperforms the deterministic ensembles as well. \\nWe had similar doubts about the benefit of adding MC-Dropout to an ensemble. Therefore, we contrasted the performance of DEBAL against the plain ensemble method and showed empirically that DEBAL gives rise to better measures of uncertainty. \\nFinally, as we strive to make our assumptions hold theoretically, we agree that adding theoretical Bayesian support to our method is of great importance if we are to further improve the understanding of Bayesian deep learning.\\n\\nFor your final point, although Beluch et al. (2018) showed better performance for ensembles, we have shown this in the context of a small dataset problem (i.e. the size of the final dataset acquired during AL is only a small fraction of the entire available unlabelled dataset), which we believe is more relevant to real-world cases if AL is to become a widely used method. \\n\\nAs for the figures, we are aware of this and will try to make them clearer in a revised version.\"}",
"{\"title\": \"Paper novelty, experiments and alternative methods\", \"comment\": \"We thank our second reviewer for their comments. We first refer to your main comments and then answer each point in turn.\\n\\nThe work of Lakshminarayanan et al. indeed showed that deterministic ensembles can improve on the performance of MC-dropout techniques and provides a foundation for ours. And as Beluch et al. (2018) showed, this can be valuable in an active learning setting. However, our work differs in two major ways: \\n\\ni) We focus on showing that the uncertainty representations in these methods suffer from overconfident predictions and that combining the two methods into a stochastic ensemble can be of great benefit and improve on the quality of the uncertainty.\\n\\nii) We believe the true novelty to be in applying them in an active learning setting, and in particular on a small dataset problem (i.e. the size of the final dataset acquired during AL is only a small fraction of the entire available unlabelled dataset). As you mentioned, data is notoriously scarce and deep learning methods rarely work on small dataset problems. \\n\\nWe thank the reviewer for pointing us to the work of Huang et al. Indeed, this is an interesting method that would most likely allow us to achieve similar or better results with less computational overhead. This is definitely something we will consider for future work, but it is somewhat outside the main scope of the paper, which was to show the power of combining MC-dropout with ensembles in the active learning setting. Taking into account more advanced ensemble methods is definitely of interest.\\nIn terms of the Bayesian Optimization literature, this is definitely of interest if we are to focus on hyper-parameter tuning for our models, but we fail to see the connection of the work you mentioned to our active learning examples. Our focus was not on fine-tuning our models.
\\n\\nIn relation to your specific points, we answer these below: \\n\\n1) Gal has already shown in his PhD thesis that MC-Dropout almost always performs best in terms of prediction accuracy and uncertainty quality assessment when compared to alternative Bayesian neural network approaches such as Probabilistic Backpropagation and other variants of stochastic gradient MCMC methods. The aim of our paper was to improve upon MC-Dropout in the context of active learning, which would invariably translate into better performance w.r.t. other Bayesian NN approaches.\\n2) Beluch et al. (2018) showed that going beyond 3 networks in their deterministic ensemble method does not add any significant improvements in terms of performance. Therefore, we used this number when benchmarking against their method.\\n3) The aim of the paper was to improve upon the state-of-the-art in active learning for the image classification task. We specifically chose this task due to its relevance to the real world, especially in the medical imaging industry. We agree that a more comprehensive study could be done in order to assess the viability of our method for ML tasks other than image classification. As for other neural network architectures, we chose the one used in the benchmarked methods. \\n4) Results are averaged over 5 independent runs. We will include both this and confidence intervals in a revised version of our paper.\"}",
"{\"title\": \"Accuracy and uncertainty for small dataset took priority over computational time\", \"comment\": \"We thank our third reviewer for his comment.\\n\\nWe do understand your concern about the significant increase in computational time. However, we believe that in the context of active learning, the main problem is not related to computational power, rather to the scarcity of data. Therefore, a better way of making the most out of little data is critical. For example, a 10 \\\\% increase for only 300 samples acquired, could make a huge difference in a critical field where active learning is most valuable. We believe that this is exactly what we manage to achieve with our method and this comes as a result of a better representation of uncertainty during AL. \\n\\nFurthermore, Beluch et al. (2018) showed that going beyond 3 networks in their deterministic ensemble method does not add any significant improvements in terms of performance. Therefore we use 3 stochastic ensembles for our method.\\n\\nAs for the novelty of this method, although it seems more like an engineering solution, we believe that it makes a significant contribution in the field of deep active learning.\"}",
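A minimal sketch of the prediction scheme under discussion: an ensemble of M = 3 independently trained dropout networks, each sampled with T stochastic forward passes, with all M*T predictions averaged. The toy "model" below is a stand-in for a real dropout network, so the names, shapes, and jitter magnitude are illustrative assumptions:

```python
import random

def predict_stochastic_ensemble(models, x, T=10):
    """Average class probabilities over T stochastic passes of each model."""
    samples = [m(x) for m in models for _ in range(T)]
    n_classes = len(samples[0])
    return [sum(s[k] for s in samples) / len(samples) for k in range(n_classes)]

def make_toy_dropout_model(bias, seed):
    """Stand-in for a dropout net: its output jitters on every forward pass."""
    rng = random.Random(seed)
    def model(x):
        p = min(0.99, max(0.01, bias + rng.uniform(-0.05, 0.05)))
        return [p, 1.0 - p]
    return model

models = [make_toy_dropout_model(0.7, s) for s in range(3)]  # M = 3, as here
probs = predict_stochastic_ensemble(models, x=None, T=10)
```

The computational cost raised by the reviewer is visible directly: M*T forward passes per query, versus M for a deterministic ensemble.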
"{\"title\": \"Ensemble of MC-Dropout models is not an approximation of the posterior\", \"review\": \"The authors propose to use the combination of model ensembles and MC dropout in Bayesian deep active learning. They empirically show that there exists a mode collapse problem due to MC dropout, which can be regarded as a variational approximation. The authors introduce an ensemble of MC-Dropout models with different initializations to remedy this mode collapse problem.\\n\\nThe paper is clearly written and easy to follow. It is interesting to empirically show that the mode collapse problem of MC-Dropout is important in active learning. \\n \\nThe major concern I have is that the ensemble of MC-Dropout models is not an approximation of the posterior anymore. Each MC-Dropout model is an approximation of the posterior, but the ensemble of them may not be. Therefore, it is a little misleading to still call it Bayesian active learning. Also, the ensemble of MC-Dropout models does not have theoretical support from the Bayesian perspective. \\n\\nThe motivation for the proposed method is to solve the mode collapse problem of MC-Dropout, but using an ensemble loses the Bayesian support benefit of MC-Dropout. So it does not seem to be a reasonable solution for the mode collapse problem of MC-Dropout. It is not clear to me why we need to add MC-Dropout to the ensemble. What is the benefit of DEBAL over an ensemble method if both of them do not have Bayesian theoretical support?\\n\\nIn terms of the empirical results, the better performance of DEBAL compared to a single MC-Dropout model is not surprising, as Beluch et al. (2018) already demonstrated that an ensemble is better than a single MC-Dropout model.
The improvement of DEBAL compared to an ensemble is marginal but reasonable.\\n\\nThe labels of the figures are hard to read.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
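For context on the acquisition side of this debate: whether or not one grants the ensemble a Bayesian interpretation, the mutual-information (BALD-style) acquisition commonly used in Bayesian active learning is computed the same way from stochastic forward-pass samples. A hedged sketch with toy numbers, not code from the paper:

```python
import math

def entropy(p):
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

def mutual_information(samples):
    """samples: per-pass class-probability vectors for a single input.
    High when individual passes are confident but disagree with each other."""
    n = len(samples[0])
    mean = [sum(s[k] for s in samples) / len(samples) for k in range(n)]
    return entropy(mean) - sum(entropy(s) for s in samples) / len(samples)

# Confident disagreement (exactly what mode collapse suppresses) scores high;
# confident agreement scores ~0.
disagree = mutual_information([[0.99, 0.01], [0.01, 0.99]])
agree = mutual_information([[0.99, 0.01], [0.99, 0.01]])
```

Under mode collapse, all passes agree, so this quantity stays near zero and the acquisition loses its signal; the ensemble restores the disagreement term.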
"{\"title\": \"Paper contains only little novelty and the experiments are not sufficiently thorough\", \"review\": \"The paper shows that Bayesian neural networks, trained with Dropout MC (Gal et al.), struggle to fully capture the posterior distribution of the weights.\\nThis leads to over-confident predictions, which is problematic particularly in an active learning scenario.\\nTo prevent this behavior, the paper proposes to combine multiple Bayesian neural networks, independently trained with Dropout MC, into an ensemble.\\nThe proposed method achieves better uncertainty estimates than a single Bayesian neural network and improves upon the baseline in an active learning setting for image classification.\\n\\n\\nThe paper addresses active deep learning, which is certainly an interesting research direction since in practice, labeled data is notoriously scarce. \\n\\nHowever, the paper contains only little novelty and does not provide sufficiently new scientific insights.\\nIt is well known from the literature that combining multiple neural networks into an ensemble leads to better performance and uncertainty estimates.\\nFor instance, Lakshminarayanan et al. [1] showed that Dropout MC can produce overconfident wrong predictions and, by simply averaging predictions over multiple models, one achieves better performance and confidence scores. Also, Huang et al. [2] showed that by taking different snapshots of the same network at different timesteps, performance improves.\\nIt would also be great if the paper could relate to other existing work that uses Bayesian neural networks in an active learning setting such as Bayesian optimization [3, 4] or Bandits [5].\", \"another_weakness_of_the_paper_is_that_the_empirical_evaluation_is_not_sufficiently_rigorous\": \"1) Besides a comparison to the work by Lakshminarayanan et 
al., I would also like to have seen a comparison to other existing Bayesian neural network approaches such as stochastic gradient Markov-Chain Monte-Carlo methods.\\n\\n 2) To provide a better understanding of the paper, it would also be interesting to see how sensitive it is with respect to the ensemble size M. \\n \\n 3) Furthermore, for the experiments only one neural network architecture was considered, and it remains an open question how the presented results translate to other architectures. The same holds for the type of data, since the paper only shows results for image classification benchmarks.\\n \\n 4) Figure 3: Are the results averaged over multiple independent runs? If so, how many runs did you perform, and could you also report confidence intervals? Since all methods are close to each other, it is hard to estimate how significant the difference is.\\n \\n\\n\\n\\n[1] Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles\\nBalaji Lakshminarayanan, Alexander Pritzel, Charles Blundell\\nNIPS 2017\\n\\n[2] Gao Huang and Yixuan Li and Geoff Pleiss and Zhuang Liu and John E. Hopcroft and Kilian Q. Weinberger\", \"snapshot_ensembles\": \"Train 1, get M for free\\n ICLR 2017\\n\\n[3] Bayesian Optimization with Robust Bayesian Neural Networks\\n J. Springenberg and A. Klein and S. Falkner and F. Hutter\\n NIPS 2016\\n \\n[4] J. Snoek and O. Rippel and K. Swersky and R. Kiros and N. Satish and N. Sundaram and M. Patwary and Prabhat and R. Adams\\n Scalable Bayesian Optimization Using Deep Neural Networks\\n ICML 2015\\n\\n[5] Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling\\n Carlos Riquelme, George Tucker, Jasper Snoek\\n ICLR 2018\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clear writing but only mild improvement for computational cost.\", \"review\": \"This paper introduces a technique using ensembles of models with MC-dropout to perform uncertainty sampling for active learning.\\n\\nIn active learning, there is generally a trade-off between data efficiency and computational cost. This paper proposes a combination of existing techniques, not just ensembling neural networks and not just doing MC dropout, but doing both. The improvements over basic ensembling are rather minimal, at the cost of extra computation. More specifically, the data efficiency (factor improvement in data to achieve some accuracy) of the proposed method over using a deterministic ensemble is around just 10% or so. On the other hand, the proposed algorithm requires 100x more forward passes when computing the uncertainty (which may be significant, unclear without runtime experiments). As a concrete experiment to determine the importance, what would be the accuracy and computational comparison of ensembling 4+ models without MC-dropout vs. 3 ensembled models with MC-dropout? At the point (number of extra ensembles) where the computational time is equivalent, is the learning curve still better?\\n\\nThe novelty of this method is minimal. The technique basically fills out the fourth entry in a Punnett square.\\n\\nThe paper is well-written, has good experiments, and has a comprehensive related work section.\\n\\nOverall, this paper is good, but is not novel or important enough for acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
S1xq3oR5tQ | A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs | [
"Jack Lindsey",
"Samuel A. Ocko",
"Surya Ganguli",
"Stephane Deny"
] | The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. 
These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system. | [
"visual system",
"convolutional neural networks",
"efficient coding",
"retina"
] | https://openreview.net/pdf?id=S1xq3oR5tQ | https://openreview.net/forum?id=S1xq3oR5tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gtv5p0JE",
"B1gEC2BlA7",
"BJx4_OpjpQ",
"B1xQQuaiam",
"rkxrOwpjpQ",
"rkls0Uasp7",
"H1gQhLTipX",
"r1eiULeC3Q",
"HJgMwJe5hm",
"Hyg8wO1rhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544637025437,
1542638796485,
1542342763680,
1542342683470,
1542342508593,
1542342355282,
1542342315442,
1541437010532,
1541173081556,
1540843613784
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper734/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper734/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper734/Authors"
],
[
"ICLR.cc/2019/Conference/Paper734/Authors"
],
[
"ICLR.cc/2019/Conference/Paper734/Authors"
],
[
"ICLR.cc/2019/Conference/Paper734/Authors"
],
[
"ICLR.cc/2019/Conference/Paper734/Authors"
],
[
"ICLR.cc/2019/Conference/Paper734/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper734/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper734/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper advocates neuroscience-based V1 models to adapt CNNs. The results of the simulations are convincing from a neuroscience perspective. The reviewers unequivocally recommend publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"a good step in bringing computational neuroscience and CNNs together\"}",
"{\"title\": \"My concerns were addressed\", \"comment\": \"Thank you for your rebuttal. The new analyses rigorously address my concerns.\"}",
"{\"title\": \"Continued\", \"comment\": \"[4. Whitened inputs can probably be represented more efficiently in a network trained with L2-regularization and/or SGD]\\nWe thank the reviewer for this interesting explanation that we could directly verify in our model. In the case of a deep brain network, where the retinal processing is quasi-linear, the increased separability allowed by the retinal pre-processing could be due to (1) the linear whitening or (2) the slightly non-linear part of the retinal response (3) a combination of both linear and non-linear processing. To distinguish between these hypotheses, we replaced in a new experiment the true retinal processing by its best linear approximation, retrained the brain network on the output of this linearized retina and tested whether separability was as good as with the true retinal processing. We found that the first layer trained on the output of the linearized retinal representation was indeed much better than the first layer of the control network (trained directly on natural images) at separating classes of objects, suggesting that the linear whitening operation done by the retina is indeed especially transformable into linearly separable representations by a downstream neural network. We added this analysis in the appendix.\"}",
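The "best linear approximation" step described in this reply can be sketched as an ordinary least-squares fit from inputs to (nonlinear) responses. The data, shapes, and rectified toy "retina" below are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))            # toy "image" inputs
W_true = rng.normal(size=(16, 4))
responses = np.maximum(X @ W_true, 0.0)   # toy rectified "retinal" output

# Best linear (affine) approximation of the nonlinear map, via least squares:
Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # append intercept column
W_lin, *_ = np.linalg.lstsq(Xa, responses, rcond=None)
linearized = Xa @ W_lin                   # the "linearized retina" output

# The linear fit explains part, but not all, of the rectified response.
resid = float(np.mean((responses - linearized) ** 2))
base = float(np.mean((responses - responses.mean(0)) ** 2))
```

Retraining the downstream network on `linearized` instead of `responses` then isolates the contribution of the linear (whitening) component to separability.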
"{\"title\": \"Thank you for thoughtful suggestions - Clarity and quantification concerns addressed\", \"comment\": \"We thank the reviewer for their positive appreciation, and for their thoughtful suggestions, which we took into account.\\n\\n[Main concern - Quantifications for Fig 2A,B and C]\", \"fig2a_and_2b\": \"We quantified our result about the isotropy of retinal filters by measuring orientedness of the filters at the retinal output and in V1 on 10 different instantiations of the network. We show that the retinal filters are significantly more isotropic than the RFs of both the control network (without bottleneck) and V1.\\nFig 2C (Hubel and Wiesel hypothesis): We quantified the anisotropy of the weight filter pooling from the retina to form oriented filters in V1, and again we found that these weight matrices are significantly oriented, confirming, in our model, the hypothesis of Hubel and Wiesel that simple cells in V1 are built by pooling successive center-surround filters of the preceding layer aligned in a row. These quantifications are now referred to in the main text and detailed in the appendix.\\n\\n[1. Bottleneck is a sufficient constraint, not a necessary constraint]\\nWe agree with the reviewer that we cannot eliminate other hypotheses about the origin of isotropic filters in the biological retina. We have softened the claim everywhere in the manuscript as suggested. Note that we mention in the discussion our attempt to reproduce the results of Karklin and Simoncelli (2011), who could successfully obtain center-surround RFs with a constraint on firing rate, but with a different objective function (information preservation). However, we cannot totally eliminate the possibility that, with different network parameters, we could also obtain center-surround RFs with a constraint on total firing rate under this object recognition objective.\\n\\n[2. 
Cell types in the retina]\\nWe understand the confusion of the reviewer and we clarify this point both here and in the manuscript. The retina is organized in layers of different types of neurons (photoreceptors, bipolar cells, ganglion cells, etc), and in each of these layers the neurons can be subdivided in many subtypes: these subtypes we referred to as types in the article, which might have led to the confusion. For instance in the primate retina, there exist 20 subtypes of ganglion cells, each with different receptive field size, polarity, non-linearities etc (Dacey 2004). Each of these subtypes tile the entire visual field like a convolutional channel in a layer of a CNN and each cell of a given subtype has a stereotyped receptive field, so this is why there is a strong analogy between channels in our model and biological subtypes. We wanted to test whether the emergence of center-surround RFs in the retina was a consequence of reducing the number of channels (i.e. subtypes), or the number of neurons, and this is why we carried out the experiment described in the section \\u201cEmergence of ON and OFF populations of center-surround cells in the retina\\u201d where we untied the weights of the network. We find that the emergence of center-surround is not specifically dependent on the number of types that we allow (it only depends on the number of cells that we allow), and furthermore we find that the cells naturally arrange in two clusters of ON and OFF cells when we allow them to differentiate, which is an interesting side-observation because the polarity axis is the first axis of ganglion cell subtype classification in the retina.\\n\\n[3. 
implications of the nonlinearity being due to the first or second stage]\\nThis analysis was directed at retinal experts who might want to test our predictions, and that might wonder what stage of the linear processing is responsible for the non-linearity of the retinal response as we decrease neural resources allocated to the brain. The two main sources of non-linearity in the retina are thought to be the inner retina rectification (bipolar and amacrine cells, corresponding to the first stage non-linearity in our model) and the ganglion cell rectification (corresponding to the second stage non-linearity in our model). We find that both stages become more non-linear as we decrease brain resource, which makes an interesting prediction for experimentalists. We clarified the motivation for this analysis and the corresponding prediction that it makes in the manuscript.\"}",
"{\"title\": \"Thanks\", \"comment\": \"We thank the reviewer for their positive assessment. We agree that some of these observations could be expected but it is the first time to our knowledge that cross-layer and cross-species differences in early visual representations are recapitulated and accounted for in a single unified model of the visual system.\\n\\nWe thank the reviewer for this interesting reference that we added as an example of how deep networks can be used to model the human visual system.\"}",
"{\"title\": \"Complementary analyses [2]\", \"comment\": \"[More density in terms of results and ablation studies.]\\nWe have now quantified our main result about the isotropy of retinal filters by measuring isotropy of the filters at the retinal output and in V1, on 10 different instantiations of our network. We show that the retinal filters are significantly more isotropic in the network with bottleneck than in the control network without bottleneck. We also find that the filters in V1 are significantly more oriented than in the retina-net. We added these quantifications in the appendix. (see App A)\", \"ablation_study\": \"Following a suggestion of Rev. 1, we investigated in depth whether, in the case of a deep brain network, where the retinal processing is quasi-linear, the increased separability allowed by the retinal pre-processing is due to the linear (whitening) or non-linear aspects of the retinal pre-processing (fig 3F). To test this, we replaced the actual retinal processing by its best linear approximation (i.e. this is a functional ablation). We then retrained the brain network on the output of this linearized retina and tested whether separability was as good as with the true, slightly non-linear retinal processing. We found that separability in the very first layer of the VVS-net was already much stronger than that of a VVS-net trained directly on natural images (without retina). This result demonstrates that linear whitening does indeed play a crucial role in making the representation easily transformable by subsequent layers into a linearly separable representation. We added this analysis in the main text and appendix. (see section 4.2 and App E)\\n\\nFinally, we did a new analysis suggesting even more strongly that the retinal representation is indeed a trade-off between feature extraction and linear transmission of visual information. 
For 10 instantiations of a network with a retinal bottleneck containing 4 channels, we represented the linearity of each of these 4 channels against the linear separability of object categories obtained from these representations. We found, across all networks, a systematic negative correlation between linearity and linear separability across all 4 channels. This result strongly suggests that extracting features and transmitting visual information are indeed two competing goals shaping retinal representations. We added these new results in the appendix. (see section 4.2 and App D)\\n\\n\\nIn summary, we have added 5 new complementary analyses to the article, making it substantially denser in terms of both results and ablation studies.\"}",
"{\"title\": \"Complementary analyses [1]\", \"comment\": \"We thank the reviewer for the positive comments about the paper, and try to address their concerns below.\\n\\n[Role of normalization mechanisms.]\\nLocal normalization is a ubiquitous source of non-linearity in the visual system (see Geisler and D.G. Albrecht 1992 for an example in the cortex, and Deny et al 2017 for an example in the retina), and in ML it is used to enhance the contrast of images (Lyu and Simoncelli 2008, http://www.cns.nyu.edu/pub/lcv/lyu08b.pdf) and in image compression algorithms (Balle et al, ICLR 2017 https://openreview.net/forum?id=rJxdQ3jeg). We thus tested the robustness of our main results to a more realistic model of the visual system with local normalization by adding local normalization at every layer of the network. We found that receptive fields still emerge as center-surround in the retina-net and as oriented in our model of V1 when we impose a bottleneck. Interestingly, the normalization slightly degraded the performance of the network on the task for all parameter settings we tried. We added this complementary analysis in the main text and appendix of the article. (see section 3.1 and App C)\\n\\n[Distinction between simple and complex cells.]\\nIt is an interesting question to ask whether neurons in our model of the VVS are more similar to simple or complex cells. To test this, we performed a one-step gradient ascent on the neural activity of VVS neurons with respect to the image, starting from several random initial images. If the neurons were acting as simple cells (i.e. approximately linear in the stimulus), we would expect all optimized stimuli to converge to the same preferred stimulus. On the other hand, if the cells were complex (i.e. an OR function between several preferred stimuli), we would expect the emergent preferred stimuli to depend on the exact initialization. 
Interestingly, we found that most neurons in the first layer of the VVS-net behaved as simple cells, whereas most neurons in the second layer of the VVS-net behaved as complex cells. Note that in biology, both simple and complex cells are found in V1. These results expose the fact that anatomical regions of visual cortex involve multiple nonlinearities and hence may map onto more than one layer of our simple model. Indeed, V1 itself is a multilayered cortical column, with LGN inputs coming into layer 4, and layer 4 projecting to layers 2 and 3. Simple cells are predominantly found in layer 4 and complex cells are predominantly found in layers 2 and 3. These observations bolster the interpretation that biological V1 may correspond to multiple layers in our model. We added these interesting results and observations in the main text and appendix. (see section 3.1 and App B)\\n\\n[Role of the thalamo-cortical loop.]\\nThe recurrence of the thalamo-cortical loop also plays an essential role in the computations of the visual system and it would be very important to understand the role of this recurrence. However, in this study we chose to focus on explaining the discrepancy between the geometry of RFs in the retina and V1, and on the differences in the non-linearity of retinal processing across species. To model these phenomena, our approach was to find the simplest model that would yield those two phenomena. Intriguingly, our results show that modeling the thalamo-cortical loop is not necessary to yield the emergence of center-surround receptive fields in the retina-net and oriented receptive fields in the V1 layer (first layer of VVS-net). Moreover, note that a number of studies of the visual system using those same simplifying assumptions (simple neurons, no recurrence, Yamins et al 2014, Cadena et al. 2017) have found good agreement of the predictions of their models with the visual system. 
Also, almost all classical efficient coding theories going back to Atick and Redlich and Olshausen and Field assume no top-down feedback, so it is important to first compare to them using a model without top-down feedback as a first step.\"}",
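The one-step gradient-ascent probe for distinguishing simple from complex cells can be illustrated on a toy pair of units. The 2-D "images" and filters here are hypothetical stand-ins, not the model's actual RFs: a simple (linear) unit drives every starting image toward the same preferred stimulus, while a complex (OR-of-filters) unit's ascent direction depends on the initialization:

```python
f1 = (1.0, 0.0)   # toy filters standing in for oriented RFs
f2 = (0.0, 1.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def grad_simple(x):
    """Gradient of a linear response f1.x: the same direction for every start."""
    return f1

def grad_complex(x):
    """Gradient of max(f1.x, f2.x): direction depends on the starting image."""
    return f1 if dot(f1, x) >= dot(f2, x) else f2

starts = [(1.0, 0.0), (0.0, 1.0), (2.0, -1.0), (-1.0, 2.0)]
simple_dirs = {grad_simple(x) for x in starts}    # one preferred stimulus
complex_dirs = {grad_complex(x) for x in starts}  # init-dependent stimuli
```

Counting the distinct ascent directions over many random starts is the essence of the test: one direction indicates a simple cell, several indicate a complex cell.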
"{\"title\": \"Review of The effects of neural resource constraints on early visual representations\", \"review\": \"EDIT: On the basis of revisions made to the paper, which significantly augment the results, the authors note: \\\"the call for papers explicitly mentions applications in neuroscience as within the scope of the conference\\\" which clarifies my other concern. For both of these reasons, I have changed my prior rating.\\n\\nThis paper is focused on a model of early visual representation in recognition tasks drawing motivation from neuroscience. Overall the paper is an interesting read and reasonably well written (albeit with some typos). The following addresses the positives and negatives I see associated with this work:\", \"positives\": [\"There are relatively few efforts that focus heavily on more shallow models with an emphasis on representation learning, and for this reason this paper fills an important space\", \"The connections to neuroscience are interesting albeit it's unclear the extent to which this is the mandate of the conference\", \"The most interesting bit of the paper to me is the following: \\\"A bottleneck at the output of the retina yielded center-surround retinal RFs\\\" - it is somewhat a foregone conclusion that most networks immediately converge on orientation selective and color opponent representations. That this model produces isotropic filters is a very interesting point.\"], \"negatives\": [\"The work feels a little bit shallow. It would have been nice to see a bit more density in terms of results and ablation studies. This also relates to my second point.\", \"Given the focus on early visual processing, there seems to be a missed opportunity in examining the role of normalization mechanisms or the distinction between simple and complex cells. If the focus resides in the realm of neuroscience and early visual representation, there is an important role to these mechanisms. e.g. 
consider the degree of connectivity running from V1 to LGN vs. LGN to V1.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A great paper and a solid contribution to computational neuroscience\", \"review\": \"I enjoyed reading this paper which is a great example of solid computational neuroscience work.\\n\\nThe authors trained CNNs under various biologically-motivated constraints (e.g., varying the number of units in the layers corresponding to the retina output to account for the bottleneck happening at the level of the optic nerve or varying the number of \\\"cortical\\\" layers to account for differences across organisms). The paper is clear, the hypotheses clearly formulated and the results are sound. The implications of the study are quite interesting suggesting that the lack of orientation selectivity in the retina would arise because of the bottleneck at the level of the optic nerve. The continuum in terms of degree of linearity/non-linearity observed across organisms at the level of the retina would arise as a byproduct of the complexity/depth of subsequent processing stages. While these results are somewhat expected this is to my knowledge the first time that it is shown empirically in an integrated computational model.\", \"minor_point\": \"The authors should consider citing the work by Eberhardt et al (2016) which has shown that the exists an optimal depth for CNNs to predicting human category decisions during rapid visual categorization.\\n\\nS. Eberhardt, J. Cader & T. Serre. How deep is the feature analysis underlying rapid visual categorization? Neural Information Processing Systems, 2016.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting application of deep neural nets to neuroscience.\", \"review\": \"This paper addresses questions about the representation of visual information in the retina. The authors create a deep neural network model of the visual system in which a single parameter (bandwidth between the \\u201cretina\\u201d and \\u201cvisual cortex\\u201d parts) is sufficient to qualitatively reproduce retinal receptive fields observed across animals with different brain sizes, which have been hard to reconcile in the past.\\n\\nThis work is an innovative application of deep neural networks to a long-standing question in visual neuroscience. While I have some questions about the analyses and conclusions, I think that the paper is interesting and of high quality.\\n\\nMy main concern is that the authors only show single examples, without quantification, for some main results (RF structure). For example, for Fig. 2A and 2B, an orientation selectivity index should be shown for all neurons. A similar population analysis should be devised for Fig 2C, e.g. like Fig 3 in [1]\", \"minor_comments\": \"1. Page 4: \\u201cThese results suggest that the key constraint ... might be the dimensionality bottleneck..\\u201d: The analyses only show that the bottleneck is *sufficient* to explain the differences, but \\u201cthe key constraint\\u201d also implies *necessity*. Either soften the claim or provide control experiments showing that alternative hypotheses (constraint on firing rate etc.) cannot explain this result in your model.\\n\\n2. I don\\u2019t understand most of the arguments about \\u201ccell types\\u201d (e.g. Fig. 2F and elsewhere). In neuroscience, \\u201ccell types\\u201d usually refers to cells with completely different connectivity constraints, e.g. excitatory vs. inhibitory cells or somatostatin vs. parvalbumin cells. But you refer to different CNN channels as different \\u201ctypes\\u201d. This seems very different than the neuroscience definition. 
CNN channels just represent different feature maps, i.e. different receptive field shapes, but not fundamentally different connectivity patterns. Therefore, I also don\\u2019t quite understand what you are trying to show with the weight-untying experiments (Fig. 2E/F).\\n\\n3. It is not clear to me what Fig. 3B and the associated paragraph are trying to show. What are the implications of the nonlinearity being due to the first or second stage? \\n\\n4. Comment on Fig 3F: The center-surround RFs probably implement a whitening transform (which is linear). Whitened inputs can probably be represented more efficiently in a network trained with L2-regularization and/or SGD. This might explain why the \\u201cquasi-linear\\u201d retina improves separability later-on.\\n\\n[1] Cossell, Lee, Maria Florencia Iacaruso, Dylan R. Muir, Rachael Houlton, Elie N. Sader, Ho Ko, Sonja B. Hofer, and Thomas D. Mrsic-Flogel. \\u201cFunctional Organization of Excitatory Synaptic Strength in Primary Visual Cortex.\\u201d Nature 518, no. 7539 (February 19, 2015): 399\\u2013403. https://doi.org/10.1038/nature14182.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
B1ethsR9Ym | Look Ma, No GANs! Image Transformation with ModifAE | [
"Chad Atalla",
"Bartholomew Tam",
"Amanda Song",
"Gary Cottrell"
] | Existing methods of image to image translation require multiple steps in the training or modification process, and suffer from either an inability to generalize, or long training times. These methods also focus on binary trait modification, ignoring continuous traits. To address these problems, we propose ModifAE: a novel standalone neural network, trained exclusively on an autoencoding task, that implicitly learns to make continuous trait image modifications. As a standalone image modification network, ModifAE requires fewer parameters and less time to train than existing models. We empirically show that ModifAE produces significantly more convincing and more consistent continuous face trait modifications than the previous state-of-the-art model. | [
"Computer Vision",
"Deep Learning",
"Autoencoder",
"GAN",
"Image Modification",
"Social Traits",
"Social Psychology"
] | https://openreview.net/pdf?id=B1ethsR9Ym | https://openreview.net/forum?id=B1ethsR9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1x94rMxeE",
"r1xLJAL4aX",
"BJl7SNponX",
"HkghQKgonX"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544721714060,
1541856733686,
1541293114535,
1541241124050
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper733/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper733/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper733/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper733/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- The paper tackles an interesting and relevant problem for ICLR: guided image modification of images (in this case of facial attributes).\\n- The proposed method is in general well-explained (although some details are lacking)\\n \\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- The training set of faces and associated attributes were annotated using a pre-trained model which introduced a bias into the annotations used for training the method.\\n- The experimental results weren't convincing. The qualitative results showed no clear advantage of the proposed method and the quantitative comparison to StarGAN only considered two attribute manipulations and only found a statistically significant different in performance for one of those.\\nThe second weakness was the key determining factor in the AC's final recommendation. \\n\\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nThere were no major points of contention and no author feedback.\\n \\n4. If consensus was reached, say so. 
Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nThe reviewers reached a consensus that the paper should be rejected.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting problem, but experimental results aren't promising\"}",
"{\"title\": \"Authors propose a trait modification network that is trained as a standalone auto-encoder and is able to model continous trait interpolations in the latent space. The network requires less parameters than competitors and the training process less painful than other approaches based on GANs.\", \"review\": [\"The construction of the training dataset is clearly flawed by the use of an automatic algorithm that would certainly introduce a strong bias and noisy labels. Even though the dataset is supposed to encode continuous traits, the validation with human subjects is performed in a binary fashion.\", \"I miss more formality in the presentation of the methodology. Figure 3. does not seem very self-explanatory, nor does the caption. Which is the dimensionality of the input trait vector?. I assume the input would be the trait ratings predicted by the human subjects. However in the experiments training seems to be done with a maximum of two traits. This makes me wonder how the dense part of the network can handle the dimensionality blow-up to match the latent space dimensionality without suffering from overfitting. I would appreciate some disussion regarding this.\", \"While I appreciate a section reasoning why the method is supposed to work, those claims should be backed with an ablation study in the experimental section.\", \"The qualitative results show a few examples which I find very hard to evaluate due to the low-resolution of the predictions. In both traits there seems to be the same facial features modified and I can't find much difference between trustworthy and aggresssive (the labels could be swapped and I would have the same opinion on the results). 
I miss additional trait examples that would make clearer if the network is learning something besides generating serious and happy faces.\", \"The qualitative comparison with StarGAN seems unfair, as if one checks the original paper their results are certainly more impressive than what Figure 5 shows.\", \"The authors show only two traits in the experiments which makes me a bit suspicious about the performance of the network with the rest of traits. The training datset considers up to 40 traits.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Modify social attributes on face images results here in low-quality images\", \"review\": \"The paper is about changing the attributes of a face image to let it look more aggressive, trustworthy etc. by means of a standalone autoencoder (named ModifAE). The approach is weak starting from the construction of the training set. Since continue social attributes on face images does not exist yet, CelebA dataset is judged by Song et al. (2017) with continuous face ratings and use the predicted ratings to train ModifAE. This obviously introduces a bias driven by the source regression model. The hourglass model is clearly explained. The experiments are not communicating: the to qualitative examples are not the best showcase for the attributes into play (attractive, emotional), and requires to severely magnify the pdf to spot something. This obviously show the Achille\\u2019s heel of these works, i.e., working with miniature images. Figure 5, personally, is about who among modifAE and stargan does less bad, since the resulting images are of low quality (the last row speaks loud about that)\\nQuantitative results are really speaking user tests, so I will cal it as they are, user tests. They work only on two attributes, and show a reasonable advantage over stargan only for one attribute.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"an autocoding model as an alternative to GANs for continuous trait image modifications\", \"review\": \"Overview and contributions: The authors propose the ModifAE model that is based on an autoencoder neural network for continuous trait image modifications. ModifAE requires fewer parameters and less time to train than existing generative models. The authors also present experiments to show that ModifAE produces more convincing and more consistent continuous face trait modifications than the current baselines.\", \"strengths\": \"1. Nice presentation of the model.\\n2. Good experiments to justify improved running time and fewer number of parameters.\", \"weaknesses\": \"1. I am not completely convinced by the results in Figure 4. It doesn't seem like the model is able to pick up on subtle facial expressions and generate them in a flexible manner. In fact, the images look very very similar regardless of the value of the traits. Furthermore, the authors claim that \\\"In general, as she becomes more emotional, her smile increases, and as she is made more attractive, her smile increases as well, as smiling subjects are judged as more attractive\\\". I believe attractiveness and emotions are much more diverse and idiosyncratic than just the size of her smile...\\n2. From Figure 5 it seems like ModifAE generates images that are lower in quality as compared to StarGAN. Can the authors comment on this point? How can ModifAE be improved to generate higher-quality images?\", \"questions_to_authors\": \"1. Weakness points 1 and 2.\\n2. This did not affect my rating, but I am slightly concerned by the labelings as seen in Figure 1. Is it reasonable to infer traits like \\\"trustworthy\\\", \\\"attractive\\\", \\\"aggressive\\\", \\\"responsible\\\" from images? Are these traits really what we should be classifying people's faces as, and are there any possible undesirable/sensitive biases from the dataset that our models could learn? 
I would like to hear the author's opinions on the ethical implications of these datasets and models. \\n\\nPresentation improvements, typos, edits, style, missing references:\\nNone\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
Syxt2jC5FX | From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference | [
"Randall Balestriero",
"Richard Baraniuk"
] | Nonlinearity is crucial to the performance of a deep (neural) network (DN).
To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the r\^{o}le played by piecewise affine and convex nonlinearities like the ReLU and absolute value activation functions and max-pooling.
In particular, DN layers constructed from these operations can be interpreted as {\em max-affine spline operators} (MASOs) that have an elegant link to vector quantization (VQ) and $K$-means.
While this is good theoretical progress, the entire MASO approach is predicated on the requirement that the nonlinearities be piecewise affine and convex, which precludes important activation functions like the sigmoid, hyperbolic tangent, and softmax.
{\em This paper extends the MASO framework to these and an infinitely large class of new nonlinearities by linking deterministic MASOs with probabilistic Gaussian Mixture Models (GMMs).}
We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural ``hard'' VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding ``soft'' VQ inference problems.
We further extend the framework by hybridizing the hard and soft VQ optimizations to create a $\beta$-VQ inference that interpolates between hard, soft, and linear VQ inference.
A prime example of a $\beta$-VQ DN nonlinearity is the {\em swish} nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was developed ad hoc by experimentation.
Finally, we validate with experiments an important assertion of our theory, namely that DN performance can be significantly improved by enforcing orthogonality in its linear filters.
| [
"Spline",
"Vector Quantization",
"Inference",
"Nonlinearities",
"Deep Network"
] | https://openreview.net/pdf?id=Syxt2jC5FX | https://openreview.net/forum?id=Syxt2jC5FX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xoBMnkl4",
"HylVtNN767",
"HkxSzkX767",
"SJetq0GQ67",
"HygJDAvzp7",
"SJxaGCDMTX",
"HyebQg_c2m",
"SylBlmHqhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544696386755,
1541780604088,
1541775116824,
1541774992721,
1541729878595,
1541729812727,
1541206040825,
1541194476727
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper732/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper732/Authors"
],
[
"ICLR.cc/2019/Conference/Paper732/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper732/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper732/Authors"
],
[
"ICLR.cc/2019/Conference/Paper732/Authors"
],
[
"ICLR.cc/2019/Conference/Paper732/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper732/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Dear authors,\\n\\nAll reviewers liked your work. However, they also noted that the paper was hard to read, whether because of the notation or the lack of visualization.\\n\\nI strongly encourage you to spend the extra effort making your work more accessible for the final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Nice piece of work\"}",
"{\"title\": \"Answer to Reviewer1\", \"comment\": \"We thank the reviewer for their constructive comments. We agree that our soft-VQ extension is an important piece of the puzzle that is necessary to ensure a solid foundation of the 'MASO-view' of deep neural networks.\\n\\nRegarding the clarity of presentation, we agree that our streamlined treatment of the MASO background, while self-contained, is quite terse. The reason is very short page limit allowed for the submission. We hope that the reader will find our new results compelling enough that they will refer to [1] for additional background information and insights.\\n\\nRegarding the experiments and visualization, we also had to make hard choices due to space limitations. We decided that repeating visualizations from [1] using a Soft-VQ partitioning would be less useful than a detailed derivation and explanation of the internal Hard/Soft/Beta-VQ processes and how they lead to new nonlinearities. We certainly plan to include many more visualizations in our conference presentation, should the paper be accepted.\\n\\nFinally, we feel that our extension of the deterministic MASO framework is more than incremental, since it opens the door to a range of new applications, improvements, and theoretical questions that go far beyond the scope of [1]. For some examples, please see our reply to Reviewer 3.\\n\\n[1] Mad Max: Affine Spline Insights into Deep Learning https://arxiv.org/abs/1805.06576\"}",
"{\"title\": \"Adding cite for [1]\", \"comment\": \"[1] Mad Max: Affine Spline Insights into Deep Learning https://arxiv.org/abs/1805.06576\"}",
"{\"title\": \"Logical continuation of existing work\", \"review\": \"At the core of this paper is the insight from [1] that a neural network layer constructed from a combination of linear, piecewise affine and convex operators can be interpreted as a max-affine spline operator (MASO). MASOs are directly connected to vector quantization (VQ) and K-means clustering, which means that a deep network implicitly constructs a hierarchical clustering of the training data during learning. This paper now substitutes VQ with probabilistic clustering models (GMMs) and extends the MASO interpretation of a wider range of possible operations in deep neural networks (sigmoidal activation functions, etc.).\\n\\nGiven the detailed treatment of MASOs in [1], this paper is a logical continuation of this approach. As such, it may seem only incremental, but I would consider it as an important piece to ensure a solid foundation of the 'MASO-view' on deep neural networks.\\n\\nMy main criticism is with respect to the quality and clarity of the presentation. Without reading in detail [1] it is very difficult to understand the presented work here. Moreover, compared to [1], a lot of explanatory content is missing, e.g. [1] had nice visualisations of the resulting partitioning on toy data.\\n\\nClearly, this work and [1] belong together in a larger form (e.g. a journal article), I hope that this is considered by the authors.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Answer to Reviewer2\", \"comment\": \"We thank the reviewer for their constructive comments. We have revised the manuscript accordingly.\\n\\nWe have simplified the background (Section 2) by removing the superfluous l (layer) superscript. This reworking clarifies the definition and operation of the MASO.\\n\\nWe have defined $[\\\\pi^{(l)}]_{k,t}$ in the early part of Theorem 2 and then derived (5).\\n\\nThe assumption on the bias value needed for Proposition 1 is indeed only needed for that particular result. We have highlighted this (i) in the second paragraph following Proposition 1 and (ii) in the sentence immediately after Theorem 2.\\n\\nWe have highlighted that Proposition 2 is a standard result (and added references) and motivated its presence in order to unify all the different VQs under a single optimization problem. Adding an Entropy regularization to the original optimization problem then enables us to interpolate between hard,soft and linear VQ.\"}",
"{\"title\": \"Answer to Reviewer3\", \"comment\": \"We thank the reviewer for their constructive comments. We respond to each below in detail.\\n\\nTECHNICAL CONTRIBUTIONS\\nWe briefly review our four primary technical contributions.\\n[C1] We extend the deterministic max-affine spline operator (MASO) framework for deep networks (DNs) developed in (Balestriero & Baraniuk, ICML2018) to a probabilistic Gaussian mixture model (GMM). \\n[C2] We extend the deterministic vector quantization (VQ) spline partition of the MASO framework to a probabilistic, soft VQ that enables us to derive from first principles and unify most of the known DN nonlinearities, including nonlinear and nonconvex ones such as the softmax and sigmoid gated linear unit.\\n[C3] By interpolating between hard and soft inference, we derive a new class of beta-VQ activation functions. In particular, a beta-VQ version of the hard ReLU activation is the \\u201cSwish\\u201d nonlinearity, which offers state-of-the-art performance in a range of computer vision tasks but was proposed ad hoc through experimentation.\\n[C4] We rigorously prove that orthogonal filters endow a DN with an attractive inference capability. Orthogonal filters enable a DN to perform efficient, tractable, jointly optimal VQ inference across all units in a layer. This is in contrast to non-orthogonal DNs, which support optimal VQ only on a per-unit basis. Previous works have studied orthogonality only empirically.\\n\\nORTHOGONALIZATION\\nAs noted in contribution [4] above, orthogonalization has already been applied in deep learning, but it has typically been applied ad hoc with little to no theoretical justification. In our paper, we have justified orthogonalization from the novel point of view of inferring the VQ partition of each of the unit outputs in a DN layer. In a standard DN, each unit output computation is performed independently from the other units. 
This absence of \\u2018\\u2019lateral connections\\u2019\\u2019 can lead to two problematic situations: on the one hand redundant information in a feature map or on the other hand incomplete representation of the input. We demonstrate that an elegant solution to both problems is to enforce orthogonality. We have added a statement after Theorem 4 regarding how orthogonalization has potential applications of independent interest outside of deep learning, for example in factorial GMMs and HMMs.\\n\\nFUTURE DIRECTIONS AND DISCUSSIONS\\nWe agree with the reviewer that our hard/soft VQ perspective opens up many new directions to both understand and improve DNs. Here are several new directions that we could discuss further in the revised paper or at the conference:\\n[F1] VQ penalization: Given our explicit (and differentiable) formulas for the soft VQ, we can derive new kinds of penalties to apply during learning. For example, we could penalize an overconfident-VQ (as measured by the joint likelihood of the unit VQ representation of the layer input), which is symptomatic of over-fitting.\\n[F2] Leaning new activation functions: The state-of-the-art Swish nonlinearity has learnable parameter that enables it to range from ReLU to sigmoid gated linear unit to linear. We can further augment this parametrization to enable us to reach the sigmoid unit as well. This will enable us to use learning experiments to investigate the conjecture that ReLU like nonlinearities are best for early DN layers while sigmoid-like nonlinearities are best for later layers.\\n[F3] We can use the VQ and the per-unit VQ-based likelihood to create DNs that detect outliers and perform model selection.\\n[F4] Alternative soft-VQ regularization: Replacing the Shannon Entropy regularization in (7) with a different penalty could yield new classes of nonlinear activation functions.\"}",
"{\"title\": \"Somewhat incremental work, but well posited and written.\", \"review\": \"This work extends the applicability of the spline theory of deep networks explored in previous works of Balestriero/ Baraniuk. The previous works setup DNs as layer-wise max-affine spline operators (MASOs) and recovers several non-linearities practically used as special cases of these MASOs. The previous works already recover RELU variants and some downsampling operators that the current submission characterizes as \\\"hard\\\" quantization.\\n\\nThe major contribution of this work is extending the application to \\\"soft\\\" quantization that recovers several new non-linear activations such as soft-max. It is well-known that the k-means algorithm can be considered as a run of an EM algorithm to recover the mean parameters of a gaussian mixture model. The \\\"hard\\\" to \\\"soft\\\" transformation, and any interpolation in between follows from combining this insight with the previous works. As such there isnt a major technical contribution imho in this work. Furthermore, the presented orthogonalization for easier inference has been used before in many works, some of which this submission also cites, most importantly in the previous work of Balestriero/ Baraniuk that this submission extends. \\n\\nNevertheless there is value in novel results that may follow from previous works in a straightforward but non-trivial fashion, as long as it is well-presented and thoroughly researched and implication well-highlighted. This paper does that adequately, so I will suggest weak accept. Furthermore, this work could spark interesting future works and fruitful discussions at the ICLR. It is well-written and the experimental evaluation is adequate.\\n\\nI would suggest a couple of ways to possibly improve the exposition. The paper is somewhat notation heavy. When considering single layers, the superscript for the layer could be dropped in favor of clarity. 
I would suggest moving the definition of MASOs to the main text, and present Proposition 8 in some form in the main text as well. To a reader not familiar with previous works, or with splines, this could be helpful. Use of orthogonalization could be highlighted not just a tool for tractability but also regularization. For inference on GMMs, it corresponds to a type of variational inference, which could be mentioned.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper extends the max-affine spline operator (MISO) interpretation of a class of deep neural networks to cover a wider class of activation functions, namely the sigmoid, hyperbolic tangent and softmax. The authors also use the formulation to create a family of models that interpolates between hard and soft non-linearities.\", \"review\": \"Interesting work, extending previous work by Balestriero and Baraniuk in a relevant and non-trivial direction. The presentation could be cleaner and clearer,\\n\\nThe paper contains solid work and contributes to an interesting perspective/interpretation of deep networks. The presentation is reasonably clear, although somewhat cluttered by a large number of subscripts and superscripts, which could be avoided by using a more modular formulation; e.g., in equation (1), when referring to a specific layer l, the superscript l can be dropped as it adds no useful information. By the way, when l is first used, just before equation (1), it is undefined, although the reader can guess what it stands for.\\n\\nIt is not clear why $[\\\\pi^{(l)}]_{k,t}$ is defined after equation (5), as these quantities are not mentioned in Theorem 2. Another confusion issue is that it is not clear if the assumption made in Proposition 1 concerning is only valid there of if it is assued to hold elsewhere in the paper.\\n\\nProposition 2 is simply a statement of the well-known relationship between between soft-max (a.k.a. logistic regression) and the maximum entropy principle (see, for example, http://www.win-vector.com/dfiles/LogisticRegressionMaxEnt.pdf).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
SkeK3s0qKQ | Episodic Curiosity through Reachability | [
"Nikolay Savinov",
"Anton Raichuk",
"Damien Vincent",
"Raphael Marinier",
"Marc Pollefeys",
"Timothy Lillicrap",
"Sylvain Gelly"
] | Rewards are sparse in the real world and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. The code is available at https://github.com/google-research/episodic-curiosity/. | [
"deep learning",
"reinforcement learning",
"curiosity",
"exploration",
"episodic memory"
] | https://openreview.net/pdf?id=SkeK3s0qKQ | https://openreview.net/forum?id=SkeK3s0qKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJxVRzyWgE",
"SygYM7vnCm",
"Skg-kmPhRm",
"HygF5-whA7",
"B1g0V-P20X",
"BkxDl6b9R7",
"HyeN9kl9CQ",
"HkeNioTSR7",
"Bylb69aHRQ",
"BkegueeB0m",
"HJeOyO8XRX",
"SkefzFUvpm",
"B1eCNi44T7",
"Skl0AOEVa7",
"r1lgD_E46Q",
"S1gFJ_4Npm",
"SygAOUV4pm",
"rJgv2u0kTX",
"Skl3TfOkpQ",
"rJeSKyuo3Q",
"H1e1YGI9h7",
"rJgSKGHcnm",
"rkxvwZU1nX",
"H1leA4f127"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544774347763,
1543430929092,
1543430872711,
1543430545255,
1543430453684,
1543277807322,
1543270283867,
1542998939812,
1542998712589,
1542942824186,
1542838240188,
1542052105767,
1541847861666,
1541847254291,
1541847127567,
1541847009307,
1541846646141,
1541560495110,
1541534404110,
1541271421448,
1541198454869,
1541194365065,
1540477278861,
1540461767525
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper731/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper731/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper731/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The authors present a novel method for tackling exploration and exploitation that yields promising results on some hard navigation-like domains. The reviewers were impressed by the contribution and had some suggestions for improvement that should be addressed in the camera ready version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting idea with relevance to some common settings\"}",
"{\"title\": \"Authors' response [part 2/2]\", \"comment\": \"> The pre-training of the R-network is concerning, but you have already responded with preliminary results.\\n\\nNow we have the final results for the online training of R-network, please take a look at the updated Table 1, \\\"PPO + ECO\\\" line. The improvement is quite significant. Moreover, we gained an improvement not only in sparse-reward tasks, but also in dense-reward ones. We hypothesise that our online curiosity model creates less contradiction between the actual task and being curious. This is maybe because the training data for R-network is sampled from the current policy which is solving the task. On the other hand, it is different for Oracle: there is no adaptation for the task at hand, so there is potentially more contradiction. In both Dense tasks \\\"PPO + ECO\\\" outperforms Grid Oracle (and plain PPO).\\n\\n> I do share some of the concerns other reviewers have brought up about generality beyond navigation tasks\\n\\nWe have added a new experiment in the domain of learning locomotion out of curiosity. Please take a look at the supplementary section S1. The results look encouraging: a Mujoco ant learns to walk purely from maximizing our curiosity reward which is computed from the first-person view of the ant. Here are some videos: https://youtu.be/OYF9UcnEbQA (third-person view, for visualization only), https://youtu.be/klpDUdkv03k (first-person view, used by the curiosity module). We also have a comparison to some simple baselines in that section. Additionally, we have run an experiment with very sparse task reward: the ant is rewarded for escaping a large circle. In that task, our method also significantly outperforms the baselines.\"}",
"{\"title\": \"Authors\\u2019 response [part 1/2]\", \"comment\": \"We thank the reviewer for their work on the review. Please note that the paper has just been updated as per the request from AR2, here is an anonymous link to the updated paper: https://drive.google.com/open?id=1tUHfBwWWu6W2zuk-De0AyYWxuNQTWa0D . The reviewer\\u2019s questions are addressed below.\\n\\n> Won\\u2019t the shortest-path distance of the next observation always be 1 because it is immediately following the current step, and thus this results in a constant bonus?\\n\\nIn Section 2.2 we introduce a novelty threshold b_novelty for entering the memory buffer which addresses exactly this problem. This threshold implicitly discretizes the embedding space: only sufficiently novel observations are rewarded with a bonus. Very likely, if the current observation is in memory, the next one won't be considered novel. Not only the next observation won't receive a large bonus, it also won't enter the memory buffer. As time passes by, the agent will go further and further away from observations in memory, the reward bonus will increase and at some point exceed b_novelty -- and only then the observation will enter the buffer. Of course, right after that the reward will drop again. Please take a look at this reward visualization: https://youtu.be/mphIRR6VsbM\\n\\n> It would perhaps make more sense if you used a different aggregation, such average, in which case you would be giving bonuses to observations that are farther away from the past on average.\\n\\nWe have tried average in the past, it did not work well. The reason is probably that the average is not robust to outliers -- which are abundant as the visual similarity can't be perfect. \\n\\n> Also, while eventually this idea makes sense, it only makes sense within a single episode. 
If you clear the memory between episodes, then you are relying on some natural stochasticity of the algorithm to avoid revisiting the same states as in the previous episode. Otherwise, it seems like there is not much to be gained from actually resetting and starting a new episode; it would encourage more exploration to just continue the same episode, or not clear memory when starting a new episode.\\n\\nThe typical goal of RL is to maximize the reward throughout the current episode. The information from other episodes might be coming from a completely different environment/maze (unless you make an assumption that it is the same environment in every episode). If you visited some places in one maze, how would it help you to determine novelty of the current observation in another maze?\\n\\n> Section 2.2: You say you have a novelty threshold of 0 in practice, doesn\\u2019t this mean you end up always adding new observations to the memory?\\n\\nAs we have an additive factor beta = 0.5, the bonus b ends up in interval [-alpha/2, alpha/2]. Thus b_novelty = 0 is the middle of this interval and not all observations end up in the memory.\\n\\n> I do think you should rework your intuition. It seems to me what you are actually doing is creating some sort of implicit discretization of the observation space, and rewarding observations that you have not seen before under this discretization.\\n\\nThis is exactly what we are doing -- by introducing b_novelty we implicitly discretize the embedding space.\\n\\n> I like your grid oracle, as it acts as a baseline for using PPO and provides a point of reference for how well an exploration bonus could potentially be. But why aren\\u2019t grid oracle results put into your graphs? 
Your results look good and are very promising.\\n\\nAt the time of the submission, we didn\\u2019t include Grid-Oracle into the plots because otherwise it was hard to see the difference between the comparable methods (note that it is not fair to compare Oracle with them). In the latest version of our paper, we included it for all DMLab curves.\"}",
"{\"title\": \"Paper updated\", \"comment\": \"Thank you for your help in improving our research! We have performed the paper update (the log is written in a separate message addressed to all the reviewers). We also added one more experiment with the MuJoCo Ant. Here is an anonymous link to the updated paper: https://drive.google.com/open?id=1tUHfBwWWu6W2zuk-De0AyYWxuNQTWa0D .\"}",
"{\"title\": \"Paper updated\", \"comment\": \"As requested by AR2, an update has been performed on the paper. As the system was already closed by the time we have received the request, here is an anonymous link to the updated paper: https://drive.google.com/open?id=1tUHfBwWWu6W2zuk-De0AyYWxuNQTWa0D .\", \"the_approximate_log_of_the_update\": \"1. Introduced online training into the main text of the paper, reported all final 20M-step DMLab results for it.\\n2. Announced Mujoco Ant experiments in the abstract, introduction, experimental setup section in the main paper. Added those experiments as the first section of the supplementary material (as there is no more space for them in the main text). We also conducted an additional experiment with very sparse reward -- our method shows good results there as well.\\n3. Added reward and memory state visualization video to the main text.\\n4. Removed the inspiration for R-network training.\\n5. Updated all experiments in the supplementary till the final 20M steps of training. We also added an ablation study which substitutes the Comparator network with a simpler function and establishes that the Comparator network is an essential part of our approach.\\n6. Added computational considerations section to the supplementary.\\n7. Updated DMLab curves with the online version of our algorithm and added Grid Oracle curves as well.\\n8. Added randomized TV experiments to the supplementary.\\n\\nDear reviewers, please note that this is a partial update because of the time constraints -- but we will perform all the other updates (which are more minor than the ones above) that we have promised in the camera-ready version.\"}",
"{\"title\": \"Thanks for your hard work\", \"comment\": \"Thank you for your thorough response to my review!\\n\\nThe experiments on online training of the R-network are very encouraging and I'm very glad that this resulted in improvements in performance. \\n\\nThe extra MuJoCo ant locomotion experiments are interesting and I'm very much looking forward to reading the updated paper and seeing the final results of training in this task.\\n\\nI just want to point out that I'm very impressed by all the efforts made by the authors to address the comments raised in my review. They went above and beyond expected work in this rebuttal period!\\n\\nI believe the final version of the paper will be significantly stronger than the submitted version. \\nHence, I'm happy to increase my score to 8 after seeing the revised version of the manuscript.\"}",
"{\"title\": \"Great idea, promising results, some confusing text\", \"review\": \"This paper proposes a new method to give exploration bonuses in RL algorithms by giving larger bonuses to observations that are farther away (> k) in environment steps to past observations in the current episode, encouraging the agent to visit observations farther away. This is in contrast to existing exploration bonuses based on prediction gain or prediction error, which do not work properly for stochastic transitions.\\n\\nOverall, I very much like the idea, but I found many little pieces of confusing explanations that could be further clarified, and also some questionable implementation details. However the experimental results are very promising, and the approach should be modular and slotable into existing deep RL methods.\", \"section_introduction\": \"I\\u2019m confused by how you can define such a bonus if the memory is the current episode. Won\\u2019t the shortest-path distance of the next observation always be 1 because it is immediately following the current step, and thus this results in a constant bonus? You explain how you get around this in practice, but intuitively and from a high-level, this idea does not make sense. It would perhaps make more sense if you used a different aggregation, such average, in which case you would be giving bonuses to observations that are farther away from the past on average.\\n\\nAlso, while eventually this idea makes sense, it only makes sense within a single episode. If you clear the memory between episodes, then you are relying on some natural stochasticity of the algorithm to avoid revisiting the same states as in the previous episode. 
Otherwise, it seems like there is not much to be gained from actually resetting and starting a new episode; it would encourage more exploration to just continue the same episode, or not clear memory when starting a new episode.\\n\\nSection 2.2: You say you have a novelty threshold of 0 in practice, doesn\\u2019t this mean you end up always adding new observations to the memory? In this case, then it seems like your aggregation method of taking the 90th percentile is really the only mechanism that avoids the issue of always predicting a constant distance of 1 (and relying on the function approximator\\u2019s natural errors). \\n\\nI do think you should rework your intuition. It seems to me what you are actually doing is creating some sort of implicit discretization of the observation space, and rewarding observations that you have not seen before under this discretization. This is what would correspond to a shortest-path distance aggregation.\", \"experiments\": \"I like your grid oracle, as it acts as a baseline for using PPO and provides a point of reference for how well an exploration bonus could potentially be. But why aren\\u2019t grid oracle results put into your graphs? Your results look good and are very promising.\", \"other_points\": [\"The pre-training of the R-network is concerning, but you have already responded with preliminary results.\", \"I do share some of the concerns other reviewers have brought up about generality beyond navigation tasks, e.g. Atari games. To me, it seems like this method can run into difficulty when reachability is not as nice as it is in navigation tasks, for example if the decisions of the task followed a more tree-like structure. 
This also does not work well with the fact that you reset every episode, so there is nothing to encourage an agent to try different branches of the tree every episode.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Authors\\u2019 rebuttal\", \"comment\": \"To summarize, we did the following experiments for the rebuttal:\\n1. (AR2) Online training of R-network. The results improved significantly with respect to offline training.\\n2. (AR2) Experiments in other domains: learning locomotion out of first-person-view curiosity for ant in Mujoco. Preliminary results demonstrate the applicability of our approach to this task.\\n3. (AR3) Experiments in stochastic environments. For all the settings we tried, our method still works.\\n4. (AR1, AR2) Reward/memory state visualization. Those videos provide more insight for the reader about how our method works.\\n\\nWe will include the experiments and discussions into the camera-ready version of the paper.\\n\\nIf there are any remaining questions/concerns, we would be grateful if the reviewers raise them so that we could further improve our work.\"}",
"{\"title\": \"Authors\\u2019 response\", \"comment\": \"Thank you for your encouraging feedback! We will include the discussions into the camera-ready version of the paper.\"}",
"{\"title\": \"reply\", \"comment\": \"Thanks for your response. You have addressed my main concerns and have also added new results. I have increased my score as I think the paper is rather polished and well above the bar for acceptance at ICLR.\\n\\nI encourage the authors to integrate some of their discussions regarding scalability in the manuscript\"}",
"{\"title\": \"Update on experiments in other domains\", \"comment\": \"We have been able to learn Mujoco ant locomotion out of curiosity based on the first-person view: https://youtu.be/j_DToFnz9hQ (third-person view, for visualization only), https://youtu.be/8u_hbfEAo0w (first-person view, used by the curiosity module).\\n\\nFirst, let us describe the setup.\\n1. Environment: the standard mujoco environment is a plane with uniform texture on it -- nothing to be visually curious about. To fix that, we tiled the 400x400 floor into squares of size 4x4. Each tile is assigned a random texture from a set of 190 textures at the beginning of every episode. The ant is initialized at the random location of the plane. The episode lasts for 1000 steps (no action repeat is used). If the center of mass of the ant is above or below the predefined threshold -- the episode ends prematurely (standard condition which is often used).\\n2. Reward: curiosity reward for training and hyperparameter search, Grid Oracle reward for reporting results. Note: for the baselines we use Grid Oracle for hyperparameter search -- which puts them in a privileged position with respect to our method.\\n3. Observation space: for computing the curiosity reward, we only use a first-person view camera mounted on the ant (that way we can use the same architecture of our curiosity module as in DMLab). For policy, we use the standard body features from Ant-v2 in gym/mujoco (joint angles, velocities, etc.).\\n4. Action space: standard from Ant-v2 in gym/mujoco (continuous).\\n5. Basic RL solver: PPO (same as before in the paper)\\n6. Baselines: PPO on 0 reward (essentially random), PPO on constant +1 reward every step (optimizes for longer survival).\\n\\nSecond, we present preliminary results. After 6M steps a random policy gives a Grid Oracle reward of 1.42, survival-PPO gives 2.19, our method achieves 3.95. 
Qualitatively, random policy dies quickly ( https://youtu.be/WFtM8-h8jOA ), survival-PPO survives for longer but does not move much ( https://youtu.be/b9ClgXOHpqA ), our method moves around the environment (first-person view: https://youtu.be/8u_hbfEAo0w , third-person view: https://youtu.be/j_DToFnz9hQ ). Average performance for our method is good, but not all random seeds produce good performance (results above are averaged over 3 seeds). We\\u2019re currently running more seeds with the best hyperparameters discovered for each method and investigating how training stability could be improved. \\n\\nFinally, let us discuss the relation to some other works in the field of learning locomotion from intrinsic reward that we are aware of (non-exhaustive and preliminary list). The closest work in terms of task setup is this concurrent ICLR submission https://openreview.net/forum?id=rJNwDjAqYX . The authors demonstrate slow motion of the ant ( https://youtu.be/l1FqtAHfJLI?t=90 ) learnt from pixel-based curiosity only. Other works use state features (like joint angles etc.) for formulating intrinsic reward, not pixels -- which is a different setup. One work in this direction is another concurrent ICLR submission https://openreview.net/forum?id=SJx63jRqFm .\\n\\nWe hope this experiment addresses the valid concerns of the reviewer and demonstrates the generality of our method. We will include this experiment and the discussion into the camera-ready version of the paper.\"}",
"{\"title\": \"Update on online training\", \"comment\": \"After tuning online training of R-network, we obtained significantly improved results with respect to offline training: reward 26 -> 42 on Sparse in DMLab, reward 25 -> 41 on VerySparse in DMLab, reward 9 -> 20 on Sparse+Doors in DMLab. Also, results look qualitatively better now: offline training bumps into the walls quite often https://youtu.be/C5g10cUl7Ew -> online training almost doesn\\u2019t bump into the walls at all https://youtu.be/d2KiaWIJgfU.\\n\\nThus, the experimental results justify that collecting data from a policy is ultimately a better way to train the R-network (probably because randomly visited states may be very unbalanced relative to what an agent actually encounters).\\n\\nAlthough the online training experiment was on our roadmap, we would like to thank the reviewer for motivating us to do it sooner rather than later! We will include the online training experiments in the paper.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for their work on the review.\\n\\n> It would be good if the authors could further elaborate on the scalability of the method in terms of compute/memory requirements\\n\\nThe most computationally intensive parts of the algorithm are the memory reachability queries. Reachabilities to past memories are computed in parallel via mini-batching. We have shown the algorithm to work reasonably fast with a memory size of 200. For significantly larger memory sizes, one would need to better parallelize reachability computations -- which should be doable. Memory consumption for the stored memories is very modest (400 KB), as we only store 200 of 512-float-embeddings, not the observations.\\n\\n> and related to that if the implementation is cumbersome.\\n\\nThe implementation is relatively easy. We commit to publishing the source code if the paper is accepted -- this would make adoption even easier.\\n\\n> I didn\\u2019t understand well how the method avoids the issue of old memories leaving the buffer.\\n\\nForgetting is unavoidable if the storage size is limited. That said, not all old memories are erased. The distribution of memory age is geometric: so older memories are sparser than the recent ones, but still present. Please see our visualization of the memory state: https://youtu.be/mphIRR6VsbM. Please note that we denote memories by their location only for visualization purposes, the coordinates are not available to our method.\\n\\n> It seems for a large enough environment important observations will eventually become discarded causing a poor approximation of the curiosity bonus?\", \"this_is_true\": \"when revisiting a part of state space that the agent hasn\\u2019t been to for a long time, many of the memories from that region may have been discarded and the curiosity bonus may offer more reward for returning to these states. 
This should not be a problem though: the curiosity bonus would still provide some reactive incentive to move away from recent memories -- because recent memories are always well-represented. And, it is possible that it would be good to incentivise visiting states that haven\\u2019t been seen in a long time.\\n\\n> For the large scale experiments I would like to know more rough details of the number of the compute time needed for the method relative to the PPO baseline and the other baseline (e.g. number of nodes for example and how long they run approximately)\\n\\nPPO+ICM is 1.09x slower than PPO and PPO+EC (our method) is 1.84x slower than PPO. As for the number of parameters, R-network brings 13M trainable variables, while PPO alone was 1.7M and PPO+ICM was 2M. That said, we have spent almost no effort optimizing it in terms of speed/parameters, so it is likely easy to make improvements in this respect. It\\u2019s quite likely that we do not need a Resnet-18 for the R-network -- a much simpler model may work as well. In this paper, we just followed the setup for the R-network from prior work https://arxiv.org/abs/1803.00653 because it was shown to perform well, but there is no evidence that this setup is necessary. \\n\\n> Are there any potential issues with adapting the method on 2D environments like Atari? this could permit direct comparisons with several other recently proposed techniques in this area.\\n\\nWe haven't tried it for Atari, so it is hard to predict. That said, we try to focus on more visually complex environments. In Atari, there is always a danger that the method would exploit exact observation repeatability. One recent work https://arxiv.org/pdf/1606.04460.pdf estimated this repeatability to reach 60% in some games, and > 10% in many. This creates a dangerous incentive for the exploration algorithms to brute-force this vulnerability. 
On the other hand, in DMLab, such repeatability was estimated by the same work as < 0.1%.\\n\\n> The Grid-Oracle result is very interesting and a contribution on it\\u2019s own\\u2026 I think if possible it would be interesting to have an idea how fast this method converges (number of training steps)\\n\\nWe don\\u2019t include Grid-Oracle into the plots because otherwise it is hard to see the difference between the comparable methods (note that it is not fair to compare Oracle with them). That said, Oracle converges faster than any other method -- but requires privileged information. To give specific numbers, after 5M 4-repeated steps Grid-Oracle reaches approximately reward 40 in the \\\"Sparse\\\" environment, reward 35 in the \\\"Very Sparse\\\" environment and reward 20 in the \\\"Sparse+Doors\\\" environment. This is way higher than any other method in our study. We will include those numbers into the manuscript.\\n\\n> For some applications (e.g. aimed at sim-to-real transfer) the grid-oracle approach might be a good alternative to consider.\\n\\nThe Oracle could be useful in situations where additional information is available about the environment. However, it is not universal, so we have not focused on the possibility of taking advantage of privileged information in the current manuscript.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"We thank the reviewer for their work on the review and their clarification answer.\\n\\nFirst, we would like to point out that the stochastic environments are not a focus of our work. Couch-potato behaviour is not unique to stochastic environments. As we show in our experiments, perfectly normal deterministic environments could lead to such behaviour: partial observability or just hardness of future prediction can confuse the surprise-based ICM method (which was chosen as a baseline because it showed state-of-the-art results in visually-rich 3D environment ViZDoom in the prior work https://arxiv.org/pdf/1705.05363.pdf). The randomized TV example is explicitly labeled as a \\\"thought example\\\" in our paper. It was chosen for the sake of illustration. We will clarify this in the paper.\\n\\nSecond, our method can work in an environment where \\\"all the states provide stochastic next state\\\". We created a version of our \\u201cVerySparse\\u201d environment with a randomized TV on the head-on display. More precisely, the lower right quadrant of the first-person view is occupied with an image from a set of 10 images. The change of the image on the TV is initiated by an additional action, provided to the agent. Using this action leads to a random image from the set to be shown on the TV. Our method still works in this setting: https://youtu.be/UhF1MmusIU4 (preliminary result computed at 7M 4-repeated steps). Additionally, we tried showing random noise on the TV screen at every step -- and our method works there as well: https://youtu.be/4B8VkPA2Mdw.\"}",
"{\"title\": \"Authors' response [part 2/2]\", \"comment\": \"> The architecture does not include an RNN which makes certain things very surprising even though they shouldn't (e.g. firing, or moving around a corner, are specifically surprising for ICM) as they cannot be learnt, but perhaps if they had an RNN in the architecture these would be easy to explain? Would be interesting to see what are the authors thoughts on this (apart from their computational complexity argument they mention)?\\n\\nWe agree it would be interesting to include an RNN into the architecture. As the reviewer mentions, doing so may help with certain kinds of surprising events. This would be worth exploring, but for interpretability of results and connection with past literature we have focused on feedforward architectures. An RNN was not a part of the original ICM approach to computing the reward bonus: the next-state prediction was done based on a few recent frames. In that sense, we followed the reference implementation -- which, as we verify, reproduces the published results. In the follow-up https://pathak22.github.io/large-scale-curiosity/resources/largeScaleCuriosity2018.pdf, the authors didn\\u2019t use an RNN in the policy either (personal communication with the authors). \\n\\n> Having the memory contain only information about the current episode with no information transfer between episodes seems a bit strange to me, I would like to hear the motivation behind this?\\n\\nThe typical goal of RL is to maximize the reward throughout the current episode. The information from other episodes might be coming from a completely different environment/maze (unless you make an assumption that it is the same environment in every episode). 
If you visited some places in one maze, how would it help you to determine novelty of the current observation in another maze?\\n\\n> The fact that the memory is reset between episodes, and that the buffer is small, can mean that effectively the method implements some sort of complex pseudo count over meta-states per episode?\\n\\nYes, it might be possible to understand the approach in this way. To gain a better understanding of how it works in practice, we created a visualization of the rewards, memory states and the trajectory of the agent during the episode. Please take a look here: https://youtu.be/mphIRR6VsbM. The distribution of states in memory is geometric: older memories are sparser but some are still there. This is enough to learn a reasonable exploration strategy in our environments.\\n\\n> The embedding network is only trained during the pre-training phase and frozen during the RL task. This sounds a bit limiting to me: what if the agent starts exploring part of the space that was not covered during pre-training? Obviously this could lead to collapses when allowing to fine-tune it, but I feel this is rather restrictive. Again, I feel that the choice of navigation tasks did not magnify this problem, which would arise more in harder exploration tasks.\\n\\nPlease see our comment about online training and generalization above. We did not observe collapses in our new online training experiments, nor in most of our generalization experiments.\\n\\n> I think that alluding that their method is similar to babies\\u2019 behaviour in their cradle is stretched at best and not a constructive way to motivate their work\\u2026\\n\\nWe will remove this inspiration.\\n\\n> In Figure 6 and 7, all individual curves from each seed run are shown, which is a bit distracting. Perhaps showing the mean and std would be a cleaner and easier-to-interpret way to report these results?\\n\\nWe could re-do the plots if the reviewer wishes. 
However, we noticed some issues with the mean+-std kind of visualization as the distribution at each step is far from looking like a gaussian. In fact, it is clearly multimodal. For example, in Figure 6 it wouldn't be clear if mean < 1.0 means that the trained model doesn't always reach the goal or the training is unstable and some models reach the goal consistently while others fail consistently (the latter is actually the case for the baselines at some points during training).\"}",
"{\"title\": \"Authors' response [part 1/2]\", \"comment\": \"We thank the reviewer for their work on the review.\\n\\n> The tasks explored in this paper are all navigation-based tasks, would this method also apply equally successfully to non-navigation domains such as manipulation?\\n\\nThis is a great question and we are working on experiments in other domains. However, we would like to point out that the tasks we already have in the paper are both non-trivial and more visually complex than in many other works in the field of sparse-reward exploration.\\n\\n> My main concern is that the pre-training of the embedding and comparator networks directly depends on how good the random exploration policy is that collects the data. In navigation domains it makes sense that the random policy could cover the space fairly well, however, this will not be the case for more complex tasks involving more complex dynamics.\\n\\nWe agree that for more complex domains randomly collected data may be insufficient and online training of the R-network may be crucial. To address the concerns of the reviewer, we have implemented a version of our algorithm which performs training of the R-network online (together with the policy). Preliminary results indicate that such training is possible and does not collapse. It produces results at least as good as pre-training. Thus, we are able to demonstrate that our approach can function with online training, offering the possibility of functioning in domains where collection of random data may be insufficient.\\n\\nIn addition, we would like to point out that the R-network (Embedding + Comparator) can generalize beyond what was seen in the pre-training stage. We have such an experiment in the supplementary section S3 \\\"R-network generalization study\\\". In particular, in Table S4, R-networks trained on levels \\\"Dense 1\\\" and \\\"Sparse + Doors\\\" generalize to the \\\"Very Sparse\\\" environment. 
The visual gap is quite significant: please compare https://youtu.be/C5g10cUl7Ew with https://youtu.be/9J4CzdOz60I, for example. All this is possible because the R-network is solving the simple problem of comparing two observations given access to both observations at the same time.\\n\\nMoreover, in the real world, people typically hand-design the initial exploration policy even for the standard RL methods, let alone the model-based ones (and our method could be considered partially model-based). For example, please take a look at the recent work https://ai.googleblog.com/2018/06/scalable-deep-reinforcement-learning.html (which has just received the best paper award at the Conference on Robot Learning). Another recent work from ICLR\\u201918 https://openreview.net/forum?id=BkisuzWRW also uses a hand-crafted policy for the robotic manipulation task to collect data for training the inverse model of the environment.\\n\\n> It was surprising to me that the choice of k does not seem to be that important. As it implicitly defines what \\u201cnovelty\\u201d means for an environment, I would have expected that its value should be calibrated better. Could that be a function of the navigation tasks considered? \\n\\nThose values of k are still rather small. What we demonstrate in this experiment is that our method is not excessively sensitive to this parameter when it is chosen within a reasonable range.\\n\\n> The DMLab results are not great or comparable to the state-of-the-art methods, which may hinder interpreting how good the policies really are. This was perhaps a conscious choice given they are only interested in early training results, but that seems like a confound.\\n\\nAs far as we know, SOTA results are achieved by Impala https://arxiv.org/abs/1802.01561 at 1B steps (250M 4-repeated steps). We haven\\u2019t yet run our experiments at this scale: we use 20M 4-repeated steps in our PPO setup with 12 actors on GPU, which already takes 2 days to complete. 
Furthermore, being more sample efficient is an appealing property of more effective exploration, as interactions with an environment might be costly in some domains.\"}",
"{\"title\": \"Response to the reviewers\", \"comment\": \"We thank the reviewers for their work and their valuable comments. We are happy that the reviewers find our method interesting and innovative (AR1, AR2), note that the paper is well-written and easy to understand (AR1, AR3), and mention that the experiments in our work are well-executed (AR1, AR2).\\n\\nAs for the reviewers\\u2019 questions, we would like to highlight two key points (including new interesting results):\\n1. Pretraining of the R-network and its generalization: we further extended our work with online training, which is stable and gives significantly better results than the pre-trained version. Moreover, we have evidence that the R-network generalizes well beyond the areas explored during training. It even generalizes between environments. This is because it \\\"simply\\\" needs to learn to meaningfully compare observations, not \\\"recognize\\\" observations. Please see our reply to AR2 for details.\\n2. Environment stochasticity: while it is not the focus of our work, we experimented with adding a strong source of stochasticity to the environment, and our method is reasonably robust to it. Please see our reply to AR3 for details.\\n\\nWe respond in detail to each of the reviewers individually in comments to their reviews. We will work on performing the proposed experiments and updating the paper accordingly.\"}",
"{\"title\": \"Clarification\", \"comment\": \"I agree there are quite a few papers about curiosity-driven approaches, but they are mainly heuristic approaches for deterministic settings. I would like the authors to motivate and clarify why they use this approach in the stochastic setting. The problem setup (couch-potato) used to motivate the approach in this paper is not general enough. What if all the states provide a stochastic next state? Does the current method then break? The curiosity methods extend to the stochastic settings if the curiosity is derived based on distribution mismatch; if it is not, then, as the authors also mentioned, it results in the couch-potato problem.\\n\\nI agree that the authors put effort into their empirical study and showed improvement. But I am not sure the algorithmic idea behind this work provides a sufficient contribution. I am willing to change my score if the authors can address these concerns.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"> The authors miss the point why the curiosity-driven exploration approaches as in this work are of interest.\\n\\nIt would be helpful if the reviewer could be more specific here.\\n\\n> The problem mentioned in this paper can also be solved based on efficient exploration-exploitation methods where the distribution of next states is considered rather than the samples themselves.\\n\\nCould the reviewer please provide references to such methods, demonstrated on visually-rich 3D environments?\\n\\n> In curiosity-driven approaches, if the predictability of the next state is considered, all methods are sentenced to failure in stochastic environments. The approach in this paper partially mitigates this problem but for a very specific environment setup,\\n\\nViZDoom and DMLab are standard benchmarks. We used the standard action sets for those benchmarks. Could the reviewer please elaborate more on what is very specific about our environment setup? \\n\\n> but still, fails if the environment is stochastic.\\n\\nCould the reviewer please be more specific here? That is, how does the method fail in the case that the environment is stochastic?\"}",
"{\"title\": \"A simple novel idea for improving exploration in DRL\", \"review\": \"The main idea of this paper is to propose a heuristic method for exploration in deep reinforcement learning. The work is fairly innovative in its approach, where an episodic memory is used to store the agent\\u2019s observations while rewarding the agent for reaching novel observations not yet stored in memory. The novelty here is determined by a pre-trained network that computes whether the current observation is within k-step reachability of the observations stored in memory. The method is quite simple but promising and can be easily integrated with any RL algorithm.\\n\\nThey test their method on a pair of 3D environments, VizDoom and DMLab. The experiments are well executed and analysed.\", \"positives\": [\"They do a rigorous analysis of parameters, and explicitly count the pre-training interactions with the environment in their learning curves.\", \"This method does not hurt when dense environmental rewards are present.\", \"The memory buffer is smaller than the episode length, which avoids trivial solutions.\", \"The idea of having a discriminator assess distance between states is interesting.\"], \"questions_and_critics\": [\"The tasks explored in this paper are all navigation-based tasks, would this method also apply equally successfully to non-navigation domains such as manipulation?\", \"My main concern is that the pre-training of the embedding and comparator networks directly depends on how good the random exploration policy is that collects the data. In navigation domains it makes sense that the random policy could cover the space fairly well, however, this will not be the case for more complex tasks involving more complex dynamics.\", \"It was surprising to me that the choice of k does not seem to be that important. As it implicitly defines what \\u201cnovelty\\u201d means for an environment, I would have expected that its value should be calibrated better. 
Could that be a function of the navigation tasks considered?\", \"The DMLab results are not great or comparable to the state-of-the-art methods, which may hinder interpreting how good the policies really are. This was perhaps a conscious choice given they are only interested in early training results, but that seems like a confound.\", \"The architecture does not include an RNN, which makes certain things very surprising even though they shouldn't be (e.g. firing, or moving around a corner, are specifically surprising for ICM) as they cannot be learnt, but perhaps if they had an RNN in the architecture these would be easy to explain? It would be interesting to hear the authors\\u2019 thoughts on this (apart from the computational complexity argument they mention).\", \"Having the memory contain only information about the current episode with no information transfer between episodes seems a bit strange to me; I would like to hear the motivation behind this.\", \"The fact that the memory is reset between episodes, and that the buffer is small, can mean that effectively the method implements some sort of complex pseudo count over meta-states per episode?\", \"The embedding network is only trained during the pre-training phase and frozen during the RL task. This sounds a bit limiting to me: what if the agent starts exploring part of the space that was not covered during pre-training? Obviously this could lead to collapses when allowing to fine-tune it, but I feel this is rather restrictive. Again, I feel that the choice of navigation tasks did not magnify this problem, which would arise more in harder exploration tasks.\", \"I think that alluding that their method is similar to babies\\u2019 behaviour in their cradle is stretched at best and not a constructive way to motivate their work\\u2026\", \"In Figure 6 and 7, all individual curves from each seed run are shown, which is a bit distracting. 
Perhaps showing the mean and std would be a cleaner and easier-to-interpret way to report these results?\", \"Overall, it is a simple and interesting idea and seems quite easy to implement. However, everything is highly dependent on how varied the environment is, how bad the exploration policy used for pre-training is, how good the embeddings are once frozen, and how k, action repeat and memory buffer size interact. Given that the experiments are all navigation-based, it makes it hard for me to assess whether this method can work as well in other domains with harder exploration setups.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Not enough motivation why the curiosity-driven approach is of interest.\", \"review\": \"In this paper, the authors study the problem of exploration in RL when the reward process is sparse. They introduce a new curiosity-based approach which considers a state novel if it was not visited before and is far from the visited states. They show that their method performs better than two other approaches: one without curiosity-driven exploration and one using a one-step curiosity-driven approach.\\n\\nThe paper is well-written and easy to follow. The authors motivate this work by giving an example where the state observation might be novel but not important. They show that if part of the environment just changes randomly, then there is no need to explore there as much as vanilla curiosity-driven approaches suggest. The approach in this paper partially addresses this drawback of curiosity-driven approaches. The authors miss the point why the curiosity-driven exploration approaches as in this work are of interest. \\n\\nThe problem mentioned in this paper can also be solved based on efficient exploration-exploitation methods where the distribution of next states is considered rather than the samples themselves. An efficient explorative/exploitative RL agent explores part of state space more if there is uncertainty in the reward and state distribution rather than not being able to predict a particular sample. In curiosity-driven approaches, if the predictability of the next state is considered, all methods are sentenced to failure in stochastic environments. The approach in this paper partially mitigates this problem but for a very specific environment setup, but still, fails if the environment is stochastic.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The authors propose an exploration bonus that is aimed to aid in sparse reward RL problems. The bonus is given by an auxiliary network which tries to score whether a candidate observation is difficult to reach with respect to all previously observed novel observations which are stored in a memory buffer. The paper considers many experiments on complex 3D environments.\\n\\nThe paper is well written and very well illustrated. The method can be clearly understood from the 3 figures and the examples are nice. I think the method is interesting and novel and it is evaluated on a realistic and challenging problem.\\n\\nIt would be good if the authors could further elaborate on the scalability of the method in terms of compute/memory requirements and, related to that, whether the implementation is cumbersome. I didn\\u2019t fully understand how the method avoids the issue of old memories leaving the buffer. It seems that for a large enough environment, important observations will eventually be discarded, causing a poor approximation of the curiosity bonus? For the large-scale experiments I would like to know rough details of the compute time needed for the method relative to the PPO baseline and the other baseline (e.g. the number of nodes and approximately how long they run).\\n\\nAre there any potential issues with adapting the method to 2D environments like Atari? This could permit direct comparisons with several other recently proposed techniques in this area.\\n\\nThe Grid-Oracle result is very interesting and a contribution on its own if similar results for complex 3D environments are not published anywhere else. It demonstrates well that exploration bonuses can help drastically in these tasks. I think if possible it would be interesting to have an idea of how fast this method converges (number of training steps) and not just the final reward as reported in the tables. 
Indeed, as a general problem, the number of training steps for any of the methods shown seems to indicate that these techniques are too data-hungry for non-simulated environments. For some applications (e.g. aimed at sim-to-real transfer) the grid-oracle approach might be a good alternative to consider. I would be interested to know if the authors had some thoughts on this.\\n\\nOverall I lean towards accept: the method is shown to work on very complex problems in DMLab and VizDoom, while most proposed sparse-reward solutions are typically evaluated on comparatively simpler and unrealistic tasks. I would consider further increasing my score if the authors can address some of the comments.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Authors' response\", \"comment\": \"Thank you for your interest! Indeed, the policy learns generalized exploration behaviour, because it has seen approximately 11000 unique environments (differing in layouts and sets of textures) during 20M steps of training. All those environments are generated procedurally by the DMLab engine. Moreover, DMLab has a mechanism to ensure uniqueness and disjoint train/validation/test splits. However, please keep in mind that all those environments are still generated from some distribution implemented inside the DMLab engine.\"}",
"{\"comment\": \"Excellent work! I was wondering if the authors would be able to provide some intuition for a phenomenon that I find a bit puzzling in this paper. I noticed that this algorithm is tested on a set of hold-out levels that are not seen during training. Given my understanding of the approach, I am not exactly sure why this would work at all. My understanding is that the proposed algorithm encourages exploration by rewarding the visitation of states that are distant from the ones that have been seen during each episode in training. It is sensible that this would encourage policies to learn to explore in the training environments, since they are receiving rewards for seeing rare states and learning to move towards those, but it is not clear why this would work in unseen environments. Why would policies that move towards \\\"rare\\\" states in the training set, work at all on a separate testing set? I suspect that perhaps the policies are learning some sort of generalized exploration behavior due to the wide variety of environments seen during training? It would be great if the authors could shed more light on this.\", \"title\": \"Mechanism Behind Generalization of Exploration?\"}"
]
} |
|
rkgK3oC5Fm | Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods | [
"Apratim Bhattacharyya",
"Mario Fritz",
"Bernt Schiele"
] | For autonomous agents to successfully operate in the real world, the ability to anticipate future scene states is a key competence. In real-world scenarios, future states become increasingly uncertain and multi-modal, particularly on long time horizons. Dropout based Bayesian inference provides a computationally tractable, theoretically well grounded approach to learn different hypotheses/models to deal with uncertain futures and make predictions that correspond well to observations -- are well calibrated. However, it turns out that such approaches fall short to capture complex real-world scenes, even falling behind in accuracy when compared to the plain deterministic approaches. This is because the used log-likelihood estimate discourages diversity. In this work, we propose a novel Bayesian formulation for anticipating future scene states which leverages synthetic likelihoods that encourage the learning of diverse models to accurately capture the multi-modal nature of future scene states. We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting. | [
"bayesian inference",
"segmentation",
"anticipation",
"multi-modality"
] | https://openreview.net/pdf?id=rkgK3oC5Fm | https://openreview.net/forum?id=rkgK3oC5Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xKaJ256N",
"B1l2XxSgeV",
"B1gmsDaUyV",
"HJlkz38N1V",
"ryxnXzpQk4",
"rkehUL_aCm",
"BkgzBPe7Rm",
"r1lJGvgm0X",
"ryxs9IxQRQ",
"SJxRqrgQC7",
"rkxqXnMJTQ",
"ryl4znyKnm",
"BJxAQMND37"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1559048128559,
1544732707535,
1544112026539,
1543953415023,
1543914019767,
1543501395555,
1542813497698,
1542813446546,
1542813330966,
1542813078237,
1541512225954,
1541106700247,
1540993573672
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper730/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/Authors"
],
[
"ICLR.cc/2019/Conference/Paper730/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper730/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper730/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Code and Data\", \"comment\": \"The code and data are available here: https://github.com/apratimbhattacharyya18/seg_pred\"}",
"{\"metareview\": \"This paper proposes a method to encourage diversity in the Bayesian dropout method. A discriminator is used to facilitate diversity, which helps the method deal with multi-modality. Empirical results show a good improvement over existing methods. This is a good paper and should be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A good paper to improve diversity of existing Bayesian deep learning methods\"}",
"{\"title\": \"Thank you and Further Clarifications about the Top 5% Criterion.\", \"comment\": \"Thank you for your response. We are glad that our clarifications and changes have made the paper better. Regarding the Oracle Top 5% criterion: It is true that for a single test sequence, the Oracle Top 5% criterion measures if in the \\\"longer\\\" future there are some predicted trajectories that are still close to the ground truth. However, note that it is the Top 5% of a limited \\u201cbudget\\u201d of 100 predictions and this criterion is averaged over the test-set. Therefore, to obtain a low Top 5% error, our model must generate likely (due to limited budget) future trajectories for all test-set examples (because it is averaged across the test-set) that are close to the ground-truth. In other words, the model must be able to predict the likely future outcomes. A model which assigns low probability to likely futures will not be able to generate future trajectories corresponding to the ground truth in the limited budget of 100 samples per test examples for the vast majority of the test-set sequences -- leading to a high Top 5% error. Therefore, the Top 5% error can measure the uncertainty. It has also been used in prior work for a relative comparison of techniques (Lee et al. (2017) and Bhattacharyya et al. (2018b)). We will better motivate the Oracle Top 5% criterion in the final version. Furthermore, note that we also include the Conditional log-likelihood (CLL) metric in Table 3 as an additional metric to measure uncertainty.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for the rebuttal. I think the added clarifications, also the ones given to R1, improve the paper even further. It clearly remains in the \\\"top 50% of accepted papers\\\" group for me.\"}",
"{\"title\": \"Nice rebuttal\", \"comment\": \"I would first like to congratulate the authors on their rebuttal and the clarifications. I do believe that the clarifications and changes made the paper better. My only problem is with using the top 5% predictions to capture the uncertainty. I understand the point of using the top 5% predictions, but I am still not entirely convinced that this measures the uncertainty, in that uncertainty is the capacity of the model to predict the various future outcomes. Evaluating the top 5%, however, measures if in the \\\"longer\\\" future there are some predicted trajectories that are still close to the ground truth, which is clearly a very different concept. That said, I do not have a better proposal, so, for now, I rest my case.\"}",
"{\"title\": \"Overview of Updates in the Revision.\", \"comment\": [\"In Section 3.4 of the main paper, we clarify that the ResNet-based architecture in Figure 2 is the architecture used for street scene prediction.\", \"In Appendix E, Tables 11 and 12, we clearly illustrate the reduction in variational parameters enabled by our novel Weight Dropout scheme.\", \"Figure 3 in the main paper has been updated to include both the mean accuracy and the standard deviation across 10 splits of 1000 MNIST test set examples -- confirming the advantage of our proposed synthetic likelihoods.\", \"Table 1 (and corresponding text) in the main paper has been updated to include results at additional time-steps. These results, and the results in Table 3, confirm that our model outperforms state-of-the-art approaches.\", \"In Appendix C, Table 6, we additionally compare our approach to that of Luc et al. (2017) using the same Dilation 10 method to generate training sequences.\", \"In Appendix C, Table 5, additional training details have been added.\", \"Finally, all typos that were pointed out have been corrected.\"]}",
"{\"title\": \"Typos Corrected.\", \"comment\": \"We would like to thank AnonReviewer3 for recognizing that our model leads to better results and can capture the multi-modal nature of future scenes. We have addressed the typos that were pointed out. We would be happy to answer any remaining questions.\"}",
"{\"title\": \"Clarifications Provided.\", \"comment\": [\"We would like to thank AnonReviewer2 for recognizing that our method is the first work w.r.t. future prediction with a principled treatment of uncertainty. We are pleased that the results in Section 4 (and the Appendix) are convincing.\", \"We now address the concerns and provide clarifications in detail,\", \"A detailed description of the training process should be provided for better reproducibility.\", \"We completely agree and we have added additional details in Appendix C, Table 5. We will make the code available at the time of publication.\", \"it is not completely clear to me why the future vehicle odometry is provided as an input, in addition to past odometry and past segmentation confidences. I assume this would not be present in a real-world scenario?\", \"Conditioning on the future odometry is important as the sequences are in the frame of reference of the vehicle. The future path of the vehicle thus has an impact on the future observed sequence. In real-world scenarios, the future odometry can be seen as a candidate planned path of the vehicle. Thus, this would help the vehicle obtain more informative predictions conditioned on candidate paths and decide on the optimal path.\", \"The differences in Figure 4.\", \"Figure 4 depicts the long-term uncertainty calibration of our method and baselines -- how well the predicted probability of a class 0.6sec into the future corresponds to the observed frequency. Uncertainty calibration of long-term predictions is a very important but difficult-to-achieve property (as we cannot directly optimize for it). Bayesian inference provides the only reliable approach to obtain calibrated uncertainties. In Figure 4, we see that the standard dropout Bayesian inference approach of Kendall & Gal (2017) already leads to improved results over the non-Bayesian ResG-Mean and CVAE approaches. 
The improvements are in line with those observed in Kendall & Gal (2017). Our Bayes-WD and Bayes-WD-SL further improve upon the approach of Kendall & Gal (2017). Furthermore, note that the differences are significant in the important regions: [0.0,0.2] and [0.6,0.95]. Other methods underestimate the probability of occurrence of classes in the region [0.0,0.2] (they fail to predict certain classes which potentially occur in special cases) and overestimate in the region [0.6,0.95] (they are overly confident of likely classes).\", \"We have corrected the typos that were pointed out. Finally, we thank AnonReviewer2 for voicing her/his concerns and helping us improve our work. We would be happy to answer any remaining questions.\"]}",
"{\"title\": \"Clarifications provided. [2/2]\", \"comment\": [\"In Table 2 it is not clear what is compared against what.\", \"Here, we compare our proposed model against four important baselines - the two state-of-the-art approaches of Luc et al. (2017) and Seyed et al. (2018), the standard dropout approach of Kendall & Gal (2017) (Bayes-S, using the same model architecture as our Bayes-WD-SL - Figure 2) and a baseline that uses Weight Dropout (Bayes-WD) but not synthetic likelihoods. We have updated Table 2 with the additional time-steps as requested.\", \"The Bayes-WD-SL performs exactly on par with the Bayes-Standard.\", \"We have added the additional results for +0.54sec in Table 2. We see that the mean prediction from the Bayes-WD-SL model has an advantage of 2.9 mIoU at 0.06sec and 0.2 mIoU at 0.54sec. The performance advantage of Bayes-WD-SL over Bayes-S (Standard) in this case shows that the ability to better model uncertainty does not come at the cost of lower mean performance. At larger time-steps, as the future becomes increasingly uncertain, mean predictions (mean of all likely futures) drift further from the ground-truth. Therefore, we evaluate the models on their (more important) ability to capture the uncertainty of the future in Table 2 (last two rows) and Table 3 (full). We see clearly in Table 3 that Bayes-WD-SL shows a large performance advantage over the Bayes-S (Standard) model in capturing uncertainty with respect to both the Top 5% of predictions and the CLL metric (same criteria as in Lee et al. (2017) and Bhattacharyya et al. (2018b)). This shows that the Bayes-WD-SL model can much better capture the range of likely futures. Similarly, the Bayes-WD also performs better compared to the Bayes-Standard model. This shows the advantage of both our novel components - Weight Dropout and (more significantly) Synthetic Likelihoods - in the ability to capture uncertainty. 
We have updated the text in Section 4.2 to highlight these points.\", \"Fair comparison to Luc et al. (2017)\", \"In Table 1, we use a PSPNet to generate training segmentations for our Bayes-WD-SL model to ensure a fair comparison with the state-of-the-art approach of Seyed et al. Note that our Bayes-WD-SL model already obtains higher gains in comparison to Luc et al. with respect to the Last Input Baseline, e.g. at +0.54sec, 47.8 - 36.9 = 10.9 mIoU translating to a 29.5% gain over the Last Input Baseline of Luc et al. versus 51.2 - 38.3 = 12.9 mIoU translating to a 33.6% gain over the Last Input Baseline of our Bayes-WD-SL model in Table 1. But as suggested, for fairness, we additionally include results in Table 6, Appendix C using the same Dilation 10 approach to generate training segmentations. We observe that our Bayes-WD-SL model beats the model of Luc et al. (2017) in both short-term (+0.18 sec) and long-term predictions (+0.54 sec). Furthermore, we see that the mean of the Top 5% of the predictions of Bayes-WD-SL leads to much improved results over mean predictions. This again confirms the ability of our Bayes-WD-SL model to capture uncertainty and deal with multi-modal futures.\", \"Unclarities.\", \"We have defined Z_K in eq. (4) in the text.\", \"In eq (6) is the z x \\u03c3 a matrix or a scalar operation?\", \"It is a matrix operation as z is a vector. We have clarified this in the text.\", \"Convolutions are not used in the first experiment?\", \"We do not use convolutions in the first experiment as we want to highlight the advantage of synthetic likelihoods in modelling uncertainty. The results of the experiment show that synthetic likelihoods are applicable across model types and can successfully model uncertainty. Note that we use weight dropout (our first novelty) in the main experiments on street scenes.\", \"Finally, we thank AnonReviewer1 for voicing her/his concerns and helping us improve our work. We would be happy to answer any remaining questions.\"]}",
"{\"title\": \"Clarifications provided. [1/2]\", \"comment\": [\"We would like to thank AnonReviewer1 for finding our work interesting, especially the novel ideas of Weight Dropout and Synthetic Likelihoods in a Bayesian context.\", \"We now address the concerns and provide clarifications in detail.\", \"The proposed model is an auto-encoder GAN.\", \"We would like to clarify that our model is not an auto-encoder GAN. Although the objective function (11) does bear resemblance to the objective typically minimized by auto-encoder GANs, we do not use auto-encoders -- our models do not have explicit latent spaces. More importantly, unlike auto-encoder GANs which learn one single model for prediction, we learn the distribution of likely models in a Bayesian framework.\", \"It is not clear where the architecture of section 3.4 is used.\", \"We would like to clarify that this is the main architecture for all the street scene prediction experiments -- used by the ResG-Mean, Bayes-S, Bayes-WD and Bayes-WD-SL models. The architecture of section 3.4 is one and the same as the ResNet architecture. We have clarified this in the text.\", \"Why considering the mean of the best 5% predictions helps with evaluating the predicted uncertainty?\", \"As the future becomes increasingly uncertain, we would like to capture all the likely future outcomes rather than just the mean, as mean predictions drift further and further from the groundtruth into the future. As mentioned in the main paper, considering the mean of the (oracle) best 5% of predictions helps us evaluate whether the learned model distribution contains likely models corresponding to the groundtruth. As this metric (also used by e.g. Lee et al. (2017) and Bhattacharyya et al. 
(2018b)) is also averaged across the test set, a higher value shows that the model distribution is able to better anticipate all the varied futures that occur in the test set.\", \"The precise novelty of the first modification is not clearly explained.\", \"The advantage of our proposed Weight Dropout scheme does depend on the spatial resolution. The gain at a certain layer is significant if the spatial resolution is greater than the product of the number of filters in the current layer and the previous layer. We gain ~28x compared to standard (patch) dropout in the first and last groups of layers of our model (Figure 2) and ~3x in the second and fourth groups of layers. While we do not gain in the third group of layers -- where the spatial resolution is the lowest -- overall we gain ~3x compared to Standard Dropout. Thus, it is true that the gains come mostly when the spatial resolution is high (typically more than 32x64). But as the total number of parameters is also the highest when the spatial resolution is high, overall we make significant gains in both the convolutional architectures used for street scene and precipitation forecasting -- we have a 68% lower total number of variational parameters compared to the Standard Dropout scheme of Gal & Ghahramani (2016a). We have added Table 11, Appendix E, where we discuss the gains at each layer in detail for the architecture on street scene prediction. Finally, in Table 12 we provide an overview of the variational parameters of both architectures for street scene and precipitation forecasting.\", \"MNIST experiment reports results only from the best sampled model.\", \"Note that we use a classifier to decide if the prediction comes from the correct mode. This is because it is difficult to decide with simple measures like L1/2 distance whether the prediction is even a coherent digit. We use the best sampled model to maintain the crispness of the predicted images. Taking e.g. 
the mean of the top 10 of 100 models would lead to the generation of blurry digits potentially unrecognizable by a classifier. Furthermore, this same criterion is applied across all models; therefore, we do not give any unfair advantage to our model. In addition, we have updated Figure 3 to report the mean and standard deviation of the Top 10% metric across 10 splits of 1000 examples from the MNIST test set. The results confirm the advantage of our Bayes-SL model over the Bayes-S and CVAE models.\"]}",
"{\"title\": \"Some interesting ideas, clarifications needed\", \"review\": [\"The work proposes a Bayesian neural network model that is a hybrid between autoencoders and GANs, although it is not presented like that. Specifically, the paper starts from a Bayesian Neural Network model, as presented in Gal and Ghahramani, 2016, and makes two modifications.\", \"First, it proposes to define one Bernoulli variational distribution per weight kernel, instead of per patch (in the original work there was one Bernoulli distribution per patch kernel). As the paper claims, this reduces the complexity to be exponential in the number of weights, instead of the number of patches, which leads to a much smaller number of possible models. Also, because of this modification the same variational distributions are shared between locations, being closer to the convolutional nature of the model.\", \"The second modification is the introduction of synthetic likelihoods. Specifically, in the original network the variational distributions are designed such that the KL-divergence between the true posterior p(\\u03c9|X, y) and the approximate posterior q(\\u03c9) is minimized. This leads to the optimizer encouraging the final model to be close to the mean, thus resulting in less diversity. By re-formulating the KL-divergence, the final objective can be written such that it depends on the likelihood ratio between generated/\\\"fake\\\" samples and \\\"true\\\" data samples. This ratio can then be approximated by a GAN-like discriminator. As the optimizer is now forced to care about the ratio instead of individual samples, the model is more diverse.\", \"Both modifications present some interesting ideas. Specifically, the number of variational parameters is reduced, thus the final models could be much more scalable. 
Also, using synthetic likelihoods in a Bayesian context is novel, to the best of my knowledge, and does seem to be somewhat empirically justified.\", \"The negative points of the paper are the following.\", \"The precise novelty of the first modification is not clearly explained. Indeed, the number of possible models with the proposed approach is reduced. However, to what degree is it reduced? With some rough calculations, for an input image of resolution 224x224, with a kernel size of 3x3 and stride 1, there should be about 90x90 patches. That is roughly a complexity of O(N^2) ~ 8K (N is the number of patches). Considering the proposed variational distributions with 512 output channels, this amounts to 3x3x512 ~ 4.5K. So, is the advantage mostly when the spatial resolution of the image is very high? What about intermediate layers, where the resolution is typically smaller?\", \"Although seemingly ok, the experimental validation has some unclarities.\", \"First, it is not clear whether it is fair in the MNIST experiment to report results only from the best sampled model, especially considering that the difference from the CVAE baseline is only 0.5%. The standard deviation should also be reported.\", \"In Table 2 it is not clear what is compared against what. There are three different variants of the proposed model. The WD-SL performs exactly on par with the Bayes-Standard (although for some reason the boldface font is used only for the proposed method). The improvement appears to come from the synthetic likelihoods. Then, there is another \\\"fine-tuned\\\" variant for which only a single time step is reported, namely +0.54 sec. Why not report numbers for all three future time steps? 
Then, the fine-tuned version (WD-SL-ft) is clearly better than the best baselines of Luc et al.; however, the segmentation networks are also quite different (about 7% difference in mIoU), so it is not clear if the improvement really comes from the synthetic likelihoods or from the better segmentation network. In short, the only configuration that appears to be convincing as-is is the one for 0.06 sec. I would ask the authors to fill in the blank X spots and repeat fair experiments with the baseline.\", \"Generally, although the paper is reasonably well written, there are several unclarities.\", \"Z_K in eq. (4) is not defined, although I guess it's the matrix of the z^{i, j}_{k, k'}\", \"In eq (6) is the z x \\u03c3 a matrix or a scalar operation? Is z a matrix or a scalar?\", \"The whole section 3.4 is confusing and it feels as if it is there to fill up space. There is a rather intricate architecture, but it is not clear where it is used. In the first experiment a simple fully connected network is used. In the second experiment a ResNet is used. So, where is the section 3.4 model used?\", \"In the first experiment a fully connected network is used, although the first novelty is about convolutions. I suppose the convolutions are not used here? If not, is that a fair experiment to outline the contributions of the method?\", \"It is not clear why considering the mean of the best 5% predictions helps with evaluating the predicted uncertainty. I understand that this follows from the citation, but still an explanation is needed.\", \"All in all, there are some interesting ideas; however, clarifications are required before considering acceptance.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting improvement to dropout based Bayesian inference\", \"review\": \"The submission considers a disadvantage of a standard dropout-based Bayesian inference approach, namely the pessimization of model uncertainty by means of maximizing the average likelihood for every data sample. The formulation by Gal & Ghahramani is improved upon two-fold: via simplified modeling of the approximating variational distribution (on kernel/bias instead of on patch level), and by using a discriminator (i.e. classifier) for providing a \\\"synthetic\\\" likelihood estimate. The latter relaxes the assumptions such that not every data sample needs to be explained equally well by the models.\\nResults are demonstrated on a variety of tasks, most prominently street scene forecasting, but also digit completion and precipitation forecasting. The proposed method improves upon the state of the art, while more strongly capturing multi-modality than previous methods.\\n\\nTo the best of my knowledge, this is the first work on future prediction with a principled treatment of uncertainty. I find the contributions significant and well described, and the intuition behind them is conveyed convincingly. The experiments in Section 4 (and appendix) yield convincing results on a range of problems.\\nClarity of the submission is overall good; Sections 3.1-3.3 treat the contributions in sufficient detail. Descriptions of both generator and discriminator for street scenes (Section 3.4) are sufficiently clear, although I would like to see a more detailed description of the training process (how many iterations for each, learning rate, etc.?) for better reproducibility.\\nIn Section 3.4, it is not completely clear to me why the future vehicle odometry is provided as an input, in addition to past odometry and past segmentation confidences. I assume this would not be present in a real-world scenario? 
I also have to admit that I fail to understand Figure 4; at least I cannot see any truly significant differences, unless I heavily zoom in on screen.\", \"small_notes\": [\"Is the 'y' on the right side of Equation (5) a typo? (should this be 'x'?)\", \"The second to last sentence at the bottom of page 6 (\\\"Always the comparison...\\\") suffers from weird grammar\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"a nice applied work\", \"review\": \"The paper presents an application of Bayesian neural networks to predicting\\nfuture street scenes. The inference is done by using a variational approximation \\nto the posterior. Moreover, the authors propose using a synthetic (approximate)\\nlikelihood, and the optimization step in the variational approximation is based on a regularization.\\nThe authors claim that these modifications yield better results in practice \\n(more stable, capturing the multi-modal nature). Numerical parts in the paper support\\nthe authors' claims: their method outperforms some other state-of-the-art methods.\\n\\nThe presentation is not too hard to follow.\\nI think this is a nice applied piece, although I have never worked on this applied side.\", \"minor_comment\": \"In the second sentence, in Section 3.1, page 3, \\n$f: x \\\\mapsto y$ NOT $f: x \\\\rightarrow y$. \\nWe use the \\\"\\\\rightarrow\\\" for spaces X, Y, not for variables.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
S1eK3i09YQ | Gradient Descent Provably Optimizes Over-parameterized Neural Networks | [
"Simon S. Du",
"Xiyu Zhai",
"Barnabas Poczos",
"Aarti Singh"
] | One of the mysteries in the success of neural networks is that randomly initialized first-order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU-activated neural networks. For an $m$-hidden-node shallow neural network with ReLU activation and $n$ training data, we show that as long as $m$ is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function.
Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first order methods. | [
"theory",
"non-convex optimization",
"overparameterization",
"gradient descent"
] | https://openreview.net/pdf?id=S1eK3i09YQ | https://openreview.net/forum?id=S1eK3i09YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HklkmjKDfB",
"rJgd1KdCJV",
"HygVLQBmkV",
"HJeK5b6f14",
"ryeydjfWJE",
"S1es44k-JN",
"SkxPEuIhCQ",
"BkgphZviC7",
"B1lTu7zsC7",
"BylyV4KqRX",
"H1llXbvcC7",
"Skeo1ukt07",
"rJgDnCeICm",
"rygEFV6l0m",
"HylJ7E6xAm",
"B1l4CQpeAm",
"HJg5_7pe0m",
"HJlyYM6gCQ",
"rJeQQfalRm",
"BkxSoWagAm",
"r1gzcd-5aX",
"BJeygSVrTm",
"HygOMNNqhQ",
"H1g-pI4Dhm",
"S1eU0bxSn7",
"SklA8vtVh7",
"HklcRVD4hm",
"S1xpytvfnX",
"Hkgk1YwgnX",
"HklKuGwacm",
"ryxgnoz357",
"SkgYjPMn97",
"SklNmL-nq7",
"BkeKdmWn9Q",
"B1gUqYQK5m",
"H1gUINXtqX",
"SkxQT2fFcQ",
"HygpnikO9X",
"r1el9Jqv5X",
"SyemVAXv9Q",
"SJeNOPOU9Q",
"HJei63YVq7"
],
"note_type": [
"comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"comment",
"comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1564085014570,
1544616160329,
1543881547844,
1543848337021,
1543740263277,
1543726130729,
1543428142949,
1543365044559,
1543345012745,
1543308326821,
1543299352116,
1543202786697,
1543012015272,
1542669435774,
1542669334789,
1542669260023,
1542669170364,
1542668918860,
1542668827010,
1542668701144,
1542228105750,
1541911783429,
1541190671576,
1540994745068,
1540846029945,
1540818774309,
1540809937826,
1540679909249,
1540548823051,
1539302000924,
1539218344511,
1539217313221,
1539212827846,
1539212145129,
1539025293891,
1539023949660,
1539022010797,
1538943924890,
1538920327819,
1538895403077,
1538848620179,
1538723011182
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/AnonReviewer4"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper729/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper729/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper729/AnonReviewer2"
],
[
"(anonymous)"
],
[
"~Olivier_Grisel1"
],
[
"~Olivier_Grisel1"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"(anonymous)"
],
[
"~Danica_J._Sutherland1"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"(anonymous)"
],
[
"~Danica_J._Sutherland1"
],
[
"~Danica_J._Sutherland1"
],
[
"(anonymous)"
],
[
"~Hongyi_Zhang1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper729/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"We thank the authors for the interesting work. We found some parts of the proof of Lemma 3.1 (page 15) to be confusing. Any clarification is greatly appreciated!\\n\\n(a) Could anybody please elaborate on how to derive the 2nd inequality from the 1st inequality in the proof of Lemma 3.1? It says \\\"Setting \\u03b4\\u2032 = n^2 \\u03b4 and applying union bound over (i,j) pairs\\\", but how does the upper bound\\n 2 \\\\sqrt{log(1 / \\u03b4')} / \\\\sqrt{m}\\nrelax to\\n 4 \\\\sqrt{log(n / \\u03b4)} / \\\\sqrt{m}\\nby setting \\\"\\u03b4' = n^2\\\"? \\n\\n(b) Besides, we also find the derivation of the 1st inequality via Hoeffding's inequality confusing. The paper writes: with probability 1 - \\u03b4', we have\\n |H_{ij}(0) - H_{ij}^\\u221e| \\u2264 2 \\\\sqrt{log (1 / \\u03b4')} / \\\\sqrt{m}.\\n\\nBut using Hoeffding's inequality (as formulated in Corollary 7 of lecture note [1]), we derive the upper bound to be\\n \\\\sqrt{log(2 / \\u03b4')} / \\\\sqrt{2m}\\ninstead of\\n 2 \\\\sqrt{log (1 / \\u03b4')} / \\\\sqrt{m}.\\n\\nAre we wrong anywhere in understanding the proof? Thanks a lot in advance.\\n\\n[1] http://www.stat.cmu.edu/~larry/=stat705/Lecture2.pdf\", \"title\": \"Proof of Lemma 3.1\"}",
"{\"metareview\": \"This paper proves that gradient descent with random initialization converges to global minima for a squared loss over a two-layer ReLU network and arbitrarily labeled data. The paper has several weaknesses, such as: 1) assuming the top layer is fixed, 2) a large number of hidden units 'm', 3) the analysis is for the squared loss. Despite these weaknesses, the paper makes a novel contribution to a relatively challenging problem, and is able to show convergence results without strong assumptions on the input data or the model. Reviewers find the results mostly interesting and have some concerns about the \\\\lambda_0 requirement. I believe the authors have sufficiently addressed this issue in their response and I suggest acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"ICLR 2019 decision\"}",
"{\"comment\": \"Though w is assumed fixed and only the randomness of w(0) is used, the random event still depends on w. I believe Lemma 3.2 actually proved that Prob[H(w) eigenvalues are lower bounded] > 1 - delta for any fixed w. But what is used in the latter proof seems to be Prob[for any fixed w, H(w) eigenvalues are lower bounded] > 1 - delta.\\n\\nConsider the following simple example. If Z is N(0,1), then E[(Z-1)^2] = 2 and E[(Z+1)^2] = 2. By Markov's inequality, Prob[(Z-1)^2 > 2/delta] < delta and Prob[(Z+1)^2 > 2/delta] < delta, but '(Z-1)^2 > 2/delta' and '(Z+1)^2 > 2/delta' are certainly different random events.\", \"title\": \"Random event independent of t but still dependent on the choice of w\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks for increasing your score! We will fix the typo in our final version.\"}",
"{\"title\": \"I believe the revision addressed my concerns\", \"comment\": \"The revised lemma is much clearer than the initial version.\\n\\nThe proof of Lemma 3.2 only uses the randomness of the w_i(0)'s, and the result holds for any weight vectors satisfying the distance assumption in the Lemma, including the setting where the weight vectors are random and dependent on the w_i(0)'s.\", \"typo\": \"In the first line of Lemma 3.2, 'w_1, ..., w_m' should be 'w_1(0), ..., w_m(0)'.\\n\\nI have adjusted my score accordingly.\"}",
"{\"comment\": \"This paper seems to be one of the most popular papers at ICLR, though. It got a lot of attention on social media as well as in academia. The impact of the paper is definitely huge, as impact is closely correlated with popularity.\", \"title\": \"One of the best papers in ICLR\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thanks for your encouraging comments!\"}",
"{\"comment\": \"This paper seems to be an interesting and important paper for neural network theory. It gets rid of the distributional input assumption common in previous works. It also gives a linear convergence rate, which could not be obtained solely by landscape analysis.\\n\\nIn the analysis of this paper, the H^infty matrix appears naturally and seems to reveal a connection between neural networks and kernels. Moreover, I would like to mention that the ideas presented in the current submission have recently been generalized to deal with multi-layer neural networks, which clearly illustrates the potential of its proof structure and techniques.\", \"title\": \"Interesting paper and recent follow-ups\"}",
"{\"title\": \"Response to Additional Review\", \"comment\": \"Thanks for your thorough reading! We will add more discussions on non-differentiability!\\n\\n1. Proof idea and experiments:\\nWe think viewing our proof from a \\\"noisy\\\" linear regression perspective is an interesting observation. Indeed, analyzing a hard non-linear problem from a \\\"linear\\\" perspective is a common practice in mathematics.\\n\\nIn our proof, R' < R is a sufficient condition to show most patterns do not change, which we have verified in Figure 1 (b). It is possible that through other types of analysis, one can show most patterns do not change. For Figure 1 (c), we just want to verify that as $m$ becomes larger, the maximum distance becomes smaller.\\n\\n2. Network size.\\nWe have discussed this point many times in the response. \\nOur current bound requires m = \\\\Omega(n^6). In this paper, to present the cleanest proof, we only use the simplest concentration inequalities (Hoeffding and Markov). We do not think this bound is tight, and we believe that, using more advanced techniques from probability theory, this bound can be tightened. \\n\\n3. Dependency on lambda_0.\\nFirst of all, your example is not valid in our setting. If x=0, y=1, it is not possible that ReLU(w*x) can achieve zero training error. \\nFurthermore, it is easy to prove linear convergence for your example because we can just study the Gram matrix defined over the other data points, which has a positive lambda_0. \\nWe will add a remark about this in our final version. Thanks for pointing this out.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Thanks for your clarifications; we are happy to address your concerns.\\n\\n1. On H^{\\\\infty} and \\\\lambda_0. \\nWe have discussed this at length in our Response to Common Questions and Summary of Revisions. In short, because Equation (7) is an equality, at least in the large $m$ regime, $H^{\\\\infty}$ determines the whole optimization dynamics, and as a consequence, $\\\\lambda_0$ is the correct complexity measure. See more discussions in Remark 3.1. \\n\\nWe are not hiding the difficulty, because we have identified the correct complexity measure. We believe it is indeed an interesting problem how the spectrum of $H^{\\\\infty}$ is related to other assumptions on the training data. We will list this problem in the Discussion section in our final version.\\n\\nAs a side note, before this paper, even if we allow $m$ to be exponential in $n$, there was no analysis showing that randomly initialized gradient descent can achieve zero training loss.\\n\\n2. On discrete time analysis.\\nOur discrete time analysis closely follows the continuous time analysis. Note we analyze $u(k+1) - u(k)$, which is analogous to $du/dt$. Furthermore, in the equation in the middle of page 9, in the third equality, we decompose the loss at the (k+1)-th iteration into several terms. Note the second term just corresponds to $d(\\\\|y-u(t)\\\\|_2^2)/dt$ in the proof of Lemma 3.3, and the other terms are perturbation terms due to discretization. We will make the connection between the continuous time analysis and the discrete time analysis clearer in our final version. Thanks for pointing this out!\"}",
"{\"comment\": \"Thanks so much for your response. I would like to clarify my concerns.\\n\\n1. I mean that the number of hidden nodes m will depend on \\\\lambda_0.\\n\\n2. The current paper fails to give an explicit relationship between \\\\lambda_0 and n, so the requirement on m may be meaningless. What if this dependence is exponential? The authors should at least prove that the dependence of \\\\lambda_0 on n is polynomial under some more natural assumptions on the data distribution. If this cannot be proved, does it imply that this eigenvalue lower bound assumption hides the major difficulty of this problem?\\n\\n3. In the current paper, the authors provide (i) a continuous time convergence result, (ii) a discrete time convergence result, and (iii) a discussion of how the proof method for the continuous case can be generalized to deep networks. However, the connection between the continuous time analysis and the discrete time analysis is unclear in the current paper. It seems that the current discrete time analysis is not really a discretization of the continuous time proof, and the proof method looks independent of the continuous time analysis. As a result, it is unclear if the current discrete time analysis can provide enough insight on the training of deep networks, especially since the non-smoothness of the ReLU activation function is one of the major difficulties.\", \"title\": \"Further discussion\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your comments. However, we disagree with them.\\n\\nFirst, this paper is only about the training error, so we are confused about why you talked about sample complexity.\\n\\nSecond, you wrote, \\u201c\\\\lambda_0 can be extremely small in the case of deep networks\\u201d. However, you did not give any concrete evidence for this claim.\\n\\nThird, we are confused about why using continuous time analysis to gain intuition is a wrong approach. Many previous papers used this approach to analyze convex optimization problems and deep learning optimization problems [1,2,3,4].\\n\\nFourth, you wrote \\u201cthe discrete analysis based on some loss concentration bounds, which may lead to meaningless results for deep networks.\\u201d Again, you did not give any concrete evidence on why our analysis will be meaningless for deep networks, and we are confused about which \\u201closs concentration bounds\\u201d you are referring to. \\n\\n[1] Ashia C. Wilson, Benjamin Recht, and Michael I. Jordan. A Lyapunov analysis of momentum methods in optimization. arXiv preprint arXiv:1611.02635, 2016.\\n[2] Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, and Ali Jadbabaie. Direct Runge-Kutta discretization achieves acceleration. arXiv preprint arXiv:1805.00521, 2018.\\n[3] Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509, 2018.\\n[4] Simon S. Du, Wei Hu, and Jason D. Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. arXiv preprint arXiv:1806.00900, 2018.\"}",
"{\"comment\": \"Although this paper provides a theoretical guarantee for one-hidden-layer ReLU-based neural networks, the proposed analysis seems very limited, and I\\u2019m wondering whether this analysis can give us some insights for analyzing deep networks to get meaningful results.\\n\\nIn detail, the lower bound assumption on H will introduce a quantity \\\\lambda_0 into the dependence of m. This quantity can be extremely small in the case of deep networks, which gives a meaningless requirement on the number of hidden nodes. Most of the current paper discusses the continuous time analysis. However, this kind of analysis can sidestep the smoothness requirement on the loss function, which is one of the biggest challenges for analyzing ReLU-based networks. In addition, the discrete time analysis is based on some loss concentration bounds, which may lead to meaningless results for deep networks. \\n\\nI think the proposed analysis of the current paper looks very limited.\", \"title\": \"The impact of the current paper looks very limited\"}",
"{\"title\": \"Thanks for your experiments\", \"comment\": \"Thanks, Olivier.\\n\\nWe have acknowledged your experiments in our response.\"}",
"{\"title\": \"Thanks for your suggestion!\", \"comment\": \"We thank you for your suggestion. We have changed \\\"non-degenerate data\\\" to \\\"no parallel inputs\\\".\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank you for your careful review.\", \"we_have_modified_our_draft_according_to_your_suggestions\": \"\\u2022 We changed the statement of Lemma 3.2, and now it is independent of t.\\n\\u2022 We have added more discussions on how to generalize our technique to analyze multiple layers. In the conclusion section, we have described a concrete plan for analysis. \\n\\u2022 For all theorems and lemmas, we have added the failure probability and how the amount of over-parameterization depends on this failure probability. \\n\\u2022 We have fixed the typos.\\n\\u2022 We have modified the statements of Theorems 3.1 and 4.1 and the proof of Lemma 3.2 according to your suggestions. \\n\\nRegarding your question on how our insights could help practitioners in the future: since we have characterized the convergence rate of gradient descent from the Gram matrix perspective, we believe our insights can inspire practitioners to design faster optimization algorithms from this perspective. \\n\\nWe kindly ask you to read our revised paper and our response to common questions and re-evaluate your comments. \\n\\nWe thank the reviewer again and welcome all further comments.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for your long review. Unfortunately, we disagree with most of your comments. We would like to begin by pointing out two wrong statements in your review.\\n\\nFirst, the \\u201cresult\\u201d you claim is wrong. If the first layer is fixed and m = \\\\Omega(n \\\\log n), and only a=(a_1,\\u2026, a_m) is being optimized, this is a linear regression problem with respect to a=(a_1,\\u2026, a_m). Since m > n, this problem has more features than the number of samples, and the covariance matrix (Hessian) is degenerate. There is no way this problem is a strongly convex one.\\n\\nSecond, you claimed there exists a linearly separable dataset whose corresponding H^{\\\\infty} is degenerate. However, we are considering a regression problem, whereas linear separability is only a favorable condition for classification problems. We don\\u2019t understand what linearly separable means for regression.\", \"now_regarding_your_main_complaint_that_the_problem_is_not_difficult_enough\": \"1. This is not true at all. Reviewer #1 and Reviewer #2 both explicitly agreed this is a challenging/difficult problem, and we have devoted a whole paragraph (second paragraph on page 2) and many sentences in Section 2 to describing the difficulty. \\n\\n\\n2. You complained that we are not analyzing the landscape of this non-differentiable function and that we are using the \\u201cpractically used update rule instead of subgradient.\\u201d We don\\u2019t understand the point here. Our primary goal is to understand why the practically used rule (gradient descent) can achieve zero training loss. We have stated our goal at the beginning of the abstract and the introduction. For the non-differentiability issue, in the revised version we have cited papers and added discussions in the fourth paragraph of Section 2 on recent progress in dealing with non-differentiability. \\n\\n\\n3. 
You claimed fixing one layer and optimizing the other is a trivial problem. We agree that if one fixes the first layer and optimizes the output layer, the problem is trivial because it is convex. However, if one fixes the output layer and optimizes the first layer, the problem is significantly harder. You claimed that in this case \\n\\n\\u201cthe loss function is a weakly global function. This means that the loss function is similar to a convex function except those plateaus and this further indicates that if the initial point is chosen in a strictly convex basin, the gradient descent is able to converge to a global min. \\u201d\\n\\nWe kindly ask for a reference and an explanation of why this implies the global convergence of the gradient descent analyzed in our paper. To our knowledge, none of the previous results implies the global convergence of gradient descent in the setting we are analyzing. We have discussed this point in Section 2. Furthermore, we have never heard of the notion of a \\u201cweakly global function\\u201d. \\n\\n4. You believed that the assumption that the inputs are generated from the unit sphere is strong. In our original version, we said this assumption is made only for simplicity. In our revised version, we added more details on this assumption. Please check footnote 7. \\n\\n5. For your other concerns, we kindly ask you to read our response to common questions. \\n\\n\\nWe thank the reviewer again. We welcome all further comments!\"}",
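The degeneracy claim in the rebuttal above is easy to check numerically: with m > n fixed random ReLU features, optimizing only the output weights a is linear regression whose Hessian has rank at most n < m, so it cannot be strongly convex. A minimal sketch (the dimensions and seed are illustrative choices, not taken from the paper or review):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 5, 100  # m > n: more hidden units (features) than samples

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
W = rng.standard_normal((m, d))                # fixed first-layer weights

# Random ReLU features: optimizing only the output layer a is linear
# regression on Phi, so the Hessian is Phi^T Phi (an m x m matrix).
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(m)
hessian = Phi.T @ Phi

rank = np.linalg.matrix_rank(hessian)
print(rank)  # at most n = 20 << m, so the Hessian is degenerate
```

Since rank(Phi^T Phi) = rank(Phi) <= min(n, m) = n, the m x m Hessian has at least m - n zero eigenvalues, which is the authors' point.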
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank you for your careful and encouraging review. We believe our revised version has addressed most of your concerns.\\n\\n1. We have added discussion on the problem of fixing the first layer and only training the output layer in footnote 3. We believe the learned function is different from the function learned by fixing the output layer and only training the first layer. We would also like to point out that many previous papers considered the same setting but did not rigorously prove the global convergence of gradient descent. \\n\\n2. We have added a new theorem (Theorem 3.3) which shows that applying gradient flow to optimize all variables still enjoys a linear convergence rate. To prove Theorem 3.3, we use the same arguments as for Theorem 3.1 with slightly more calculations. Therefore, we have shown that analyzing the case where both layers are trained is no harder than analyzing the case where only the first layer is trained.\\n\\n3. We have added a new theorem (Theorem 3.1) which shows that as long as no two inputs are parallel, H^{\\\\infty} is non-degenerate. \\n\\nWe thank the reviewer again. We welcome all further comments!\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank you for your encouraging review.\", \"we_have_modified_our_paper_according_to_your_suggestions\": \"\\u2022\\tWe fixed Lemma 3.2.\\n\\u2022\\tWe added a new theorem (Theorem 3.1) showing the non-degeneracy of the H^{\\\\infty} matrix.\\n\\u2022\\tWe also added some experiments to corroborate our theoretical findings. Indeed, most of the patterns of the ReLUs do not change. Furthermore, over-parameterization leads to a faster convergence rate.\\n\\nWe thank the reviewer again. We welcome all further comments!\"}",
"{\"title\": \"Response to Common Questions and Summary of Revisions\", \"comment\": \"Dear reviewers,\\n\\nWe thank you for all your comments. In particular, all reviewers agree that our proof is simple. Here we address some common questions from the reviews and other comments.\\n\\n1. The H^{\\\\infty} matrix. \\nMany comments asked when the least eigenvalue of H^{\\\\infty} is strictly positive and what the intuition behind the H^{\\\\infty} matrix is. We thank Dougal J Sutherland and Olivier Grisel for providing numerical evidence showing that on real datasets, this quantity is indeed strictly positive. \\n\\na. Theoretically, in our revised version, we give a theorem (cf. Theorem 3.1) which shows that if no two inputs are parallel, then H^{\\\\infty} is full rank and thus its least eigenvalue is strictly positive. \\n\\nb. Here we also want to discuss informally why we think H^{\\\\infty} is the fundamental quantity that determines the convergence rate. In Equation (7), the time derivative of the predictions u(t) is EQUAL to -H(t) (y-u(t)), i.e., the dynamics of the predictions are completely determined by H(t). Furthermore, in our analysis, we show that as m -> \\\\infty, H(t) -> H^{\\\\infty} for all t > 0. Therefore, the worst-case scenario is that at the beginning y-u(0) lies in the span of the eigenvector of H^{\\\\infty} that corresponds to the least eigenvalue of H^{\\\\infty}. In this case, y-u(t) will stay in this space, and by one-dimensional linear ODE theory, y-u(t) converges to 0 at rate exp(-\\\\lambda_0 t). Also, see Remark 3.1.\\n\\n\\n\\n2. Why fix the output layer and only train the first layer? The analysis will be much harder if one trains both the first and the output layer.\\nThis is the concern raised by Reviewer 2 and Reviewer 3. 
In our original version, we only analyzed the convergence of gradient descent optimizing the first layer because we believe this problem already demonstrates the main challenge: many previous works tried to understand the same problem, but none of them has a polynomial-time convergence guarantee towards zero training loss. Regarding the reviewers\\u2019 concern: \\n\\na. First, we disagree with Reviewer 3 that analyzing the case where only the first layer is trained is a trivial problem. For the same setting, there have been many previous attempts to answer this question, but these results often rely on strong assumptions on the labels and input distributions or do not explain why a randomly initialized first-order method can achieve zero training loss. Please see the second paragraph on page 2 and Section 3 for detailed discussions. \\n\\nb. Second, if we fix the first layer and only train the second layer, the learned function is different from the function learned by fixing the second layer and training the first layer. We have added this point in footnote 3.\\n\\nc. Lastly, in our revised version, we added a new theorem (cf. Theorem 3.3) which shows that using gradient flow to train both layers jointly, we still enjoy a linear convergence rate towards zero loss. To prove Theorem 3.3, we use the same arguments as for Theorem 3.1 with slightly more calculations. Therefore, we have shown that analyzing the case where both layers are trained is no harder than analyzing the case where only the first layer is trained. \\n\\n\\n3. Amount of over-parameterization. \\nOur current bound requires m = \\\\Omega(n^6). In this paper, to present the cleanest proof, we only use the simplest concentration inequalities (Hoeffding and Markov). As we discussed in the conclusion section, we do not think this bound is tight, and we believe that using more advanced techniques from probability theory, it can be tightened. \\n\\n\\n\\n4. Lemma 3.2. 
\\nWe are sorry about the confusion in the statement in our original version. We have changed the statement, and the new statement is independent of t. \\n\\n\\n\\n5. Extending to more layers. \\nWe have added more discussion in the conclusion section on how to extend our analysis to deeper neural networks, including a very concrete plan. In short, for deep neural networks, we can also consider the dynamics of the n predictions, and the dynamics are determined by the summation of H Gram matrices (where H is the number of layers). We conjecture that 1) in the initialization phase, as m -> \\\\infty, this summation converges to a fixed n by n matrix, and 2) as m -> \\\\infty, these matrices do not change by much over iterations. Thus, as long as the least eigenvalue of that fixed matrix is strictly positive and m is large enough, we can still have linear convergence for deep neural networks.\", \"summary_of_revisions\": \"1. We added a new theorem (Theorem 3.1) which shows that as long as no two inputs are parallel, the H^{\\\\infty} matrix is non-degenerate.\\n2. We added a new theorem (Theorem 3.3) on the convergence of gradient flow for jointly training both layers.\\n3. We added experimental results to verify our theoretical findings. \\n4. We added more discussion on how to extend our analysis to more layers and why H^{\\\\infty} is a fundamental quantity.\"}",
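Points 1a and 1b above can be made concrete numerically. For unit-norm inputs and w ~ N(0, I), the expectation defining H^{\infty} has a standard closed form via the angle between inputs (the probability that one Gaussian direction activates both inputs is (pi - theta)/(2*pi)); the sketch below checks that the least eigenvalue is positive for random non-parallel inputs and that the residual dynamics r = y - u, dr/dt = -H^{\infty} r, contract at rate at least exp(-lambda_0 t). The dataset, sizes, and Euler step are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 10

# Random unit-norm inputs; with probability 1, no two are parallel.
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.standard_normal(n)

# Closed form of H^inf_ij = E_w[x_i^T x_j 1{w^T x_i >= 0, w^T x_j >= 0}]
# for w ~ N(0, I): joint-activation probability is (pi - theta_ij)/(2*pi).
G = np.clip(X @ X.T, -1.0, 1.0)
H_inf = G * (np.pi - np.arccos(G)) / (2.0 * np.pi)

lam0 = np.linalg.eigvalsh(H_inf).min()  # strictly positive (Theorem 3.1)

# Euler discretization of the residual ODE dr/dt = -H^inf r: every step
# contracts ||r|| by at least (1 - h*lam0), matching the exp(-lam0 t) rate.
h, steps = 0.05, 400
r = y.copy()
for _ in range(steps):
    r = r - h * (H_inf @ r)
ratio = np.linalg.norm(r) / np.linalg.norm(y)
print(lam0, ratio)
```

The contraction bound ratio <= (1 - h*lam0)^steps holds exactly here because all eigenvalues of (I - h*H_inf) lie in (0, 1 - h*lam0] when h * trace(H_inf) < 1.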
"{\"title\": \"Review\", \"review\": \"Additional Review\\n\\nThis paper did NOT handle the non-differentiability and non-linearity very well. We can see this from the following three perspectives:\\n\\n1. Proof idea: the proof of this paper is a noisy version of the convergence analysis of a simple convex problem -- it treats the contribution of the non-linearity and non-differentiability as bounded noise.\\n2. The network size is of order n^6.\\n3. The network size requirement depends on \\\\lambda_0. \\n\\n1. Proof idea: The proof is essentially a noisy version of the convergence analysis of a linear regression problem provided in the Appendix (at the end of this updated review). The only difference between linear regression and the problem in this paper is the changing activation patterns due to the non-linearity of ReLU. However, this paper views the changing patterns as noise compared to the unchanging patterns (e.g., S_i vs. S_i^\\\\perp). The key trick is that if the actual trajectory radius (i.e., the largest deviation from the initial point) R\\u2019 is much smaller than the desired trajectory radius R (given by a formula), then along the trajectory, the contribution of the non-linearity is just O(n^2 R), which is small compared to the contribution of the linearity, i.e., -\\\\lambda_0 (shown in the proof on page 9). \\n\\nFollowing the above analysis, if the experiment shows that R\\u2019 is really small compared to R, then the approach of treating the non-linearity as noise is fine. However, this is not the case for the problem studied in the experiments (Sec 5, Fig 1). In Figure 1, we can easily see that the maximum distance R\\u2019 is O(1), which is far larger than R = c*\\\\lambda_0/n^2 = 10^-6 when n = 1k. Therefore, the proof idea used in this paper is fundamentally unable to explain the phenomenon shown in the experiment. In fact, to address this issue, the authors need to consider the significant contribution of the non-linearity, instead of just viewing it as noise. \\n\\n2. 
The network size is too large. This paper requires O(n^6) neurons, that is, 10^18 neurons for the n = 1000 samples used in the experiment. The theoretical trick to make R\\u2019 < R is to note that R\\u2019 can be bounded by O(1/sqrt{m}) while R is independent of m; thus picking a sufficiently large m can make R\\u2019 very small. In a word, the reason this paper requires so many neurons is the inability to properly address the non-linearity. \\n\\n3. I found the dependence of the network size on the least eigenvalue odd, although the authors claim this tool is elegant. After the authors added Thm 3.1 in the revision, I realized that the dependence on \\\\lambda_0 might come from the fact that the authors do NOT handle the issue of non-differentiability. \\n\\nLet us see a simple example. Assume I have a dataset with \\\\lambda_0 = 1. Now I add one more data point (x=0_d, y=1) to the dataset. After adding this sample, \\\\lambda_0 clearly becomes 0. It seems I am just adding a constant 1 to the loss function, and gradient descent should still converge to the global min with a linear convergence rate since the constant does NOT contribute to the gradient. However, it seems the proof does NOT work. This is because the \\u201cgradient\\u201d at the non-differentiable points is NOT well defined. Here is a simple example: h(w)=(y-ReLU(w*x))^2, where x = 0, y = 1. By the definition provided in this paper (Eq. 4), we can easily see that dh/dw = 1 for any w, even though h(w) = 1 for any w. This means that the constant can provide \\u201cfake\\u201d gradient information and make the maximum distance become infinite (R\\u2019=\\\\inf). Therefore, the whole proof collapses. In fact, changing the gradient definition from I{z>=0} to I{z>0} does not address the issue, as we can see from the example w=g(w)=Relu(w)-Relu(-w), which has a zero gradient at w=0. 
\\n\\nIn summary, the problem considered in this paper, where the size is m=O(n^6) and the maximum distance is R\\u2019=O(1/n^2), is too easy compared to most problems in practice, where m=\\\\Theta(n) and R\\u2019=O(1). To address the latter problem, we need a better definition of the subgradient and need to analyze the significant contribution of the non-linearity and non-differentiability, instead of just viewing them as noise. \\n\\n=================================Appendix===============================\\n\\nThe proof basically follows from the convergence analysis of the following linear regression problem (note that u_j is fixed):\\n \\\\min_{w_1,...,w_m}\\\\sum_{i=1}^{n}(f(x_i;w_1,...,w_m)-y_i)^2 = L(w_1,...,w_m)\\nwhere f(x;w_1,...,w_m)=1/\\\\sqrt{m}\\\\sum_{j=1}^{m} a_j*(w_j^T x)*1{u_j^T x>=0}\", \"gradient_descent_algorithm\": \"-Initialization:\\n-For each j=1,...,m: a_j ~ U({-1,1}), u_j~N(0, I)\\n-Fix a_1,...,a_m, u_1,...,u_m\\n-Update:\\n-For t = 1,...,T\\n w_j(t+1) = w_j(t) - \\\\eta* \\\\nabla_{w_j}L(w_1,...,w_m) for j=1,..., m.\\n\\nIn this problem, since the a_j and u_j are fixed, the model f is just a linear model w.r.t. the w_j\\u2019s, and the above problem is a simple linear regression problem. Therefore, it is not difficult to prove a linear convergence rate for gradient descent on this problem under some mild assumptions. Note that in this paper, u_j(t) = w_j(t), which is not fixed across iterations, i.e., the patterns can change. 
\\n\\n=========================\\nFirst, I apologize to the authors and ACs for the late review, since this paper deserves much more time to judge its quality.\", \"summary\": \"This paper proves that gradient descent/flow converges to the global optimum with zero training error under the following settings: (1) the neural network is a heavily over-parameterized ReLU network (i.e., requiring Omega(n^6) neurons); (2) the algorithm's update rule \\u201cignores\\u201d the non-differentiable points; (3) the parameters in the output layer (i.e., the a_i\\u2019s) are fixed; (4) the dataset has some non-degeneracy properties and comes from a unit ball. The proof relies on the fact that the Gram matrix is always positive definite on the converging trajectory.\", \"pros\": \"The proof is simple and seems to be correct. The paper is written clearly and easy to follow.\", \"cons\": \"The problem setting considered in this paper does not seem to be difficult enough. The difficulty of analyzing the landscape property of a ReLU network and proving the global convergence of gradient descent mainly lies in the following three perspectives, and this paper does not try to tackle any of them. \\n\\nFirst, it is very hard to characterize the landscape or the convergence trajectory at/near the non-differentiable points, and this paper fails to touch on it. The parameter space is separated into several regions by hyperplanes, and the loss function is differentiable in the interior of each region and non-differentiable on the boundary. I believe the very first question the authors need to answer is whether there are critical points on the boundary and why sub-gradient descent escapes from any of these points. However, in this paper, the authors avoid this problem by adopting an update rule used in practice, and this rule does not use the sub-gradient at the non-differentiable points. 
Thus, it is totally unclear to me whether this global convergence result comes from the fact that this update rule can generally avoid the non-differentiable points on the boundary, or the fact that the landscape is so nice that there are no critical points on the boundary, or the fact that all points on the convergence trajectory are differentiable only in this particular problem.\\n\\nSecond, the problem is much easier if the loss is not jointly optimized over the parameters in the first and second layers. Having the parameters in one layer fixed does not seem to be a big problem at first glance, but I realized it indeed makes the problem much easier, which can be seen in the following example. If we randomly sample the weight vectors w_i from N(0, I) and only optimize over the parameters in the second layer, then it is straightforward to show the following result.\", \"result\": \"If \\\\lambda_\\\\min(H^\\\\inf)>0 and m=\\\\Omega(n\\\\log n), then with high probability, the loss function L is strongly convex with respect to a=(a_1,\\u2026, a_m) and the loss function is zero at the global minimum.\\n\\nThe above result shows that if we fix the parameters in the first layer and only optimize the parameters in the second layer, it is easy to prove global convergence with a linear convergence rate. In fact, this result does not require the samples to come from a unit ball, and the network size is only slightly over-parameterized. Therefore, if we are allowed to fix the parameters in some layer, how is the result presented in this paper fundamentally different from the above result? \\n\\nThe authors may say that the loss is not convex with respect to the weights in the first layer even if the second layer is fixed. However, when the second layer is fixed, the loss function is smooth and convex in each parameter region, and some recent works have shown that in this case, the loss function is a weakly global function. 
This means that the loss function is similar to a convex function except for those plateaus, and this further indicates that if the initial point is chosen in a strictly convex basin, gradient descent is able to converge to a global min. However, the problem becomes far more difficult if the loss is jointly optimized over all parameters in the first and second layers. This can easily be seen, since in each parameter region the loss is no longer a convex function, and this may lead to some higher-order saddle points that gradient descent cannot provably escape. Furthermore, the critical points on the boundary can be much more difficult to characterize for this joint optimization problem. \\n\\n\\nThird, the dataset considered in this paper does not seem to reflect a fundamental pattern; it seems more like a technical condition required by the proof. It is easy to see that a linearly separable dataset does not necessarily satisfy the conditions that 1) the Gram matrix is positive definite and that 2) the samples come from the surface of a unit ball. Therefore, I do not understand the reason why we need to analyze this setting. Clearly, in practice, the data samples are unlikely to be sampled from the surface of a ball, and it is totally unclear to me why the Gram matrix is necessarily positive definite. I understand that some technical assumptions are needed in a theoretical work, but I would like to see more discussion of the dataset, e.g., some necessary conditions on the dataset such that global convergence is possible.\\n\\n\\nLast, I understand that the over-parameterization assumption is needed. In fact, I expect the network size to be of the order Omega(n*polylog(n)). I am wondering whether Omega(n^6) is a necessary condition or whether there exists a case such that Theta(n^6) is required. \\n\\n\\nAbove all, I believe this paper is a half-baked paper with some interesting explorations. 
In summary, it cannot deal with non-differentiable points, which is considered a major difficulty for analyzing ReLU. In addition, it makes an unjustified assumption on a certain matrix, it requires too many neurons, and it fixes the second layer. With so many strong assumptions, and compared to related works like [1], Mei et al., Bach and ..., its contribution is rather limited.\\n\\n[1] https://arxiv.org/abs/1702.05777\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
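The reviewer's appendix baseline (fixed patterns u_j, trainable w_j) can be simulated directly: since the model is linear in the w_j's, gradient descent contracts the residual at a linear rate, which is exactly the "noiseless" version of the paper's analysis. A rough sketch, with all sizes, the step size, and the iteration count chosen for illustration (not from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10, 20, 200

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.standard_normal(n)

a = rng.choice([-1.0, 1.0], size=m)      # fixed output weights a_j
U = rng.standard_normal((m, d))          # fixed pattern weights u_j
W = rng.standard_normal((m, d))          # trainable weights w_j

pattern = (X @ U.T >= 0).astype(float)   # n x m frozen ReLU patterns

def predict(W):
    # f(x_i) = (1/sqrt(m)) sum_j a_j * (w_j^T x_i) * 1{u_j^T x_i >= 0}
    return ((X @ W.T) * pattern) @ a / np.sqrt(m)

eta, steps = 0.3, 3000
loss0 = 0.5 * np.sum((predict(W) - y) ** 2)
for _ in range(steps):
    r = predict(W) - y
    # exact gradient of 0.5*||r||^2 (the model is linear in W)
    grad = ((pattern * r[:, None] * a[None, :]).T @ X) / np.sqrt(m)
    W -= eta * grad
loss = 0.5 * np.sum((predict(W) - y) ** 2)
print(loss0, loss)
```

Here the residual evolves as r <- (I - eta*H) r with H = (X X^T) elementwise-times the pattern-overlap matrix, so convergence is linear whenever H is positive definite and eta is below 2/lambda_max.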
"{\"title\": \"Interesting paper studying gradient descent in over-parameterized simple NNs\", \"review\": \"This paper studies one-hidden-layer neural networks with square loss, where they show that in the over-parameterized setting, random initialization + gradient descent gets to zero loss. The results depend on properties of the data matrix, but not the output values.\\n\\nThe high-level idea of the proof is quite different from recent papers, and it would be quite interesting to see how powerful this is for deep neural nets, and whether any insights could help practitioners in the future.\", \"some_discussions_regarding_the_results\": \"I would suggest the authors be specific about \\u2018with high probability\\u2019, whether it is 1-c or 1-n^{-c}. The proof step using Markov\\u2019s inequality gives 1-c probability, which is stated as \\u2018with high probability\\u2019. What about the other \\u2018high probability\\u2019 statements?\\n\\nIn the statements of Theorems 3.1 and 4.1, please add \\u2018i.i.d.\\u2019 (independence) for generating the w_r\\u2019s.\\n\\nThe current statement of Lemma 3.2 is confusing. The authors state that given t, w.h.p. (let\\u2019s say 0.9 for now) over initialization, the minimum eigenvalue is lower bounded. This does not imply, for example, that there exists an initialization such that for 20 different values of t, the minimum eigenvalue is lower bounded. The proof uses Markov\\u2019s inequality for a single t. Therefore, I am slightly worried about its correctness. I hope the authors could address my concern. 
\\n\\nAlso, in the proof of Lemma 3.2 (just to improve readability), I would suggest that the authors make it clear that the expectation is taken over the initialization of the weights.\", \"some_typos\": \"\\u2018converges\\u2019 -> \\u2018converges to\\u2019 in the abstract\\n\\u2018close\\u2019 -> \\u2018close to\\u2019 on page 5\\n\\u2018a crucial\\u2019 -> \\u2018a crucial role\\u2019 on page 5\\nIn the proof of Lemma 3.2, x_0 should be x_i\\nwhether using boldface for H_{ij} should be consistent\\n'The next lemma shows we show' on page 6\\n'Markov inequality' -> \\u2018Markov\\u2019s inequality\\u2019\\n\\u2018a fixed a neural network architecture\\u2019 on page 8\\n\\nIt is good to see other comments and discussions on this paper. I believe the authors will make a revision, and I would be happy to see the new version of the paper and re-evaluate if some of my comments are not correct.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting result on optimization of two-layer network with ReLU activations\", \"review\": \"This work considers optimizing a two-layer over-parameterized ReLU network with the squared loss, given a dataset with arbitrary labels. It is shown that for a sufficiently large number of hidden neurons (polynomial in the number of samples), gradient descent converges to a global minimum at a linear convergence rate. The proof idea is to show that a certain Gram matrix of the data, which also depends on the weights, has a lower-bounded minimum eigenvalue throughout the optimization process. Then, it is shown that this property implies convergence of gradient descent.\\n\\nThis work is very interesting. Proving convergence of gradient descent for over-parameterized networks with ReLU activations and data with arbitrary labels is a major challenge. It is surprising that the authors found a relatively concise proof in the case of two-layer networks. The insight on the connection between the spectral properties of the Gram matrix and the convergence of gradient descent is nice and seems to be a very promising technique for future work. One weakness of the result is the extremely large number of hidden neurons required to guarantee convergence.\\n\\nThe paper is clearly written in most parts. The statement of Lemma 3.2 and its application appear to be incorrect, as mentioned in the comments. I am convinced by the authors' response and the current proof that it can be fixed by defining an event which is independent of t. Moreover, I think it would be nice to include experiments that corroborate the theoretical findings. 
Specifically, it would be interesting to see whether in practice most of the patterns of the ReLUs do not change, or whether there is some other phenomenon at play.\\n\\nAs mentioned in the comments, it would be good to add a discussion of the assumption of non-degeneracy of the H^{infty} matrix and include a proof (or an exact reference) which shows under which conditions the minimum eigenvalue is positive.\\n\\n-------------Revision--------------\\n\\nI disagree with most of the points that AnonReviewer3 raised (e.g., that fixing the second layer makes the problem easy, or that the contribution is limited). I do agree that the main weakness is the number of neurons. However, I think that the result is significant nonetheless. I did not change my original score.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
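The reviewer's question about activation-pattern stability is easy to probe empirically: train the actual two-layer model (patterns now come from the trainable W itself) and count the fraction of (sample, neuron) activation signs that flip from initialization. This is only an illustrative simulation with made-up sizes, seed, and step size, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10, 10, 20000  # heavily over-parameterized: m >> n

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.standard_normal(n)

a = rng.choice([-1.0, 1.0], size=m)  # fixed output weights
W = rng.standard_normal((m, d))      # trainable first layer
init_pattern = (X @ W.T >= 0)

def forward(W):
    return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

eta = 0.1
for _ in range(300):
    r = forward(W) - y
    act = (X @ W.T >= 0).astype(float)
    grad = ((act * r[:, None] * a[None, :]).T @ X) / np.sqrt(m)
    W -= eta * grad

flipped = np.mean((X @ W.T >= 0) != init_pattern)
print(flipped)  # fraction of activation patterns that changed
```

With m this large, each w_r moves only O(1/sqrt(m)) from its initialization, so one would expect `flipped` to be a small fraction; shrinking m should make it grow.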
"{\"title\": \"An elegant proof on convergence of gradient descent for over-parameterized two-layer ReLU neural networks\", \"review\": \"This paper studies convergence of gradient descent on a two-layer fully connected ReLU network with binary output weights and square loss. The main result is that if the number of hidden units is polynomially large in the number of training samples, then under suitable random initialization conditions, and given that the output weights are fixed, gradient descent necessarily converges to zero training loss.\", \"pros\": \"The paper is presented clearly enough, but I still urge the authors to carefully check for typos and grammatical mistakes as they revise the paper. As far as I have checked, the proofs are correct. The analysis is quite simple and elegant. This is one thing that I really like about this paper compared to previous work.\", \"cons\": \"The current setting and conditions for the main result to hold are quite limited. If one has a polynomially large number of neurons (i.e., on the order of n^6, where n is the number of training samples) as stated in the paper, then the weights of the hidden layer can easily be chosen so that the outputs of all training samples become linearly independent in the hidden layer (see e.g. [1] for the construction, which requires only n neurons even with weight sharing), and thus fixing these weights and optimizing for the output weights would lead directly to a convex problem with the same theoretical guarantee. At this point, it would be good to explain why this paper focuses on the opposite setting, namely fixing the output weights and learning just the hidden layer weights, because it seems that this just makes the problem more non-trivial compared to the previous case while yielding almost the same results. Either way, this is not how practical neural networks are trained, as only a subset of the weights is optimized. 
Thus it's hard to conclude from here why the commonly used GD w.r.t. all variables converges to zero loss as stated in the abstract.\\n\\nThe condition on the Gram matrix H_infty in Theorem 3.1 seems to be critical. I would like to see the proof that this condition can be fulfilled under certain conditions on the training data.\\n\\nIn Lemma 3.1, it seems that \\\"log^2(n/delta)\\\" should be \\\"log(n^2/delta)\\\"? \\n\\nDespite the above limitations, I think that the analysis in this paper is still interesting (mainly due to its simplicity) from a theoretical perspective. Given the difficulty of the problem, I'm happy to vote for its acceptance.\\n\\n[1] Optimization landscape and expressivity of deep CNNs\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
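The reviewer's convex alternative (freeze the hidden layer, solve for the output weights) can be demonstrated in a few lines: with m >= n random hidden units, the n x m ReLU feature matrix generically has full row rank, so ordinary least squares on the output weights already attains zero training loss for arbitrary labels. A minimal sketch with illustrative sizes (not the construction of [1], which uses only n neurons):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 10, 100  # m >= n hidden units

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.standard_normal(n)           # arbitrary labels

W = rng.standard_normal((m, d))      # hidden weights, frozen after init
Phi = np.maximum(X @ W.T, 0.0)       # n x m ReLU feature matrix

# Generically Phi has full row rank n, so the convex least-squares problem
# over the output weights attains (numerically) zero training loss.
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
residual = np.linalg.norm(Phi @ a - y)
print(residual)
```

This is exactly the "trivial" direction both sides agree on; the paper's contribution concerns the opposite, non-convex direction where the hidden weights are trained.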
"{\"comment\": \"Thanks for your reply, but sorry, I couldn't see how it helps to answer my question. Looking forward to the revision though.\", \"title\": \"Reply\"}",
"{\"comment\": \"I found the paper interesting to read (although I did not try to check the mathematical correctness of the results).\", \"one_point_could_be_improved_though\": \"several times the text mentions that the main assumption is that \\\"data is non-degenerate\\\" without formally defining what is meant by this. The data matrix is not square, so the traditional definition of non-degeneracy does not apply here.\\n\\nWhen reading the theorems, I believe that the informal \\\"non-degenerate data\\\" assumption of the main text corresponds to the double assumption that each input vector has unit norm and, more importantly, that the H_inf kernel matrix is full-rank (non-degenerate).\\n\\nIn practice, this full-rank H_inf kernel assumption is typically not met if there exist duplicated samples in the training set (if there are duplicated samples with different labels, it's not possible to have zero training loss for any model).\\n\\nI just read in your reply (https://openreview.net/forum?id=S1eK3i09YQ&noteId=SJeNOPOU9Q) that you can prove that this assumption is met as soon as there are no two parallel samples in the training set. But I assume that this is not necessarily a problem if the labels of such parallel samples are the same. Furthermore, since you also assume that all x_i have unit norm, a pair of parallel samples is actually a pair of duplicated samples.\\n\\nSo to conclude, I would suggest editing your text to change the \\\"non-degenerate data\\\" phrase to something more specific (such as \\\"record-wise normed data without duplicated records\\\" or alternatively \\\"non-degenerate extended feature matrix\\\") so as to avoid any confusion.\", \"title\": \"non-degenerate data\"}",
"{\"comment\": \"Interesting numerical study. I did not know about the analytical relationship between H and the data Gram matrix. I did a more brute-force numerical study of H on a non-random toy dataset (8x8-pixel grayscale digits, d=64, n~=1797) and found lambda_0 > 1.3e-2, which is in line with your random-data study: https://gist.github.com/ogrisel/1b430b2bf1e83173f6061676c62b9f18\", \"title\": \"Another numerical study\"}",
"{\"title\": \"Clarification\", \"comment\": \"Thanks for your question.\\n\\nWe proved that with high probability over the initialization, for any weight matrix $W(t)$ such that $w_r(t)$ is close to $w_r(0)$ for all $r \\\\in [m]$, the induced Gram matrix $H(t)$ has a lower-bounded least eigenvalue. Here $t$ is just an index relating the weight matrix and the induced Gram matrix. Note there is only one event, and it is independent of $t$. \\n\\nWe are sorry about the confusion, and we will modify the statement of the lemma to make it clearer in the revised version.\"}",
"{\"comment\": \"Thanks for the inspiring work. I found something confusing about the probability part though.\\n\\nDenote by B(t) the event that at time/iteration t, ||w_r(t)-w_r(0)||_2 \\\\leq R for all r. Denote by C(t) the event that at time/iteration t, the smallest eigenvalue of H(t) is at least \\\\lambda_0/2.\\n\\nThen Lemma 3.2 states that the conditional probability Prob[C(k)|B(k)] is large (> 1-c), where c ~ R*n^2/\\\\lambda_0, for a fixed k. However, it is unclear from the paper whether C(k)|B(k) implies C(k+1)|B(k+1). It is possible that Prob[\\\\cap_{k=1}^N (C(k)|B(k))] is not high at all, and could even be zero when N approaches infinity.\\n\\nIn the last few lines of proving the induction hypothesis on page 10, Lemma 3.2 is used to argue that C(k)|B(k) holds with high probability over the initialization. But if we review the WHOLE process of the proof by induction, for k=1,2,... to infinity, we assume different events hold (assuming C(k)|B(k) when proving case k+1), and their relationships are unclear. Thus the \\\"with high probability\\\" statement does not seem solid to me. No lower bound on Prob[\\\\cap_{k=1}^\\\\infty (C(k)|B(k))] is proved.\\n\\nI would really appreciate your answer to this!\", \"title\": \"Lemma 3.2 and w.h.p. in proving the induction hypothesis\"}",
"{\"comment\": \"I see now; Lemma 3.2 says that the expected number of total changes is small, not zero. Whoops; thanks.\", \"title\": \"Change in activation patterns\"}",
"{\"title\": \"Thanks for your comments and the numerical study!\", \"comment\": \"Thanks for your comments and the numerical study! They are very inspiring!\", \"for_the_analysis\": \"Your intuition is basically correct. We want to clarify that our current proof cannot show that for continuous-time gradient flow there is no activation pattern change. What we can show is that the number of pattern changes is small and only incurs a small perturbation of H. See Lemma 3.2 and its proof.\", \"extension_to_deep_neural_networks\": \"Yes, it would be very interesting to investigate empirically whether there are only a small number of pattern changes when training deeper models.\", \"on_lambda_0\": \"Thanks for your numerical study! We agree it would be very interesting to obtain some bounds on lambda_0 under certain distributional assumptions.\"}",
"{\"title\": \"Thanks for your comments and questions!\", \"comment\": \"Thanks for your comments and questions!\\n\\nAs stated in our paper, the results in the two papers you mentioned do not imply why randomly initialized gradient descent can achieve 0 training loss with arbitrary labels. Furthermore, there are many subtle differences in the assumptions. We will definitely expand our discussions on these two papers in the revised version.\", \"dependency_on_d_and_n\": \"our bound depends on lambda_0, which is a dataset-dependent quantity. In general, this quantity is related to d, n, and the input distribution.\", \"on_generalization\": \"in general, a population risk bound can be obtained only if there are additional assumptions on the input distribution and labels. It is an interesting direction to extend our analysis to incorporate structures in the input distributions and labels.\", \"why_using_uniform_random_initialization_for_the_second_layer\": \"There are two purposes for using this initialization scheme.\\nFirst, as already explained by Dougal, $a_r^2 = 1$ makes the H matrix independent of a_r and, in turn, makes our calculation much easier.\\nSecond, this initialization makes ||y-u(0)||_2 = O(\\\\sqrt{n}). If the output layer weights are all ones, then u(0) is of order \\\\sqrt{m}, which makes ||y-u(0)||_2 of order \\\\sqrt{mn}. In this case, R' cannot be smaller than R.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for bringing up concerns from others! We are happy to answer these concerns. In fact, points 3 and 4 already resolved some of the issues.\", \"to_point_1\": \"Comparison with the universal approximation theorem.\", \"to_point_2\": \"Is this a convex problem?\", \"response\": \"This has been addressed in our previous reply.\"}",
"{\"title\": \"Correct!\", \"comment\": \"Yes that's the correct formula. Thanks!\"}",
"{\"comment\": \"Thank you Dougal. The assumption on a_r is stated in Theorem 3.1, that a_r \\\\in {-1, 1} (and hence a_r^2=1). This is a perfectly fine assumption for ReLU given its homogeneity. There is also randomization: a_r ~ Unif({-1, 1}). Somehow the role of randomness of a_r is not transparent in the proof. But I suspect it should be important: suppose that I use a_r = 1 for all r (hence no randomization), and since ReLU is non-negative, with strictly negative labels y, there is no way that the network can find y...\\n\\nP.S.: I somehow missed the second part of Dougal's reply, which pointed to the same concern.\", \"title\": \"Reply\"}",
"{\"comment\": \"Not an author and haven't super-carefully checked the proof, but the derivation of (5), at the start of Proof of Theorem 3.1, assumes that a_r^2 = 1. Otherwise H would contain an a_r^2 term multiplying the indicator; if you used a different distribution for a, then everything to do with H is going to depend on that too. That could make things a lot messier....\\n\\nBut that doesn't prevent you from choosing a_r as some weighted distribution on +-1. In particular, you could pick all of the a_r = 1. The only place I see that affecting the continuous-time proof is the Markov's inequality bound for ||y - u(0)|| at the end, which uses E[a_r] = 0. But if you had some other high-probability bound on ||y - u(0)||, which you could definitely get just based on the distribution of W, it seems that the rest of the proof carries through with possibly a bigger m. But that can't be right \\u2013 if all the a_r = 1, f can't output negative values, and nothing else stops any of the y from being negative.... Authors, what am I missing here?\", \"title\": \"Comment on output weights\"}",
"{\"comment\": \"We discussed this in our reading group today, and I'd like to relay some of our thoughts to other readers.\", \"the_paper_randomly_initializes_an_extremely_overparameterized_network\": \"m = Omega(n^6 / lambda_0^4), where lambda_0's dependence on n will vary with the dataset, but presumably it decays with n, making the overall rate for m worse than n^6. Then, here's another way to think about the results of the paper; with high probability:\\n\\n1. There is a global optimum without switching any of the activation patterns, i.e. keeping sign(w_r^T x_i) the same for all i, r. (This isn't directly shown as a separate step in the paper, but it's implied by Theorem 3.1.)\\n\\n2. Following a continuous-time gradient flow leads you to that global optimum, following a path that \\\"looks\\\" strongly convex as you follow it (so you get linear convergence), without ever switching any of the sign(w_r^T x_i), with high probability.\\n\\n3. Discrete-time gradient descent, for a small enough step size O(lambda_0 / n^2), does basically the same thing. It's allowed to switch some of the activation patterns, but only a few of them, S_i (or maybe S_i^\\\\perp, depending on if you go by the definition you give or the way you then use it...). Those ones don't affect the loss too much, and we still have convergence.\\n\\n\\nGiven (1), (2) is maybe not super-surprising: the set of W with the same activation patterns is the intersection of m n linear constraints, and within that set, the objective function is a convex QP. Probably lambda_min(H(0)) is related to lambda_min of the quadratic term in the QP objective, though I couldn't immediately show that. 
Of course, this doesn't show a result as strong as (2)/(3) without additionally showing you don't happen to break the constraints in following the gradient flow, and it's circular anyway in that it's not obvious how to show (1) other than through the proof via (2) given here.\\n\\n\\nThe applicability of this approach to deeper networks, then, rests on how realistic the extreme overparameterization here is. Is it still the case that you can avoid switching too many activation patterns in training a deeper network? It would be interesting to track that empirically while training a practical deep net. If switching activation patterns is indeed rare, then this type of approach might be very fruitful for studying deeper nets. Even if not, though, this is an elegant solution to the 1-layer setting.\\n\\n\\nOut of curiosity, I also tried to check numerically what the dependence of lambda_0 is on n for a uniform distribution of inputs. It seems like lambda_0 is about n^{-2} for d=2, n^{-1/2} for d = 5, and n^{-1/4} for d = 10 - https://gist.github.com/dougalsutherland/cc7d8b6d740c6c07d3c6081cfb42d191 . If that's correct, then in 2d the required m is Omega(n^14) (!) while in 10d it's only Omega(n^7), and presumably in very high dimensions it becomes omega(n^6). It might be interesting to try to actually bound lambda_0 in terms of n and d to see if these simulations are accurate. (It might very well be that lambda_0 has a different rate for very large n, with \\\"very large\\\" depending on d; I only ran up to n about 3,000 because I only wanted to run for a few minutes on my desktop.)\", \"title\": \"some thoughts + a bit of a numerical study of lambda_0\"}",
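The lambda_0 behavior probed in the linked gists can be reproduced in a few lines of numpy. A minimal sketch (my own, not the gists above), using the closed-form limiting ReLU Gram matrix for unit-norm inputs, H_ij = x_i.x_j * (pi - arccos(x_i.x_j)) / (2*pi):

```python
import numpy as np

def h_infinity(X):
    """Limiting ReLU Gram matrix for unit-norm rows of X:
    H_ij = x_i.x_j * (pi - arccos(x_i.x_j)) / (2*pi)."""
    G = np.clip(X @ X.T, -1.0, 1.0)                # pairwise cosines
    return G * (np.pi - np.arccos(G)) / (2 * np.pi)

def lambda_0(n, d, seed=0):
    """Smallest eigenvalue of H^infty for n points uniform on the sphere S^{d-1}."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return np.linalg.eigvalsh(h_infinity(X))[0]    # eigvalsh sorts ascending

# lambda_0 is strictly positive whenever no two inputs are parallel, but it
# shrinks as n grows (faster in low dimension), matching the study above
for d in (2, 10):
    print("d =", d, [float(lambda_0(n, d)) for n in (50, 200, 800)])
```

Since the n-point Gram matrix is a principal submatrix of the (n+k)-point one (for nested samples), eigenvalue interlacing forces lambda_0 to be non-increasing in n.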
"{\"comment\": \"I would like to give a comment on the relation of this paper and certain prior works. The paper by Chizat and Bach proves continuous-time gradient flow can converge to optimal population loss, in the limit of infinite number of neurons, under certain conditions (which include sigmoid activation, and ReLU at a formal level). Mei et al. proves that noisy SGD can optimize to near optimal population loss. In fact, Mei et al. provides a quantitative statement, that the continuous-time flow and the discrete-time one are close already when the number of neurons >> the dimension of the input (i.e. m>>d as in the notation of this paper). As such, these works already suggest that first-order methods can work well on neural nets with a single hidden layer (in terms of population loss), requiring m>>d.\\n\\nThese two works are briefly mentioned in the paper, but I think it is important to clarify the distinction. The paper, whose analytical approach aligns with many other papers, proves that gradient descent can optimize to optimal empirical loss, for the specific case of ReLU activation. The analysis is nice in its simplicity (and length!), and so I believe many will try to study this type of analysis. The key finding is that when m>>poly(n) (where n is the number of training samples) and when n is large, many things remain close to initialization at all iterations. As such, random initialization works to our advantage.\\n\\nInterestingly the aforementioned two works require m>>d, whereas here m>>poly(n). There is no contradiction since the former analyzes SGD, and this paper analyzes (full-batch) gradient descent. Yet this difference raises a question of whether there is an analysis to unify the picture. 
There is also a question of generalization performance, which is resolved in the aforementioned two works but not in this paper.\\n\\nI must admit that I have not verified the proof, so it remains to be seen whether the analysis is correct.\\n\\nAs a clarifying question, is it crucial that the output weight is initialized uniformly at random? The role of random initialization for the output weight is not transparent at first glance.\", \"title\": \"comments on relation with prior works\"}",
"{\"comment\": \"Interesting results! It seems to me that the definition of H^{\\\\infty}_{ij} in your main theorems could be simplified as (x_i^T x_j) * arccos(- x_i^T x_j) / (2 * pi) -- am I correct?\", \"title\": \"possible simplification of H^{\\\\infty}\"}",
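The proposed simplification checks out algebraically, since arccos(-z) = pi - arccos(z), so it equals the usual x_i.x_j * (pi - theta_ij) / (2*pi) form. A quick Monte Carlo sanity check of my own (illustrative random inputs, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# two unit vectors in R^5 (illustrative inputs only)
x_i = rng.standard_normal(5); x_i /= np.linalg.norm(x_i)
x_j = rng.standard_normal(5); x_j /= np.linalg.norm(x_j)
z = float(x_i @ x_j)

# proposed simplification; arccos(-z) = pi - arccos(z), so this is the usual
# H_ij = x_i.x_j * (pi - theta_ij) / (2*pi) in disguise
closed_form = z * np.arccos(-z) / (2 * np.pi)

# Monte Carlo over w ~ N(0, I):  H_ij = E[ x_i.x_j * 1{w.x_i >= 0, w.x_j >= 0} ]
W = rng.standard_normal((500_000, 5))
mc = z * np.mean((W @ x_i >= 0) & (W @ x_j >= 0))

print(closed_form, mc)  # the two estimates agree closely
```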
"{\"comment\": \"Thanks for the author(s)' reply.\\n\\nI've just seen some discussions about this paper on another website, and here I would like to see the official reply from the author(s) w.r.t. the following interesting comments, which also overlap with some concerns of mine. (I simply repost those discussions)\\n\\n1. \\\"One of the mystery in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks.\\\"\\nHardly a mystery: Cybenko's paper back in 1989 pointed out that a NN with one hidden layer can approximate any continuous high-dimensional surface without a higher-degree smoothness assumption or convexity; optimization methods like gradient descent are but one of the methods that can do the job.\\n\\n2.\\\"For an m hidden node shallow neural network with ReLU activation and n training data, we show as long as m is large enough and the data is non-degenerate, randomly initialized gradient descent converges a globally optimal solution with a linear convergence rate for the quadratic loss function.\\\"\\nAnother falsehood: the assumption of a surface with positive eigenvalues, i.e., non-degenerate (in Theorems 3.1 and 4.1, for example), implies convexity of the solution landscape. When the data is non-convex, there is no guarantee nor proof that gradient descent or other more powerful optimization methods can always find the global optimum. 
Non-convex problems pose challenges similar to NP-hard problems: solutions get stuck in local optima, and there is no way in general to convert a locally best solution into a global optimum.\\\"\\n\\n3.\\\"Cybenko's paper back in 1989 pointed out that a NN with one hidden layer can approximate any continuous high-dimensional surface without a higher-degree smoothness assumption or convexity.\\\"\\nCybenko's paper only says that, for a given continuous function and epsilon, there exists a one-hidden-layer sigmoidal NN with less than epsilon maximum error. It says nothing about the learnability of this NN (nor even the number of neurons in it).\\n\\n4. \\\"the assumption of a surface with positive eigenvalues, i.e., non-degenerate (in Theorems 3.1 and 4.1, for example), implies convexity of the solution landscape.\\\"\\nThe matrix H\\u221e is not the \\\"solution landscape\\\". It's a function of the data only, not the parameters. It is not the Hessian of the loss function, as you seem to think.\\n\\n\\n5.\\\"The key assumption is the least eigenvalue of the matrix H\\u221e is strictly positive. Interestingly, various properties of this H\\u221e matrix has been thoroughly studied in previous work [Xie et al., 2017, Tsuchida et al., 2017]. In general, unless the data is degenerate, the smallest eigenvalue of H\\u221e is strictly positive.\\\"\\nFor example, Xie's paper [1] focuses mostly on spherical data; from Section 3, Problem setting and preliminaries:\\n\\n6.\\\"We will focus on a special class of data distributions where the input x \\u2208 Rd is drawn uniformly from the unit sphere, and assume that |y| \\u2264 Y . We consider the following hypothesis class.\\\"\\nMoreover, it also stated:\\n\\\"Typically, gradient descent over L(f) is used to learn all the parameters in f, and a solution with small gradient is returned at the end. 
However, adjusting the bases {wk} leads to a non-convex optimization problem, and there is no theoretical guarantee that gradient descent can find global optima.\\\"\\n\\nIt said nothing about how common or rare it is for a given data set to be convex, as the current paper claimed. We suspect it is not common, in general. Xie mentioned nothing about such data being degenerate.\\n\\nNow to Cybenko's paper:\\n\\\"Cybenko's paper only says that, for a given continuous function and epsilon, there exists a one-hidden-layer sigmoidal NN with less than epsilon maximum error. It says nothing about the learnability of this NN (nor even the number of neurons in it).\\\"\\n\\nOnce we know the objective function and the expression of the functional form, the number of hidden-layer neurons is a matter of engineering, as long as we know that \\\"there exists a one-hidden-layer sigmoidal NN with less than epsilon maximum error\\\"; that is the learnability of a one-hidden-layer NN.\", \"references\": \"[1]. Xie, Bo, Yingyu Liang, and Le Song. \\\"Diverse neural network learns true target functions.\\\" arXiv preprint arXiv:1611.03131 (2016).\", \"title\": \"Thanks for Your Reply\"}",
"{\"title\": \"The Gram matrix is not degenerate. The analysis is simple and novel.\", \"comment\": \"We thank you for your comments and we are happy to address your concerns.\\n\\n1) Adding a linear combination of existing features to the data set leads to a degenerate Gram matrix?\\nThis is wrong. Each entry in our Gram matrix is not an inner product between two features, but the result of a non-linear kernel acting on two features. Please check our definition of the Gram matrix (H^{\\\\infty}) more carefully (cf. Theorem 3).\\n\\nFor data augmentation with a linear combination of other samples, here we provide a counterexample. \\nWe have two features (1,0), (0,1) and we add a linear combination (1/\\\\sqrt{2},1/\\\\sqrt{2}).\\nThe Gram matrix is \\n[0.5000 0 0.2652; \\n0 0.5000 0.2652; \\n0.2652 0.2652 0.5000 ] \\nwhich is not degenerate.\\nIn general, the Gram matrix becomes degenerate after adding a linear combination of existing features only if the activation is linear. \\n\\nIn fact, we can easily prove that as long as no two features are parallel, H^\\\\infty is always non-degenerate. We will add the proof in the revised version.\\n\\n2) Is this a trivial paper?\\nSimplicity is not equivalent to triviality.\", \"our_result_is_simple\": \"we just prove that randomly initialized gradient descent achieves zero training loss for over-parameterized neural networks with a linear convergence rate. However, why randomly initialized first order methods can fit all training data is one of the unsolved open problems in neural network research.\\n\\nFor the same setting (training two-layer ReLU activated neural networks), there are many previous attempts to answer this question, but these results often rely upon strong assumptions on the labels and input distributions or do not imply why randomly initialized first order methods can achieve zero training loss. 
Please see the second paragraph on Page 2 and Section 3 for detailed discussions.\\n\\nFor technical contributions, we do agree our analysis is simple, but we think this is actually an advantage because it will be easier to generalize simple arguments than involved ones. Our proof does not require heavy calculations and reveals the intrinsic properties of over-parameterized neural networks and random initialization schemes. Please see the Analysis Technique Overview paragraph on page 2.\\n\\nCompared with [2], except for using the same property that the patterns do not change much during training, our analysis is completely different from theirs and is significantly simpler and more transparent. We have devoted a whole paragraph in Section 3 to discussing the differences with [2].\\n\\n3) Experiments\\nWe would like to emphasize that this is a pure theory paper and the theorem we proved (randomly initialized gradient descent achieves zero training loss) is a well-known experimental fact in training neural networks. Nevertheless, we are happy to provide some experimental results in the revised version.\"}",
"{\"comment\": \"The analysis in this paper seems technically sound. However, I have questions w.r.t. this paper: is there any experimental result to support the analysis in this paper? The results are quite simple, and I wish the author(s) could add some experimental validations, even a toy one, to support the theoretical results.\\n\\nBesides, the assumption on the least eigenvalue of the Gram matrix seems somewhat unreasonable, because if we use some data augmentation tricks, such as mix-up [1] (i.e. if there is a training sample that is a linear combination of other samples), the assumption apparently does not hold, in the sense that the least eigenvalue of the Gram matrix will become zero. However, adding one more such sample seems to have little influence on the training procedure.\\n\\nAnother concern is that the analysis and conclusions in this paper are somewhat trivial. There are not many technical contributions in this paper; the technical part closely follows [2].\\n\\n\\n[1] Zhang, Hongyi, et al. \\\"mixup: Beyond Empirical Risk Minimization.\\\" (2018).\\n[2] Li, Yuanzhi, and Yingyu Liang. \\\"Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data.\\\" arXiv preprint arXiv:1808.01204 (2018).\", \"title\": \"Lack of Experimental Results and Unrealistic Assumptions?\"}",
]
} |
|
B1xFhiC9Y7 | Domain Adaptation for Structured Output via Disentangled Patch Representations | [
"Yi-Hsuan Tsai",
"Kihyuk Sohn",
"Samuel Schulter",
"Manmohan Chandraker"
] | Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn strong supervised models like convolutional neural networks. However, these models trained on one data domain may not generalize well to other domains unequipped with annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. To this end, we propose to learn discriminative feature representations of patches based on label histograms in the source domain, through the construction of a disentangled space. With such representations as guidance, we then use an adversarial learning scheme to push the feature representations in target patches to the closer distributions in source ones. In addition, we show that our framework can integrate a global alignment process with the proposed patch-level alignment and achieve state-of-the-art performance on semantic segmentation. Extensive ablation studies and experiments are conducted on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios. | [
"Domain Adaptation",
"Feature Representation Learning",
"Semantic Segmentation"
] | https://openreview.net/pdf?id=B1xFhiC9Y7 | https://openreview.net/forum?id=B1xFhiC9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1x_jcmcXN",
"SygtV5kAJV",
"BJgCBgxOT7",
"SkltviRwa7",
"HkelLIHIpX",
"rkx4VEhr67",
"SkgGog-B6Q",
"SygYxAxS67",
"rkg2BoxBaQ",
"SyeZDVD5n7",
"BkxoIuV5nQ",
"HJxMVo2S3m",
"SJggmaZB37",
"SkxQtQyG2X"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1548528287610,
1544579632799,
1542090822324,
1542085473121,
1541981767732,
1541944363869,
1541898394082,
1541897713213,
1541897027661,
1541203033363,
1541191763052,
1540897577986,
1540853016180,
1540645754632
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper728/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper728/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper728/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper728/Authors"
],
[
"ICLR.cc/2019/Conference/Paper728/Authors"
],
[
"ICLR.cc/2019/Conference/Paper728/Authors"
],
[
"ICLR.cc/2019/Conference/Paper728/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper728/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper728/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper728/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"comment\": \"There are some local domain adaptation based approaches that can be cited -\\n\\n[1] Courty, Nicolas, et al. \\\"Optimal transport for domain adaptation.\\\" IEEE transactions on pattern analysis and machine intelligence 39.9 (2017): 1853-1865.\\n\\n[2] Das, Debasmit, and CS George Lee. \\\"Sample-to-sample correspondence for unsupervised domain adaptation.\\\" Engineering Applications of Artificial Intelligence 73 (2018): 80-91.\\n\\n[3] Debasmit Das and C.S. George Lee, \\u201cUnsupervised Domain Adaptation Using Regularized Hyper-Graph Matching,\\u201d Proceedings of 2018 IEEE International Conference on Image Processing (ICIP), Athens, Greece, pp. 3758-3762, October 7-10, 2018. \\n\\n[4] Debasmit Das and CS George Lee. \\u201cGraph Matching and Pseudo-Label Guided Deep Unsupervised Domain Adaptation,\\u201d Proceedings of 2018 International Conference on Artificial Neural Networks (ICANN), Rhodes, Greece, pp. 342-352, October 4-7, 2018.\", \"title\": \"Related work on local domain adaptation approaches\"}",
"{\"metareview\": \"The paper explores unsupervised domain adaptation when the output is structured. Here they focus experimentally on semantic segmentation in driving scenes and use the spatial structure of the scene to produce two losses for adaptation: one global and one patch-based. The method tackles an important problem and proposes a first attempt at a new solution. While the experiments are missing ablations and some comparisons to prior work, as noted by the reviewers, the authors have provided comments in their rebuttal explaining the relation to the prior work and promising to include more in the revised manuscript.\\n\\nThe paper is borderline, but falls short on the necessary updates requested by reviewers. The use of the structured output which is available in semantic segmentation of driving scenes is a useful direction. The paper is missing enough key results and analysis in its current form to be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"An attempt at incorporating structure from output into an adaptation pipeline\"}",
"{\"title\": \"Pre-trained model\", \"comment\": \"For the base model we used in our paper, we have not revised the released code from Tsai, et al., CVPR'18. But, while loading the pre-trained weights from ImageNet, one needs to be careful, as those weights are originally imported from the Caffe framework.\\n\\nYes, we directly took the 36.6% mIoU from Tsai, et al., CVPR'18, as the same base model is used in our paper. We also used a VGG-16 version with ImageNet pre-training, and can reach 26.4% mIoU on GTA5-to-Cityscapes for the source-only model.\\n\\nWe would be happy to release our code/model after the review process. At this point, we suggest you directly report your case on their Github page for further assistance.\"}",
"{\"comment\": \"As you reported in Table 2, you can get 36.6% mIoU in the source-only setting, which is the same result reported in Tsai's paper. However, I have attempted to replace the COCO pre-trained model with the ImageNet one and use Tsai's code to train the adaptation task, and I can only get 21.1% mIoU and 29.9% mIoU for their source-only and output-space approaches.\\n\\nMoreover, I have not found any paper reporting a 35%+ source-only mIoU result trained from a ResNet-101 ImageNet pre-trained model. I am wondering what you have modified in Tsai's code. And if possible, would you mind open-sourcing the code for study after the review?\\n\\nFully Convolutional Adaptation Networks for Semantic Segmentation Yiheng Zhang, Zhaofan Qiu, Ting Yao, Dong Liu, Tao Mei, CVPR 2018\", \"road\": \"Reality Oriented Adaptation for Semantic Segmentation of Urban Scenes Yuhua Chen, Wen Li, Luc Van Gool CVPR 2018\", \"title\": \"Pre-trained model\"}",
"{\"title\": \"Pre-trained model\", \"comment\": \"Thanks for pointing this out. Yes, we have found this issue and used the pre-trained model on ImageNet only for ResNet-101.\"}",
"{\"comment\": \"In section 3.5, you mention that you follow the framework used in (Tsai et al., 2018) and that the ResNet-101 is pre-trained on ImageNet. However, it seems that the network used in (Tsai et al., 2018) is pre-trained on the COCO dataset. The author mentioned it here. (https://github.com/wasidennis/AdaptSegNet/issues/5)\\n\\nLearning to Adapt Structured Output Space for Semantic Segmentation Y.-H. Tsai and W.-C. Hung and S. Schulter and K. Sohn and M.-H. Yang and M. Chandraker CVPR 2018\", \"title\": \"The ResNet-101 network may not be pre-trained on ImageNet\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the valuable comments. For the work in Chen et al., CVPR\\u201918, we acknowledge their idea of using spatial-aware adaptation on spatial regions in the image (we will cite it in the revised paper). However, their idea is more similar to the PatchGan discriminator used in Tsai, et al., CVPR\\u201918 (i.e., spatially global alignment), and is different from the proposed patch-level alignment. Our patch alignment focuses on refining small patches (e.g., 32x64) and is location-independent (as described in Figure 1, the introduction, and Section 3.3), while the CVPR\\u201918 works assume fixed local grids with larger regions (e.g., 171x342) that account for the context information. In addition, the ablation study (without reshaped \\\\hat{F} in Table 1) shows that the proposed location-independent operation helps patch-level alignment.\\n\\nAlthough the forms of Eq. 3 and Eq. 6 are similar, they are different in the feature alignment space, where Eq. 6 is guided by the clustering process in the clustered space (for both \\\\hat{F}). As the reviewer mentioned, we do assign a cluster to a source patch for constructing the clustered space, and then we align target patches to this space (also visualized in Appendix C), based on the assumption that source and target patch distributions are shared regardless of where they are in the original images. For disentanglement, we will drop this term and instead emphasize the learned discriminative feature representations for patch alignment to reduce the confusion.\\n\\nFor the number of clusters K, we find that the result varies on the GTA5-to-Cityscapes dataset. For example, when K is small (e.g., 20), there would be ambiguities in the patch-level alignment process and the performance drops to 41.6, while it is also more difficult to match patches across domains when K is too large (e.g., 200). 
In practice, we find that within a reasonable range, e.g., K = [30, 80], the IoU is in a range of [42.6%, 43.2%]. For \\\\lambda_d, the goal is to simply perform classification based on clustering, and we find that the results do not differ much when choosing \\\\lambda_d from a range of [0.005, 0.02]. For \\\\lambda_adv^g, we directly follow the choice from Tsai, et al., CVPR\\u201918, in which they have provided a study. For \\\\lambda_adv^l in a range of [0.00005, 0.001], the results are in a range of [42.7%, 43.2%]. We will provide a complete analysis in the revised paper. Note that choosing such hyper-parameters is an open question for domain adaptation tasks. We will put this as future work, and we hope that by providing such analysis, it will help the audience better understand the effect of hyper-parameters.\\n\\nWe will add and compare the mentioned papers in the revised manuscript, including Chen et al., CVPR\\u201918 and Saito et al., CVPR\\u201918.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the valuable comments. For the CVPR\\u201918 work (ROAD: Reality Oriented Adaptation for Semantic Segmentation of Urban Scenes), we acknowledge their idea of using spatial-aware adaptation on spatial regions in the image (we will cite it in the revised paper). However, their idea is more similar to the PatchGan discriminator used in Tsai, et al., CVPR\\u201918 (i.e., spatially global alignment), and is different from the proposed patch-level alignment. Our patch alignment focuses on refining small patches (e.g., 32x64) and is location-independent (as described in Figure 1, the introduction, and Section 3.3), while the CVPR\\u201918 works assume fixed local regions with larger patches (e.g., 171x342) that account for the context information. In addition, the ablation study (without reshaped \\\\hat{F} in Table 1) shows that the proposed location-independent operation helps patch-level alignment.\\n\\nFor the number of clusters K, we find that the result varies on the GTA5-to-Cityscapes dataset. For example, when K is small (e.g., 20), there would be ambiguities in the patch-level alignment process and the performance drops to 41.6, while it is also more difficult to match patches across domains when K is too large (e.g., 200). In practice, we find that within a reasonable range, e.g., K = [30, 80], the IoU is in a range of [42.6%, 43.2%]. For \\\\lambda_d, the goal is to simply perform classification based on clustering, and we find that the results do not differ much when choosing \\\\lambda_d from a range of [0.005, 0.02]. For \\\\lambda_adv^l in a range of [0.00005, 0.001], the results are in a range of [42.7%, 43.2%]. We will provide a complete analysis in the revised paper. Note that choosing such hyper-parameters is an open question for domain adaptation tasks. 
We will put this as future work, and we hope that by providing such analysis, it will help the audience better understand the effect of hyper-parameters.\\n\\nFollowing Tsai, et al., CVPR\\u201918, the source-only performance using VGG is 26.4% and 30.7% on GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes, respectively. For the mIoU vs. epoch curve on GTA5-to-Cityscapes, since there is no supervision on the target domain, the performance usually fluctuates, as it does for most domain adaptation methods based on adversarial learning. In our experiments, the IoUs are [42.6, 42.0, 42.1, 43.2, 42.1] when training for [50, 55, 60, 65, 70] K iterations using a batch size of 1.\\n\\nAlthough the improvement on SYNTHIA-to-Cityscapes is smaller, we find larger gains on certain categories over Tsai, et al., CVPR\\u201918, such as road (3%), sidewalk (2.2%), and sky (1.5%). This is because the proposed method is designed to overcome domain gaps such as camera pose or field of view via location-independent patch-level alignment. Due to this merit of our approach and the ease of integrating our module into any architecture, we believe that it could be beneficial for other tasks (e.g., depth estimation) that suffer from issues similar to those in semantic segmentation.\\n\\nWe thank you for pointing out related works. Different from these methods that mostly focus on the usage of pixel-level domain adaptation (synthesized target images), loss function design, and pseudo-label re-training, our work explores a new perspective via patch-level adversarial alignment, and the proposed module is general for different architectures or design choices. While the performance is competitive compared to these methods, we believe that our contribution is orthogonal to theirs. We will add and discuss these papers in the revised paper.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Thanks for the positive and valuable comments. For disentanglement, we agree that most domain adaptation methods utilize a somewhat similar idea, while our approach tackles it at the patch level via a clustering process on patches. To reduce the confusion, we will drop the term disentanglement and instead emphasize the learned discriminative feature representations for patch alignment.\\n\\nIn the ablation study, the one without $L_d$ is actually the same setting as the reviewer suggested, i.e., removing $H$ entirely and simply training another adversarial discriminator similar to $D_g$ directly on patches. Using an additional featurizer $H$ without $L_d$ would not be valid as there is no supervision from $L_d$. We will clarify this study in the revised paper.\\n\\nFor the number of clusters K, we find that the result varies on the GTA5-to-Cityscapes dataset. For example, when K is small (e.g., 20), there would be ambiguities for the patch-level alignment process and the performance drops to 41.6, while it is also more difficult to match patches across domains when K is too large (e.g., 200). In practice, we find that within a reasonable range, e.g., K = [30, 80], the IoU is in a range of [42.6, 43.2]. Note that we do not really tune this K for different datasets but use the same K=50 for all the scenarios. In Appendix C, we show some examples of clusters and visualizations for patch alignment.\"}",
"{\"title\": \"Review\", \"review\": \"The authors tackle the unsupervised domain adaptation problem on tasks with structured output (in this case, semantic segmentation) by performing adversarial alignment at two levels: globally, using the entire image, and locally, using patches of the image. Their global alignment method matches previous adversarial adaptation approaches, so the primary contribution appears to be their patch-level alignment method. They cluster source image patches by histogramming the corresponding label patches, then performing K-means clustering on the histogrammed label features. A new model is trained to reproduce the cluster labels from the source image patches, and this model is adversarially optimized so that target image patches produce a matching feature distribution.\\n\\nThe paper is well-written and concise. It's organized well, and I had very little trouble following the description of their method. The various components of their model are straightforward and well-motivated. They validate their model on multiple synthetic-to-real segmentation tasks, demonstrating strong performance relative to existing baselines, and they also provide a thorough ablation study showing that each of the components of their proposed model is an important part of their final product, which further convinces the reader that the model is sound.\\n\\nOne quibble is that the authors mention disentanglement quite a bit in this paper, including in the title, though it isn't clear to me what is being disentangled. They claim the use of label information is a disentangling factor, but that seems to be true of domain adaptation approaches in general, which all attempt to disentangle semantic information from domain-specific details in some form or other. 
Further clarification on precisely what is being disentangled would be helpful.\\n\\nAnother question that lingers is whether or not the additional classification module $H$ and the clustering are truly necessary. A baseline I would like to see would be to remove $H$ entirely and simply train another adversarial discriminator similar to $D_g$ directly on patches of $O$ instead of the full output. This sounds similar to the ablation experiment mentioned in 4.2 where $L_d$ is removed, but my understanding is that ablation experiment still uses an additional featurizer $H$. A more rigorous exploration of the clustering process, such as visualizations of learned clusters and a study of how the number of clusters affects performance would serve to further validate the model.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
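The review above summarizes the patch-level alignment idea: label patches are histogrammed over the classes, and the histograms are clustered with K-means to define pseudo patch classes. A minimal, hypothetical sketch of that clustering step is below (this is not the authors' code; the patch size, toy K, and hand-rolled Lloyd's-style K-means are illustrative assumptions):

```python
import numpy as np

def patch_label_histograms(label_map, patch_h, patch_w, num_classes):
    """Split an (H, W) integer label map into non-overlapping patches and
    return one normalized class histogram per patch."""
    H, W = label_map.shape
    hists = []
    for y in range(0, H - patch_h + 1, patch_h):
        for x in range(0, W - patch_w + 1, patch_w):
            patch = label_map[y:y + patch_h, x:x + patch_w]
            hist = np.bincount(patch.ravel(), minlength=num_classes)
            hists.append(hist / hist.sum())
    return np.stack(hists)

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; stands in for any K-means implementation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic label maps standing in for source ground-truth annotations.
rng = np.random.default_rng(0)
maps = [rng.integers(0, 19, size=(128, 256)) for _ in range(4)]
feats = np.concatenate([patch_label_histograms(m, 32, 64, 19) for m in maps])
labels = kmeans(feats, k=8)  # the rebuttals report K=50 on the real datasets
```

Each patch is thus represented only by its class histogram, which is what makes the resulting pseudo classes location-independent, as the rebuttal emphasizes.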
"{\"title\": \"An interesting idea for disentangled patch representation learning as a drop-in module for UDA. Method effective but relatively weak results compared to SOTA\", \"review\": \"This paper proposes a drop-in module of disentangled patch representation learning for adversarial learning-based domain adaptation. The main idea is to encourage the source patch-level representation to be disentangled, by creating certain intermediate pseudo-ground truths via clustering the label patch histograms using k-means. This basically creates an alternative, additional view of the prediction target of the network outputs. And similar to the global network output alignment by Tsai et al., the authors impose an adversarial loss on the additionally introduced view.\", \"clarity\": \"The paper is well-written with good clarity.\", \"results\": \"This paper has a good experimental validation of the proposed module.\", \"concerns\": \"- The idea of using patches in domain adaptation is not completely new. ROAD: Reality Oriented Adaptation for Semantic Segmentation of Urban Scenes, CVPR 2018 also uses the patch-level information to help domain adaptation. Although the ideas are not entirely identical, this paper should at least cite and compare with this work.\\n\\n- The disentangled patch feature learning introduces two additional losses, L_d and L_adv^l, which require three extra parameters, including K in K-means, lambda_d and lambda_adv^l. It would be great if a formal sensitivity analysis on the parameters can be conducted. There are some details missing in the paper too. For example, what is the performance of the VGG source model without adaptation? I am also curious about the learning behavior of the proposed method. Could you show the mIoU v.s. epoch curve for GTA2Cityscapes, or any other benchmarks?\\n\\n- Although consistently improving over Tsai et al., CVPR18, the introduced method does not show a very significant gain in multiple experiments. 
On SYNTHIA-to-City, only a 0.4 mIoU gain is obtained. In addition, while the proposed method is empirically effective, it is largely task-specific and restricted to domain adaptation for scene parsing only. It seems difficult to generalize the same method to other domain adaptation tasks. The limitation on the performance gain and generalizability somehow reduces the contribution from this work to the community.\\n\\n- A major concern of this work is the lack of citation and direct comparison to multiple previous SOTAs. For example, the paper should compare the end-system performance with several published works such as:\\n1. Zhang et al., Fully convolutional adaptation networks for semantic segmentation, CVPR2018\\n2. Zhu et al., Penalizing top performers: conservative loss for semantic segmentation adaptation, ECCV2018\\n3. Zou et al., Domain adaptation for semantic segmentation via class-balanced self-training, ECCV2018\\nAnd according to the results reported by these works, the proposed joint framework in this paper does not seem very competitive in terms of the UDA performance in multiple settings.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea, but mild novelty and missing experiments\", \"review\": \"This paper introduces a domain adaptation approach for structured output data, with a focus here on semantic segmentation. The idea is to model the structure by exploiting image patches, but account for the fact that these patches may be misaligned, and thus not in exact correspondence. This is achieved by defining new patch classes via clustering the source patches according to the semantic information, and making use of an adversarial classifier on the predicted patch-class distributions.\", \"strengths\": [\"Modeling the structure via patches is an interesting idea.\", \"The proposed method achieves good results on standard benchmarks.\"], \"weaknesses\": \"\", \"method\": [\"The idea of relying on patches to model the structure is not new. This was achieved by Chen et al., CVPR 2018, \\\"ROAD: Reality Oriented Adaptation...\\\". In this work, however, the patches were assumed to be in correspondence, which leaves some novelty to this submission, although reduced.\", \"In essence, the patch-based adversarial alignment remains global; this can be thought of as working at a lower resolution and on a different set of classes, defined by the clusters, than the global alignment. This can be observed by comparing Eq. 3 and Eq. 6, which have essentially the same form. This is fine, but was not clear to me until I reached Eq. 6. In fact, what I understood from the beginning of the paper was an alternative formulation, where one would essentially assign each patch to a cluster and aim to align the distributions of the output (original classes) within each cluster. I suggest the authors clarify this, and possibly discuss the relation with this alternative approach.\", \"I am not convinced by the claimed relationship to methods that learn disentangled representations. Here, in essence, the authors just perform clustering of the semantic information. 
This is fine, but I find the connection a bit far-fetched and would suggest dropping it.\"], \"experiments\": [\"The comparison to the state of the art is fine, but I suggest adding the results of Chen et al., CVPR 2018, which achieves quite close accuracies, but still a bit lower. The work of Saito et al., CVPR 2018, \\\"Maximum Classifier Discrepancy...\\\" also reports results on semantic segmentation and should be mentioned here. I acknowledge however that their results are not as good as the ones reported here.\", \"While I appreciate the ablation study of Section 4.2, it only provides a partial picture. It would be interesting to study the influence of the exact values of the hyper-parameters on the results. These hyper-parameters are not only the weights \\\\lambda_d, \\\\lambda^g_{adv} and \\\\lambda^l_{adv}, but also the number of clusters and the size of the patches used.\"], \"summary\": \"I would rate this paper as borderline. There is some novelty in the proposed approach, but it is mitigated by the relation to the work of Chen et al., CVPR 2018. The experiments show good results, but a more thorough evaluation of the influence of the hyper-parameters would be useful.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Ablation study for the value K and related work\", \"comment\": \"Thanks for the comments. For the value of K, we find that the result varies on the GTA5-to-Cityscapes dataset. For example, when K is small (e.g., 20), there would be ambiguities for the patch-level alignment process and the performance drops to 41.6, while it is also more difficult to match patches across domains when K is too large (e.g., 200). In practice, we find that within a reasonable range, e.g., K = [30, 80], the IoU is in a range of [42.6, 43.2]. Note that, we do not really tune this K for different datasets but use the same K=50 for all the scenarios.\\n\\nWe appreciate the above-mentioned works for semantic segmentation adaptation. Different from these works that mostly focus on the usage of pixel-level domain adaptation (synthesized target images), loss function design, and pseudo label re-training, our work explores a new perspective via patch-level adversarial alignment for structured output, and the proposed module is general for different architectures or design choices. We will add and discuss these papers in our manuscript.\"}",
"{\"comment\": \"Hi, authors\\nI have a question about the K-means. You use K=50 in all your experiments; have you tried other values of K? \\nI think that K may vary with different datasets due to the different appearance distributions. An ablation study would make this clear.\\n\\nAnd there are some segmentation adaptation works missing in the related work section:\\nConditional Generative Adversarial Network for Structured Domain Adaptation, CVPR2018\\nFully Convolutional Adaptation Networks for Semantic Segmentation, CVPR2018\\nPenalizing Top Performers: Conservative Loss for Semantic Segmentation Adaptation, ECCV2018\\nDCAN: Dual Channel-wise Alignment Networks for Unsupervised Scene Adaptation, ECCV2018\\nUnsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training, ECCV2018\", \"title\": \"Ablation study for K-means\"}"
]
} |
|
HJedho0qFX | Using Word Embeddings to Explore the Learned Representations of Convolutional Neural Networks | [
"Dhanush Dharmaretnam",
"Chris Foster",
"Alona Fyshe"
] | As deep neural net architectures minimize loss, they build up information in a hierarchy of learned representations that ultimately serve their final goal. Different architectures tackle this problem in slightly different ways, but all models aim to create representational spaces that accumulate information through the depth of the network. Here we build on previous work that indicated that two very different model classes trained on two very different tasks actually build knowledge representations that have similar underlying representations. Namely, we compare word embeddings from SkipGram (trained to predict co-occurring words) to several CNN architectures (trained for image classification) in order to understand how this accumulation of knowledge behaves in CNNs. We improve upon previous work by including 5 times more ImageNet classes in our experiments, and further expand the scope of the analyses to include a network trained on CIFAR-100. We characterize network behavior in pretrained models, and also during training, misclassification, and adversarial attack. Our work illustrates the power of using one model to explore another, gives new insights for CNN models, and provides a framework for others to perform similar analyses when developing new architectures. | [
"Distributional Semantics",
"word embeddings",
"cnns",
"interpretability"
] | https://openreview.net/pdf?id=HJedho0qFX | https://openreview.net/forum?id=HJedho0qFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkghMQAee4",
"ryeMJgOpnm",
"Ske922ec3m",
"H1g9GJoN37"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770323537,
1541402585951,
1541176498162,
1540824850400
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper727/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper727/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper727/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper727/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper aims to study what is learned in the word representations by comparing SkipGram embeddings trained from a text corpus and CNNs trained from ImageNet.\", \"pros\": \"The paper tries to be comprehensive, including analysis of text representations and image representations, and the cases of misclassification and adversarial examples.\", \"cons\": \"The clarity of the paper is a major concern, as noted by all reviewers, and the authors did not come back with a rebuttal to address reviewers' questions. Also, as R1 and R2 pointed out, the novelty over recent relevant papers such as (Dharmaretnam & Fyshe, 2018) is not clear.\", \"verdict\": \"Reject due to weak novelty and major clarity issues.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"weak novelty and major clarity issues\"}",
"{\"title\": \"Rough idea. The proposed relationship is not properly confirmed.\", \"review\": \"The authors propose a new method of measuring knowledge within the learned CNN: the representations of CNN layers and word2vec embeddings are compared, and the similarity between them is calculated. The authors claim that the similarity score increases with learning time, and that the higher layers of the CNN have more similarity to word2vec embeddings than the lower layers.\\n\\nCNN and word2vec use different datasets. The CNN uses the vision pixels and word2vec uses the words in the sentences. A certain amount of representation patterns can be expected to be shared, but surely the extent is limited (correlation 0.9 in Fig. 1). Because of this limitation, the proposed similarity measure must not be claimed as the measure of knowledge accumulation in CNN. \\n\\nIn addition, the authors have to be precise in defining the measure and provide the information captured by the measure. In the manuscript, I can see that \\u201csomething\\u201d is shared by the two algorithms but do not know what this \\u201csomething\\u201d is. The authors claim that \\u201csemantics\\u201d are shared, but replacing \\u201csemantics\\u201d with \\u201csomething\\u201d does not make any difference in this manuscript. Further investigations and confirmations are needed to report which information is actually shared.\", \"minor\": \"the 1 vs. 2 accuracy measure is not defined.\\n\\nIn summary, the proposed measure may capture some information but the explanation of this information is unclear. The information seems to be a roughly similar pattern of concept representations. Further rigorous investigation of the proposed measure is necessary to confirm which information is captured. The current version is not sufficient for acceptance.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Too incremental\", \"review\": \"The authors apply an existing method (mainly the 2 vs. 2 test) to explore the representations learned by CNNs both during/after training.\\n\\n## Strength\\n\\nThe analysis of misclassification and adversarial examples is interesting. The authors also propose potential ways of improving the robustness of DNNs for adversarial examples. \\n\\n\\n## Weakness\\n1. It seems to me that the methodological novelty is limited, as the paper mainly follows [The Emergence of Semantics in Neural Network Representations of Visual Information](http://aclweb.org/anthology/N18-2122). For example, this paper extensively applies the 2 vs. 2 test, which was established in previous works. \\nFurthermore, the first claimed contribution of 5 times more concepts than previous work does not result in any significant difference from the previous approaches. \\n\\n2. The analysis presented in this work does not really give new insights. For example, isn\\u2019t \\u201ca network fitting to noise does not learn semantics\\u201d obvious to the community?\\n\\nSome of the subsection titles are misleading. For example, in Section 5, the claim of \\u201cCNNs Learn Semantics from Images\\u201d is mainly proposed in a previous work, but the way of presentation sounds as if this is a contribution of this work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
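For readers unfamiliar with the 2 vs. 2 test these reviews refer to, the sketch below is a hedged reconstruction of how the test is commonly defined in this literature; the paper's exact setup (e.g., the mapping that brings CNN activations and word embeddings into a common space) is omitted, and both inputs here are assumed to already live in the same space:

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation between two vectors."""
    return np.corrcoef(u, v)[0, 1]

def two_vs_two_accuracy(pred, true):
    """For each pair of concepts (i, j), the test passes when the correctly
    matched correlations beat the mismatched ones; chance level is 0.5."""
    n, correct, total = len(pred), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            matched = pearson(pred[i], true[i]) + pearson(pred[j], true[j])
            mismatched = pearson(pred[i], true[j]) + pearson(pred[j], true[i])
            correct += matched > mismatched
            total += 1
    return correct / total

rng = np.random.default_rng(0)
true = rng.normal(size=(10, 50))  # e.g. word2vec vectors for 10 concepts
# Predictions that track the targets score near 1.0; unrelated ones near 0.5.
good = two_vs_two_accuracy(true + 0.2 * rng.normal(size=true.shape), true)
bad = two_vs_two_accuracy(rng.normal(size=true.shape), true)
```

Under this reading, the "1 vs. 2" measure AnonReviewer3 mentions as undefined would be the single-concept analogue, which is one reason the reviews ask for an explicit definition.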
"{\"title\": \"Limited contribution. In addition, it's hard to identify the contribution of this paper.\", \"review\": \"This paper extends the previous work (Dharmaretnam & Fyshe, 2018), which provided an analytic tool for understanding CNNs through word embeddings of class labels. Analyzing correlations between each CNN layer and the class labels makes it possible to investigate how each layer of a CNN works, how well it performs, and how to improve the performance.\\n\\nI felt it is a little hard to read this paper. Although the Introduction gives a short summary of the contributions of this paper, I could not easily distinguish the contributions of this paper from those of the previous work. It would be better to explicitly explain in detail which parts are the contributions of this paper. For example, \\\"additional explorations of the behavior of the hidden layers during training\\\" is not clear to me because this expression only briefly explains what this paper does, not how this paper actually differs from the previous work, and why this difference is important and crucial.\\n\\nSimilarly, I could not understand why adding concepts, architectures (FractalNet), and datasets (CIFAR-100) is so important. Although this paper states these changes are among its contributions, it is unclear whether these changes lead to significant insights and findings which the previous work could not find, and whether these findings are important enough to count as contributions of this paper. Again, I think it is better to describe in more detail what the main contributions of this paper are.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
rygunsAqYQ | Implicit Maximum Likelihood Estimation | [
"Ke Li",
"Jitendra Malik"
] | Implicit probabilistic models are models defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results. | [
"likelihood-free inference",
"implicit probabilistic models"
] | https://openreview.net/pdf?id=rygunsAqYQ | https://openreview.net/forum?id=rygunsAqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJgDczdXlE",
"SyxxEX5MkN",
"BJgjWUbl1V",
"rylhASZchX",
"HJxKXwZYnm",
"S1xeMzFO3X",
"BJljEH0Aim",
"SJgIh9jniX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544942223450,
1543836455577,
1543669250677,
1541178835532,
1541113632555,
1541079559874,
1540445491384,
1540303534505
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper726/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper726/Authors"
],
[
"ICLR.cc/2019/Conference/Paper726/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper726/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper726/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper726/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper726/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The manuscript proposes a novel estimation technique for generative models based on fast nearest neighbors and inspired by maximum likelihood estimation. Overall, reviewers and AC agree that the general problem statement is timely and interesting, and the subject is of interest to the ICLR community.\\n\\nThe reviewers and ACs note weaknesses in the evaluation of the proposed method. In particular, reviewers note that the Parzen-based log-likelihood estimate is known to be unreliable in high dimensions. This makes a quantitative evaluation of the results challenging; thus, other metrics should be evaluated. Reviewers also expressed concerns about the strengths of the baselines compared. Additional concerns are raised with regard to scalability, which the authors address in the rebuttal.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"Sorry for the delay in posting the rebuttal. We've been a bit short on time due to various other deadlines, but below is a rebuttal of the key points that were raised.\", \"anonreviewer1\": \"\", \"theorem1\": \"Any distribution that can be translated and scaled arbitrarily is in a location-scale family of distributions. This is certainly true of neural nets applied to noise from a fixed distribution, since the biases can be adjusted to translate arbitrarily, and the weights can be adjusted to scale arbitrarily.\\nDifferent models can assign very different log-likelihoods to the same data, and what log-likelihood an autoregressive model or a VAE assigns has no bearing on what a different model assigns at the maximum likelihood estimate. (Moreover, a VAE does not necessarily find the maximum likelihood estimate because the variational family may not contain the true posterior.)\\n\\nPerformance of the particular nearest neighbour search method was reported in the paper that describes it. The code is also publicly available, so you may also test it yourself. In the context of our method, we performed nearest neighbour search for 8,000 queries over 200,000 samples, each of which is 3072-dimensional. Constructing the data structure took 8.01 seconds, and querying took 1.31 seconds on a 4-year-old six-core CPU. This is relatively insignificant compared to the amount of time taken by backpropagation, which takes 181.85 seconds for 100 iterations of SGD on a 1080 Ti GPU.\\nWe pointed out the fact that Euclidean distance can be applied to feature space in the last paragraph of section 2.2, so using Euclidean distance on pixel space does not point to a limitation. In fact, subsequent work on IMLE does this with ease, but that does not mean we should do the same in the original paper. 
In general, this is how science works - the initial paper on any given method should be applied to the most basic and generic setting, whereas subsequent papers are free to adapt it to particular applications and add bells and whistles. It is critical to keep the initial paper simple, so that the essence of the method is clearly conveyed, free from any add-ons that would make it unclear whether the method works at all without the add-ons, whether the method can be generalized to other domains (since add-ons typically cannot be) and whether performance gains are coming from the core method or the add-ons.\", \"anonreviewer3\": \"Comparing a given set of methods for training a particular model would not offer much conclusive evidence, because performance of generative models is sensitive to both the choice of model and the training method. If a model cannot accurately model the data, the relative performance of given methods does not say much, because the relative ranking of different methods may change on a different, better model. As a case in point, for Gaussian mixture models, E-M converges much more quickly than maximum likelihood. However, there are well-known examples where E-M converges extremely slowly. An experiment demonstrating the former would seem to suggest that E-M is better than maximum likelihood; an experiment demonstrating the latter would seem to suggest that E-M is worse. The truth is that E-M is sometimes better and sometimes worse. What we really care about is whether E-M works well on a model that we care about - because it does work well on a Gaussian mixture model, it has value. So we don\\u2019t agree that comparing various methods on simple models or Real-NVP would necessarily add much value to the paper.\", \"anonreviewer2\": \"First, we point out that our algorithm does *not* match the samples with their nearest data point; it matches each data point with its nearest sample. 
As explained in section 1.2, this is a subtle, but critical distinction: the former is similar to what a GAN with a nearest neighbour discriminator does and can collapse modes. Only the latter can be equivalent to maximum likelihood (because of the asymmetry of KL-divergence, as explained in our response to the comment below). \\n\\nSee our response to AnonReviewer1 regarding the particular nearest neighbour search algorithm that we used. Please note that the focus of this paper is not on the nearest neighbour search algorithm; please refer to the paper on the algorithm for performance evaluations. \\n\\nFigure 2 shows the stability of training. The lack of vanishing gradients can be shown analytically, since the gradient of squared Euclidean distance can only vanish when the two points coincide exactly. It is difficult to empirically show the lack of mode collapse, since that would involve finding a way to compute recall, but doing so requires globally optimizing over the latent code, for which there is no efficient algorithm.\"}",
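The matching direction the rebuttal insists on can be illustrated in a few lines. The following is a hedged toy sketch, not the paper's implementation (which uses a fast approximate nearest-neighbour search rather than the brute-force distances below): each data point is matched to its nearest generated sample, so a mode that the samples fail to cover still contributes a large loss.

```python
import numpy as np

def match_data_to_samples(data, samples):
    """For each data point, return the index of its nearest sample
    (squared Euclidean distance) -- the IMLE matching direction."""
    d2 = ((data[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Two data modes, but all generated samples sit near only one of them.
data = np.array([[0.0, 0.0], [10.0, 10.0]])
samples = np.array([[0.1, 0.0], [0.0, 0.1], [0.2, 0.1]])

idx = match_data_to_samples(data, samples)
loss = ((data - samples[idx]) ** 2).sum(axis=-1).mean()
# The uncovered mode at (10, 10) dominates the loss. Matching in the opposite
# direction (each sample to its nearest data point) would let the model ignore
# that mode entirely -- the mode-collapse behavior the rebuttal contrasts with.
```

This is only the matching step; in the full method the distances to the matched samples are what the generator is trained to reduce.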
"{\"title\": \"No Author Response?\", \"comment\": \"I don't see any author responses or a revised draft. My review is the highest (weak reject) but I see no reason to argue for acceptance.\"}",
"{\"title\": \"Novel and interesting idea, but significant algorithmic and empirical concerns\", \"review\": [\"Two high-level points about my review before going into the details:\", \"1. This paper was a thoroughly enjoyable and insightful read. Kudos to the authors for attempting such a comprehensive overview of likelihood-based vs. likelihood-free learning.\", \"2. I\\u2019ll be more than happy to revise my current rating if my concerns are addressed by the authors.\", \"With regards to the technical assessment of this work, the idea of using a nearest neighbors objective for learning a generative model is both intriguing and appealing. What makes this work even more interesting are its connections with maximum likelihood estimation. Novelty aside, I believe there are major theoretical, algorithmic, and empirical concerns in the current work which I discuss below:\", \"Theorem 1\", \"The third condition is true for location-scale family of distributions e.g., Gaussian. But the distribution learned by a generative model p_theta is far from Gaussian or other location-scale distributions.\", \"More importantly, I don\\u2019t think the upper bound is tight in practice because the likelihoods can vary significantly across the dataset. Take MNIST for example. Compare the log-likelihoods of an autoregressive model or ELBOs of a VAE across the different classes of digits. Straight digits (like 1s) have much higher log-likelihoods on average than curved digits.\", \"Algorithm\", \"While significant advancements have indeed been made for nearest neighbor evaluation as the authors highlight, it\\u2019s hard to believe without any empirical evidence that nearest neighbor evaluation is indeed efficient in comparison to other methods of likelihood evaluation.\", \"Similarly, I was a bit disappointed by the choice of Euclidean distance in a pixel space as the choice of distance metric. 
The argument that you do not want to use \\u201cauxiliary sources of labelled data or leverage domain-specific prior knowledge\\u201d is indeed necessary for fair comparisons, but also points to a limitation of the current approach.\", \"Empirical evaluation\", \"Seems too outdated both in terms of baselines and metrics. The authors are clearly aware of the current research in generative modeling but the current work provides almost no strong evidence to consider this work as an alternative to other approaches.\", \"While it is arguably well-established that Parzen window estimates are misleading (Theis et al.), that\\u2019s the only quantitative estimate in this work (Table 1). Hard to think of any recent published work (last 1-2 years) in generative modeling that even reports these estimates.\", \"The baselines in Table 1 are all from 2013-15. Clearly, much has happened in the last 3 years that merit the inclusion of more recent baselines.\", \"Even for sample quality, there has been a lot of research in designing and improving metrics. E.g., Inception scores, Frechet Inception Distance, Kernel Inception Distance. I am not looking for state-of-the-art numbers, showing heavily zoomed out samples without any of these metrics is slightly disingenuous.\", \"As mentioned before, reporting the computation time/per iteration and number of iterations for convergence for the proposed algorithm in comparison with other approaches is important.\", \"Similarly, the argument about the method avoiding even the other GAN problems (e.g., vanishing gradients, stability in training) can and should be supported by empirical evidence.\", \"Analysis and discussion\", \"One family of generative models that is crucially missing from this work is normalizing flow models.\", \"This is somewhat debatable, but I do not agree that the tradeoff between likelihoods and sample quality is due to model capacity. 
As far as I can tell, the cited work of Grover et al., 2017 provides evidence contrary to what the authors claim. The prior work trained the same normalizing flow model via maximum likelihood and adversarial training, and observed vastly different results on likelihood and sample quality metrics. So model capacity isn't necessarily the key differentiating factor (which is the same for both training algorithms in their experiments); it's more about the choice of the objective function and the optimization procedure.\"], \"minor_points_for_improving_presentation\": [\"Section 3 can be made more concise and to the point. I\u2019d be especially interested if the precision and recall discussion in this section and elsewhere can be formalized.\", \"Use numbered lists instead of bullets for assumptions in Theorem 1, so that the discussion of the assumptions right after the theorem statement is easy to follow.\", \"The citation of Grover et al. seems outdated? The current title is Flow-GAN: Combining maximum likelihood and adversarial learning in generative models.\", \"In general, avoid making somewhat hard assertions that are speculative. Some of them I\u2019ve highlighted earlier in my review (e.g., some of the theorem assumptions being typically true, comparison of likelihood and sample quality based on model capacity, etc.).\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, but there is room for improving the presentation and the strength of the results\", \"review\": \"The paper proposes a new algorithm for implicit maximum likelihood estimation based on a fast Nearest Neighbor search. The algorithm can be used to implicitly maximize the likelihood of models for which the likelihood is intractable but for which sampling is easy, which is typically the case for implicit models. The paper shows that under some conditions the optimal solution of the algorithm corresponds to the MLE solution and provides some experimental evidence that the method leads to a higher likelihood. However, the paper lacks clarity and the experiments are not really convincing. Here are some remarks:\", \"experiments\": [\"The estimated likelihood was reported in Table 1 using Parzen windows, which are known to have bad scaling behavior with the dimension of the data. In the end, the table compares methods that maximize different objectives and are evaluated with an unreliable metric. Here are two possible experiments that could be more informative:\", \"Consider toy examples for which the likelihood can be evaluated and the MLE obtained easily and then compare with the proposed method. This would already give a good sense of how well the algorithm behaves in simple cases.\", \"Another possibility is to use generative models like Real-NVP for which the likelihood can also be computed in closed form. 
This would allow comparing the proposed algorithm to direct likelihood maximization on more complicated datasets as done in [1].\", \"It seems like having experiments of this nature is far more convincing than a long justification for why the results are not necessarily state-of-the-art.\", \"There are way too many samples on the figures so it is very hard to perform any visual assessment.\"], \"theory\": [\"More discussion of the assumptions is needed; concrete examples for which these assumptions hold or not would be very useful.\", \"Lemma 2 is a direct consequence of the following result: if p is continuous at x_0 then x_0 is a Lebesgue point.\"], \"general_remarks_on_the_paper\": [\"What is the complexity of the nearest neighbor algorithm? Since it is crucial for the proposed method to be scalable, it is worth presenting this algorithm at a high level in the main paper.\", \"The discussion in section 3 could be much more concise if concrete examples and figures were provided. Most of the facts discussed in that section are generally well understood, so conciseness is very appreciated in this case.\", \"\u00ab\u00a0\u00a0A secondary issue that is more easily solvable is that samples presented in papers are sometimes cherry-picked; as a result, they capture the maximum sample quality, but not necessarily the mean sample quality.\u00a0\u00bb Could you please provide an example of such a paper? I would be very interested in having a closer look.\", \"In the last paragraph of section 5, it is said that although the samples may not be state of the art in terms of precision, other methods which achieve better precision \u00ab\u00a0\u00a0may\u00a0\u00bb have less recall. 
It would be good to have empirical evidence to back this claim.\"], \"revision\": [\"Although this paper presents an interesting idea, there is a serious lack of evidence to support the claims in the paper:\", \"Missing experimental evidence for the efficiency of the NN search algorithm.\", \"Experiments are using Parzen window for estimating likelihood which are known to be unreliable in high dimensions.\", \"None of the suggested experiments were considered. In my opinion these experiments could improve the quality of this work.\", \"Moreover, as mentioned by reviewer 1, Grover et al., 2017 provides evidence contrary to what the authors claim but this was never addressed so far in the paper.\", \"Theorem 1 makes rather strong assumptions: as pointed out by reviewer 1, assumption 3 is unlikely to hold for the distributions used in practice\", \"For these reasons I recommend a clear reject.\", \"[1] I. Danihelka, B. Lakshminarayanan, B. Uria, D. Wierstra, and P. Dayan. Comparison of Maximum Likelihood and GAN-based training of Real NVPs.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice Theory, Questionable Practicality\", \"review\": \"Summary:\\n\\nThis paper proposes a nearest-neighbor-based algorithm for implicit maximum likelihood. Samples are produced by the generator network and then a nearest neighbors algorithm is run to match the samples with their nearest data point. The generator is then updated using the Euclidean distance between samples and neighbors as the optimization objective. Six conditions are then provided, and if they are met, then the authors show that this method is performing maximum likelihood on the implied density. Experiments report Parzen window density estimates, samples from the model, and latent-space interpolations for MNIST, Toronto Faces, and CIFAR-10.\", \"pros\": \"The primary contribution of this paper is an algorithm for implicit likelihood maximization with theoretic guarantees. As far as I\\u2019m aware, this is a novel and noteworthy contribution. Moreover, as each sample must be paired with an observation, it does seem like the algorithm would be somewhat robust to the notorious mode collapse problem.\", \"cons\": \"My primary critique of the paper is that there is very little experimental investigation of the crucial details of the algorithm. Firstly, running the nearest neighbors algorithm seems like it could be a computational bottleneck. The authors acknowledge this, but then say \\u201cthis is no longer the case due to recent advances in nearest neighbor search algorithms (Li & Malik 2016; 2017)\\u201d (p 3-4). No other justification is given, from what I can tell. A simulation showing how the runtime scales with dimensionality or number of data points would be very useful for knowing the scalability and practicality of the algorithm. In the same vein, showing that the algorithm works well even with a relaxation such as approximate neighbors or random projections would make the algorithm more attractive to adopt. 
\n\nMoreover, I found it frustrating that the paper teases a fix to several well-known GAN issues: \u201cThe proposed method could sidestep the three issues mentioned above: mode collapse, vanishing gradients and training instability\u201d (p 3). But the paper never experimentally investigates if the proposed approach indeed is better in these aspects. I was disappointed since, intuitively, the algorithm does seem like it could be robust to mode collapse. In addition to this lack of experimental focus, the only quantitative result is the Parzen window estimates in Table 1. The proposed method does best the others but the other reported results are quite old---all from 2015 or earlier.\", \"minor_points\": \"The paper is at 10 pages, and while it is well-written, the writing is verbose and could use tightening.\", \"evaluation\": \"This paper presents an interesting contribution: an implicit likelihood estimation algorithm amenable to theoretical analysis. Moreover, the theory seems not too divorced from practice (but I didn't check every detail). However, the evaluation of this method is where the paper falters. A big issue (that the authors note themselves) is the practicality of performing repeated nearest neighbor iterations. No runtimes are reported, nor are any approximations considered. Rather, samples and interpolations are given the most discussion. Furthermore, there are no demonstrations of training stability or quantitative analysis of mode collapse. Due to these experimental deficiencies, I recommend rejection, weakly.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"It's actually quite different\", \"comment\": \"That's actually not correct. There are two major differences:\\n\\n(1) GLO is not a probabilistic model, because there is no probability density associated with the latent vectors z. Instead it simply learns a mapping from some latent space to the output space, in the hope that the latent space is easier to model than the original output space (in this sense, it is more closely related to dimensionality reduction methods like PCA and autoencoders). In order to generate novel examples, one still needs to fit a probabilistic model to the learned latent vectors. In the case of GLO, a Gaussian is used for this purpose. In contrast, the proposed method trains a stand-alone probabilistic model, and so there is no need to fit another probabilistic model. \\n\\n(2) GLO enforces a 1-1 mapping between the latent vectors z and the data examples, whereas the proposed method does not. Not enforcing a 1-1 mapping is critical for showing equivalence to maximum likelihood, because of the asymmetric nature of KL-divergence. More specifically, maximum likelihood corresponds to minimizing D_KL(data || model), and it is well-known that minimizing D_KL(model || data), which swaps the data and the model distributions, is *not* equivalent to maximum likelihood. Now, suppose that we enforce a 1-1 mapping between the latent vectors and the data examples, then swapping the latent vectors and data examples in the proposed loss function would not change the loss function. This shows that if we were to enforce a 1-1 mapping, minimizing the loss cannot be equivalent to maximum likelihood.\"}",
"{\"comment\": \"As far as I understand, the only difference between the proposed method and the Generative Latent Optimization(https://arxiv.org/abs/1707.05776 , published at ICML 2018) is a very minor detail. The GLO optimizes for the latent codes whereas the submission keeps them fixed. The rest of the method is exactly same. Am I missing something?\\n\\nIf I am correct; the authors should discuss the differences properly, cite the paper and provide an empirical comparison.\", \"title\": \"The method is almost identical to \\\"Generative Latent Optimization\\\"\"}"
]
} |
|
ByG_3s09KX | Dopamine: A Research Framework for Deep Reinforcement Learning | [
"Pablo Samuel Castro",
"Subhodeep Moitra",
"Carles Gelada",
"Saurabh Kumar",
"Marc G. Bellemare"
] | Deep reinforcement learning (deep RL) research has grown significantly in recent years. A number of software offerings now exist that provide stable, comprehensive implementations for benchmarking. At the same time, recent deep RL research
has become more diverse in its goals. In this paper we introduce Dopamine, a new research framework for deep RL that aims to support some of that diversity. Dopamine is open-source, TensorFlow-based, and provides compact yet reliable
implementations of some state-of-the-art deep RL agents. We complement this offering with a taxonomy of the different research objectives in deep RL research. While by no means exhaustive, our analysis highlights the heterogeneity of research
in the field, and the value of frameworks such as ours. | [
"reinforcement learning",
"software",
"framework",
"reproducibility"
] | https://openreview.net/pdf?id=ByG_3s09KX | https://openreview.net/forum?id=ByG_3s09KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkeNKSLGxE",
"H1ll2ehxTX",
"Hkx2TztkTX",
"rJgSH8Io37",
"SygEf9Dvhm",
"B1lGa1pUn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544869244389,
1541615784457,
1541538500361,
1541264956629,
1541007883658,
1540964281955
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper725/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper725/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper725/Authors"
],
[
"ICLR.cc/2019/Conference/Paper725/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper725/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper725/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents Dopamine, an open-source implementation of plenty of DRL methods. It presents a case study of DQN and experiments on Atari. The paper is clear and easy to follow.\\n\\nWhile I believe Dopamine is a very welcomed contribution to the DRL software landscape, it seems there is not enough scientific content in this paper to warrant publication at ICLR. Regarding specifically the ELF and RLlib papers, I think that the ELF paper had a novelty component, and presented RL baselines to a new environment (miniRTS), while the RLlib paper had a stronger \\\"systems research\\\" contribution. This says nothing about the future impact of Dopamine, ELF, and RLlib \\u2013 the respective software.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Supportive of open source DRL frameworks, but this is not a scientific contribution\"}",
"{\"title\": \"Thanks\", \"comment\": \"My feeling towards this is that I think it's perfectly reasonable for ICLR to publish new frameworks. But my view is that the contribution needs to entail a novel capability (i.e. it lets us do something that we couldn't do before, or that would be very hard to do before) as opposed to a well-executed framework that does things that have already been doable.\\n\\nFor example, there are strengths to having a framework which is self-contained, but does it provide new capabilities? \\n\\nThis is just my perspective, apparently the ELF paper got a similar review, but the reviewer changed their mind after comment from the area chair / rebuttal (which we can't see):\", \"https\": \"//media.nips.cc/nipsbooks/nipspapers/paper_files/nips30/reviews/1522.html\"}",
"{\"title\": \"Response to reviews\", \"comment\": \"We would like to thank all the reviewers for their comments.\\n\\nWe feel ICLR is the right venue for this type of contribution, as it is providing a stable, reproducible, and reliable framework for others to use.\", \"similar_frameworks_have_been_previously_introduced_at_comparable_conferences\": \"ELF at NIPS 2017 and RLLib at ICML 2018.\"}",
"{\"title\": \"Needs refinement\", \"review\": \"This paper introduces and details a new research framework for reinforcement learning called Dopamine. The authors give a brief description of the framework, built upon Tensorflow, and reproduce some recent results on the ALE framework.\", \"pros\": \"1. Nice execution and they managed to successfully reproduce recent deep RL results, which can be challenging at times.\", \"cons\": \"1. Given that this is a paper describing a new framework, I expected a lot more in terms of comparing it to existing frameworks like OpenAI Gym, RLLab, RLLib, etc. along different dimensions. In short, why should I use this framework? Unfortunately, the current version of the paper does not provide me information to make this choice. Other than the framework, the paper does not present any new tasks/results/algorithms, so it is not clear what the contribution is.\", \"other_comments\": \"1. The paragraphs in sections 2.1 and 2.2 (algorithmic research, architecture research, etc.) seem to say pretty much the same things. They could be combined, and the DQN can be used as a running example to make the points clear.\\n2. The authors mention tests to ensure reliability and reproducibility. Can you provide more details? Do these tests account for semantic bugs common while implementing RL algorithms?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The contribution is not ready to be published in ICLR\", \"review\": \"Summary:\\nThe authors present an open-source, TensorFlow-based framework named Dopamine to facilitate the task of researchers in deep reinforcement learning (deep RL). It allows building deep RL systems using existing components such as reinforcement learning agents, as well as handling memory, logs and providing checkpoints for them.\\nEmphasis is given on providing a unified interface to these agents as well as keeping the framework generic and simple (2000 lines of code).\\nThe framework was demonstrated on Atari games, notably using Deep Q-network agents (DQN).\\nThe authors provide numerous examples of parameter files that can be used with their framework.\\nPerformance results are reported for some agents (DQN, C51, Rainbow, IQN).\\n\\nGiven current trends in deep learning research, unified frameworks such as the one proposed are welcome.\\nThe automation of checkpointing, for instance, is particularly useful for long-running experiments.\\nAlso, trying to reduce the volume of code is beneficial for long-term maintenance and usability.\", \"major_concerns\": [\"This type of contribution may not match the scope of ICLR.\", \"In the abstract and a large fraction of the text, the authors claim that their work is a generic reinforcement learning framework. However, the paper shows that the framework is very dependent on agents playing Atari games. Moreover, the word \\\"Atari\\\" comes out of nowhere on pages 2 and 5.\", \"The authors should mention in the beginning (e.g. in the abstract) that they are handling only agents operating on Atari games.\", \"The positioning of the paper relative to existing approaches is unclear: state of the art is mentioned but neither discussed nor compared to the proposal.\", \"The format of the paper should be revised:\", \"Section 5 (Related Works) should come before presenting the author's work. 
When reading the preceding sections, we do not know what to expect from the proposed framework.\", \"All the code, especially in the appendices, seems better suited to the online documentation of the author's framework than to such a paper.\", \"What is the motivation of the author's experiments?\", \"Reproduce existing results (claimed on page 1)? Then, use the same settings as published works and show that the author's framework reaches the same level of performance.\", \"Show new results (such as the effect of stickiness)? Then the authors should explicitly say that one of the contributions of the paper is to show new results.\", \"The authors say that they want to compare results in Figure 3. They explain why the same scale is not used. In my opinion, the authors should find a way to bring all comparisons to the same scale.\", \"For all these reasons, I think the paper is not ready for publication at ICLR.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A useful framework, but may not have enough research novelty\", \"review\": \"Review: This paper proposed \\\"Dopamine\\\", a new framework for DeepRL. While this framework seems to be useful and the paper seems like a useful guide for using the framework, I didn't think that the paper had enough scientific novelty to be an ICLR paper. I think that papers on novel frameworks can be suitable, but they should demonstrate that they're able to do something or provide a novel capability which has not been demonstrated before.\", \"strengths\": \"-Having a standardized tool for keeping replay buffers seems useful. \\n\\n-The Dopamine framework is written in Python and has 12 files, which means that it should be reasonably easy for users to understand how it's functioning and change things or debug. \\n\\n-The paper has a little bit of analysis of how different settings affect results (such as how to terminate episodes) but I'm not sure that it does much to help us in understanding the framework. I suppose it's useful to understand that the settings which are configurable in the framework affect results? \\n\\n-The result on how sticky actions affect results is nice but I'm not sure what it adds over the Machado (2018) discussion.\", \"weaknesses\": \"-Given that the paper is about documenting a new framework, it would have been nice to see more comprehensive baselines documented for different methods and settings. \\n\\n-I don't understand the point of 2.1, in that it seems somewhat trivial that research has been done on different architectures and algorithms. \\n\\n-In section 4.2, I wonder if the impact of training mode vs. evaluation mode would be larger if the model used a stochastic regularizer. I suspect that in general changing to evaluation mode could have a significant impact.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
SJMO2iCct7 | A NOVEL VARIATIONAL FAMILY FOR HIDDEN NON-LINEAR MARKOV MODELS | [
"Daniel Hernandez Diaz",
"Antonio Khalil Moretti",
"Ziqiang Wei",
"Shreya Saxena",
"John Cunningham",
"Liam Paninski"
] | Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. However, approximate inference techniques are usually necessary when dealing with nonlinear evolution and observations. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and latent dynamics from sequential data. The framework includes a structured approximate posterior, and an algorithm that relies on the fixed-point iteration method to find the best estimate for latent trajectories. We apply the method to several datasets and show that it is able to accurately infer the underlying dynamics of these systems, in some cases substantially outperforming state-of-the-art methods. | [
"variational inference",
"time series",
"nonlinear dynamics",
"neuroscience"
] | https://openreview.net/pdf?id=SJMO2iCct7 | https://openreview.net/forum?id=SJMO2iCct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJl6aLrWxV",
"Sk1hV4YRQ",
"rye_9NEY0Q",
"HkxxIgNtAm",
"BkgfAJVKAQ",
"Skl3Q5xYAQ",
"SkenRtDIRm",
"rJgBQzFjnX",
"Bygvx6sc2X",
"rJe98VxNhm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544799941378,
1543222438517,
1543222416116,
1543221320128,
1543221193964,
1543207460157,
1543039444282,
1541276189053,
1541221614546,
1540781137901
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper724/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper724/Authors"
],
[
"ICLR.cc/2019/Conference/Paper724/Authors"
],
[
"ICLR.cc/2019/Conference/Paper724/Authors"
],
[
"ICLR.cc/2019/Conference/Paper724/Authors"
],
[
"ICLR.cc/2019/Conference/Paper724/Authors"
],
[
"ICLR.cc/2019/Conference/Paper724/Authors"
],
[
"ICLR.cc/2019/Conference/Paper724/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper724/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper724/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers in general like the paper but has serous reservations regarding relation to other work (novelty) and clarity of presentation. Given non-linear state space models is a crowded field it is perhaps better that these points are dealt with first and then submitted elsewhere.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"borderline - but leaning to reject because of reviewer reservations\"}",
"{\"title\": \"Response II to AnonReviewer1\", \"comment\": \"5.- \\\"...exhaustive search is used for finding dimension of latent variable. ... non-parametric approaches to find best latent dimension, .... same technique could be adopted .....\\\"\\n \\nThis is a very interesting idea. In this paper the datasets considered were small enough that performing a simple exhaustive search was feasible, and we were able to thoroughly explore how the forward-interpolated paths changed as the latent dimensionality increased. We agree with the reviewer that on larger datasets, it would certainly be interesting to apply these methods with VIND in order to determine the best latent dimension. \\n\\n6.- \\\"...this paper can be named as well: Linderman, Scott, et al. \\\"Bayesian learning and inference in recurrent switching linear dynamical systems.\\\" Artificial Intelligence and Statistics. 2017....\\\"\\n\\nWe were aware of the work by Linderman, cited in the introduction. We have now included a citation to the work of Rahi.\\n \\n7.- It is desired and interesting to see how the model behaves at one-step-ahead and K-step-ahead prediction. Please address why it cannot be done if there are difficulties in that.\\n \\nIn the manuscript we did evaluate all the tasks using what we called the k-step ahead \\\"forward-interpolation\\\". However, this is essentially prediction, but starting from the most accurate possible estimate of the initial point. This criterion is designed to ascertain how well the fitted dynamics can reproduce the known evolution of the data. We only refrained from calling the procedure \u201cprediction\u201d because the whole data is used to estimate the starting point (smoothing). But we do want to emphasize that the only way to determine whether the trained dynamics are a good description of the evolution of the system is to compare synthetic data generated with them to real data. 
To perform this comparison, we must ensure that the initial latent state is the best possible estimate of the latent state corresponding to the actual data. This is why that initial smoothing is important. \\n\\nAlthough pure prediction is useful for some applications, forward-interpolation is more appropriate for establishing the quality of the learned model.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for the detailed criticism. Below our replies\\n \\n1.- \\\"The main challenge here is to address effectiveness of this model in comparison to other non-linear dynamical system that we can name ... I think authors need to distinguish what this paper can give to community beside approximate posteriori of latent variables that other competing models are not capable of.\\\"\\n\\nWe would like to take the chance to emphasize the main contributions of our paper. As the reviewer remarks, one of the contributions of VIND is a novel structured approximate posterior. However, it is a cardinal point of our work that this approximate posterior (the parent distribution in the manuscript) inherits the terms that describe the evolution in the latent space directly from the Generative Model (GM). Thus, more specifically, regarding the terms describing the latent space dynamics, the posterior is in fact not approximate. That derivation is exact, see Eq. 6. This prescription is also what makes the parent distribution intractable by standard methods. Thus, a third contribution of VIND is a novel algorithm, that allows dealing with that intractable posterior; using the Laplace approximation coupled with a Fixed Point Iteration (FPI) step. \\n\\nTo reiterate, to our knowledge, our method is the first one that allows variational inference on an approximate posterior that inherits the exact nonlinear evolution law from the GM. This is a key contribution of our work. We have updated the introduction to clarify this point.\\n\\nWe agree with the reviewer that a comparison with existing methods for training models with nonlinear dynamics is an important aspect. 
As mentioned in the paper, we considered recent work on Bayesian learning methods such as Deep Kalman Filters (Krishnan et al, 2015), as well as RNN-based methods such as LFADS (Sussillo et al, 2016), for comparison, and provided qualitative differences between these methods and our approach. We struggled to make definitive quantitative comparisons by applying these methods on our data, since (1) both methods required specific tuning of hyperparameters, and there are many in both methods; and (2) the k-steps-ahead R^2 measure we use is, for the most part, not easy to obtain from the publicly available code. We are of the opinion that this metric is a more informative measure of how well the model is performing, as compared to the simple R^2 measures. We will continue to pursue making these quantitative comparisons, but at this moment it is still work in progress.\\n \\n2.- If the aim is to have that posterior, the authors should show what type of interpretation they have drawn from it in the experiments.\\n \\nWe are unsure about what the reviewer is asking us in this question. In the manuscript we presented several conclusions extracted from VIND\u2019s experiments. In particular, for the Lorenz and single-cell systems we argued that VIND is able to uncover both the underlying dimensions of the system and its dynamics. In the electrophysiology task, we showed that VIND is able to separate between the two trial types corresponding to anterior-posterior pole discrimination. We would like to kindly ask the reviewer to clarify the question.\\n\\n3.- As mentioned in the Quality section, the authors should be clearer about what distinguishes this paper from other non-linear dynamical systems.\\n \\nAs we emphasized above, a crucial point of the paper - that, in particular, makes it different from other methods for variational inference of nonlinear dynamics - is that inference is performed on an evolution law that is read directly from the proposed Generative Model. 
We believe that this is partly the reason why the evolution of the system (forward-interpolation) with the VIND-trained dynamics performs so well across tasks. We have added sentences in the introduction and the Conclusions to further make this point clear.\\n\\nWe tried our best to make comparisons as mentioned above. Apart from the issues already mentioned, we felt that fair direct comparisons to the methods for inference of nonlinear dynamics mentioned above (LFADS, DKF) were made difficult by the fact that as far as we are aware, they have been mainly tested by their own developers.\\n \\n4.- they used short form RM for Recognition model or FPI for fixed point iteration that need need to be defined before being used\\n \\nWe would like to point out that RM for Recognition Model was defined on page 2, right before Eq. 1, and FPI for fixed-point-iteration was defined in the Introduction section to the paper. We believe these are the first instances that these two terms were mentioned.\"}",
"{\"title\": \"Response II to AnonReviewer2\", \"comment\": \"5.- \\\"... Gaussian VIND performing better ... pseudo-r^2 instead, Poisson VIND may outperform....\\\"\\n\\nThe ephys data was sampled at 60 kHz and binned to 67 ms. We have added this information in the text.\\n \\nWe agree that Poisson VIND is performing much better, especially at forecasting, and also yields smoother dynamics. We are not sure, however, how to perform a meaningful comparison of the model with Gaussian observations and the model with Poisson observations using a pseudo R^2. These two models have different likelihoods, so - as opposed to the regular R^2 that we used - typical pseudo R^2s, like McFadden\\u2019s, would be computing different quantities. \\n \\n6.- \\\"The supplementary material is essential for this paper. The main text is not sufficient to understand the method.\\\"\\n\\nYes. When designing the paper, and due to the length constraints, we faced a decision between writing a more theoretical paper or writing a paper emphasizing the usefulness of VIND in a varied set of tasks. We ultimately decided for the latter, which resulted in important information on the methods being presented as supplementary material. \\n\\nFollowing your suggestion (and that of reviewer #3) we have moved portions of the theoretical appendix to the main text.\\n\\n7.- \\\"This method relies on the fixed point update rule operating in a contractive regime. ... Please add this information.\\\"\\n \\nYes! The fixed-point iteration is in the contractive regime when the absolute value of the determinant of the Jacobian of the iterative map (r in Eq. 12) is smaller than 1. This can be guaranteed, for example, by ensuring that the entries of the Jacobian are small enough and then invoking the Gershgorin Circle Theorem. For LLDS/VIND, the Jacobian of r is proportional to both the hyperparameter alpha and to the gradients of the evolution network with respect to the latent state. 
These two are indeed required to be relatively small in order to guarantee the smoothness of the evolution. For instance, in our experiments, we found that choosing alpha ~ 10^-1, and a softplus nonlinearity in the next-to-last hidden layer of the evolution network, ensured gradients small enough to be in the contractive regime as desired.\\n\\nWe have added a paragraph in Appendix A addressing this point.\\n \\n8.- \\\"There's a trial index suddenly appearing in Algorithm 1 that is not mentioned anywhere else.\\\"\\n \\nWe meant a batch index. We fixed it in the text.\\n \\n9.- \\\"Is the ADAM gradient descent in Algorithm 1 just one step or multiple?\\\"\\n \\nWe kindly ask the reviewer to clarify whether this question refers to a) updating all the trainable parameters at once versus specific subsets in some order, or b) performing multiple gradient descent steps per FPI within one epoch. If a), then it is a single-step ADAM gradient descent. Regarding b), we tried different setups. One gradient descent step per FPI appears to yield the best results.\\n \\n10.- \\\"MSE -> MSE_k in eq 13\\\"\\n \\nFixed!\\n \\n11.- \\\"LFADS transition function is not deterministic. (page 4)\\\"\\n \\nWe agree that the sentence was ambiguous. \\n\\nIn order to compare VIND to LFADS (or to any other model) we would argue that the fair comparison is among the respective Generative Models. In our understanding, the LFADS GM, as read for instance in Eqs. 1-6 of arXiv:1608.06315, has a deterministic transition function, Eq. (3), with a stochastic input. We agree, however, that the evolution of the full LFADS model is not deterministic due to the presence of the back link from the GM factors to the controller. This makes the evolution of full LFADS, Generative plus Recognition, non-deterministic.\\n\\nWe have removed that sentence from the manuscript.\\n \\n12.- \\\"log Q_{phi,varphi} is quadratic in Z for the LLDS case. 
Text shouldn't be 'includes terms quadratic in Z' (misleading).\\\"\\n \\nBut our log Q_{phi,varphi} is not strictly quadratic in Z for the LLDS, right? It contains A(Z), which is an arbitrary nonlinearity.\\n \\n13.- \\\"regular gradient ascent update --> need reference (page 4)\\\"\\n\\nFixed!\\n \\n14.- \\\"Due to the laplace approximation step, you don't need to infer the normalization term of the parent distribution. This is not described in the methods (page 3).\\\"\\n \\nIndeed! Added clarifying sentence.\\n \\n15.- \\\"Eq 4 and 5 are inconsistent in notation.\\\"\\n \\nFixed!\\n \\n16.- \\\"Eq (1-6) are not novel but text suggests that it is.\\\"\\n \\nWe improved the text around Eqs. (1-6) and added citations to eliminate misleading claims.\\n \\n17.- \\\"Predict*ive* mean square error (page 2)\\\"\\n \\nFixed.\\n \\n18.- \\\"arXiv papers need better citation formatting.\\\"\\n \\nWe have fixed the arXiv citation style to include arXiv preprint numbers (they seem to be removed by default by the ICLR style file?)\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the comprehensive review. We respond below to all the arguments/objections.\\n \\n1.- \\\"...it would be nice to see actual simulations from the learned LLDS for a longer period of time. For example, is the shape of the action potential accurate in the single cell example? (it should be since the 2 ms predictive r^2 shows around 80%).\\\"\\n \\nAs you recommend, we have included a new appendix with more figures on the Allen data fits. In particular, we show a simulation from the learned LLDS for 10, 20 and 30 time steps ahead. Even at 30 steps, the dynamics is still producing spikes at roughly the right times, although some deterioration of performance becomes evident.\\n \\n2.- \\\"Except in Fig 2, the 3 other examples are only compared against GfLDS. Since GfLDS involves nonconvex optimization, it would be reasonable to also request a simple LDS as a baseline to make sure it's not an issue of GfLDS fitting.\\\"\\n\\nWe tried fitting an LDS to the Lorenz data using the PyKalman library, which learns the LDS with a standard implementation of the EM algorithm. The algorithm, applied to a dataset of multiple trials with Gaussian observations, was unable to converge for a 3D latent space. The algorithm does converge for a dataset consisting of a single trial, but the dynamics it finds does not generalize. Moreover, the single-trial dynamics performs comparatively poorly even when tested on the data used for training, with the 1-step forward interpolation on training data yielding an average k=1 R^2 of 0.938 (compare with VIND\\u2019s k=1 R^2 of 0.998 on the Lorenz validation data).\\n\\nThe EM algorithm for learning the LDS does not yield meaningful results in the case of dimensionality expansion (Allen data) either. In that case, it simply copies the data to one of the latent space dimensions and yields the identity transition function. 
For all these reasons we thought that the LDS baseline was not very informative and decided not to include it.\\n\\n3.- \\\"For the r^2=0.49 claim on the left to right brain prediction, how does a baseline FA or CCA model perform?\\\"\\n \\nFollowing your suggestion, we performed a baseline CCA analysis. It provided an R^2 of 0.45, which is smaller than, though comparable to, VIND\\u2019s R^2 of 0.49. We have included this in the manuscript. Of course, on top of the superior R^2, VIND can also make predictions, since it fits a dynamical model, which CCA cannot.\\n \\n4.- \\\"Was input current ignored in the single cell voltage data? Or you somehow included the input current as observation model?\\\"\\n \\nThe data used in the single-cell voltage experiments was taken from samples where the input current was held constant throughout each trial (although not across different trials). Therefore, for this dataset, the input current behaves as a parameter that varies per trial and roughly determines the region of phase space occupied by the latent trajectory.\\n\\nIn the new appendix we have included a plot showing the latent paths of two trials corresponding to different input currents. The plot illustrates how these trials occupy different regions of the latent space, which we interpret, at least partly, as the representation of the constant input current as a coordinate in the latent space. We should further say that, although it is not included in this manuscript, we are working on an extension of VIND to find latent dynamics that accept arbitrary inputs, such as the time-varying current in the Allen data.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the useful comments; our replies are below.\\n \\n1. \\\"The clarity is below average. In Section 2 the main method is introduced. However, the motivation and benefits of introducing a parent and child variational approximation are not discussed adequately. It would be helpful to move some of the stuff in the appendix to the main text, and present in a neat way.\\\"\\n \\nWe fully agree with this criticism. Following your suggestion (and also that of reviewer #2) we have moved some material from the appendix to the main text.\\n\\n2. \\\"I also struggled a little to understand what is the difference between forward interpolate and filtering\\\"\\n \\nIn this work, we use filtering to refer to the process of inferring the optimal latent state z_t at time t using observations x_{1:t} from the trial up to time t, not including observations to the future of t. By forward interpolation we refer to the process of smoothing (inferring the optimal z_t from observations of the complete trial x_{1:T}, including points to the future of t) and then evolving the inferred z_t with the learned VIND dynamics. After evolving for k steps, the Generative Model is used to generate data, which is subsequently compared with the observations at time t+k. We do not refer to this procedure as \\u201cprediction\\u201d since the initial state z_t for the forward interpolation was obtained by making use of the full data.\\n \\nWe have added clarifying comments at the beginning of section 4.\\n \\n3. \\\"Given the existing body of literature, I found the technical novelty of this paper rather weak\\\"\\n \\nWe would like to reiterate that the novelty of the paper is <i>twofold.</i> \\n\\nFirst and foremost, we propose the use of a novel variational approximate posterior that shares the nonlinear dynamics with the generative model. 
This feature is powerful because it uses known information about the true posterior in the design of the approximate one. Naively, the feature also seems to be a curse because the variational approximation is rendered intractable for the case of nonlinear dynamics. This is the reason why such approximate posteriors have not been proposed before. We have added a sentence in the introduction emphasizing this crucial point.\\n\\nThe second novelty is a method to deal with this intractability, via the Laplace approximation and the fixed-point iteration method. We showed that the resulting algorithm, which alternates a gradient step with an FPI step, yields very good results on well-known, difficult tasks such as dimensionality expansion in the single cell data or the WFOM task.\\n \\n4.- \\\"abstract: uncover nonlinear observation? -> maybe change \\\"observation\\\" to \\\"latent dynamics\\\"?\\\"\\n \\nThe term \\u2018nonlinear observation\\u2019 in the line \\u201c\\u2026Variational Inference for Nonlinear Dynamics (VIND), that is able to uncover nonlinear observation and transition functions from sequential data \\u2026\\u201c, found in the abstract, refers to the observation map in the Generative Model. That is, VIND uncovers both a nonlinear \\u201cobservation\\u201d model, which nonlinearly maps a latent state to the data, and nonlinear latent dynamics mapping the latent state at time t to the state at time t+1, which we refer to as the \\u201cnonlinear transition functions\\u201d. \\n\\nOn the other hand, we agree that \\u201cnonlinear latent dynamics\\u201d is a better fit than \\u201ctransition functions\\u201d for the abstract, and we have made this replacement.\"}",
"{\"title\": \"We are working on the reply, will submit before the Monday deadline\", \"comment\": \"Please allow us to address all the reviewers' comments.\"}",
"{\"title\": \"Nice algorithm but need better motivation\", \"review\": \"This paper discusses an algorithm for variational inference in nonlinear dynamical models. The model assumption is a single-stage Markov model in latent space, with every latent variable Z_t Gaussian distributed with a mean that depends on Z_(t-1) and a time-invariant variance matrix lambda. The nonlinearity of the transition is encoded in the mean of the Gaussian distribution. For the likelihood and observation model, a Poisson or Normal distribution is used, with X_t sampled from a Gaussian or Poisson distribution whose parameters encode the nonlinearity as a function of Z_t. This way of modeling resembles that of many linear dynamical models, with the difference that the transition and observation distributions have nonlinear terms encoded in them.\", \"the_contribution_of_this_paper_can_be_summarized_over_following_points\": \"- The authors propose a nonlinear transition and observation model and introduce a tractable inference scheme using the Laplace approximation, in which, for every given set of model parameters, one solves for the parameters of the Laplace approximation of the posterior, and the model parameters are then updated until convergence.\\n\\n- The second point is to show how this model successfully captures the nonlinearity of the data, while linear models do not have that capability.\", \"novelty_and_quality\": \"The main contribution of this paper is summarized above. The paper does not contain any significant theorems or mathematical claims, except the derivation steps for finding the Laplace approximation of the posterior. The main challenge here is to address the effectiveness of this model in comparison to other nonlinear dynamical systems; one can name papers as early as Ghahramani, Zoubin, and Sam T. Roweis. \\\"Learning nonlinear dynamical systems using an EM algorithm.\\\" Advances in neural information processing systems. 1999. 
\nor more recent RNN/LSTM-based papers. I think the authors need to distinguish what this paper can give the community, beyond an approximate posterior over latent variables, that other competing models are not capable of. If the aim is to have that posterior, the authors should show what type of interpretation they have drawn from it in the experiments.\\nThere is also a large body of literature on speech, language models, and visual prediction that can be used as reference.\", \"clarity\": \"The paper is well written and some previous relevant methods have been reviewed. There are a few issues, listed below: \\n\\n1- As mentioned in the Quality section, the authors should be clearer about what distinguishes this paper from other nonlinear dynamical systems. \\n\\n2- They use the short forms RM for Recognition Model and FPI for fixed-point iteration, which need to be defined before being used.\", \"significance_and_experiments\": \"The experiments are extensive, and the authors have compared their algorithm with some competing linear dynamical system (LDS) algorithms, showing improvement in many of the cases for trajectory reconstruction. \\nA few points could be addressed better: it can be seen that for many of the experiments, exhaustive search is used to find the dimension of the latent variable. This issue is addressed in Kalantari, Rahi, Joydeep Ghosh, and Mingyuan Zhou. \\\"Nonparametric Bayesian sparse graph linear dynamical systems.\\\" arXiv preprint arXiv:1802.07434 (2018). That paper uses nonparametric approaches to find the best latent dimension; although it applies the technique to linear systems, the same technique could be adapted to nonlinear models. That model is also capable of finding multiple linear systems that model the nonlinearity by switching between different linear systems; for switching linear systems, this paper can be named as well: Linderman, Scott, et al. 
\\\"Bayesian learning and inference in recurrent switching linear dynamical systems.\\\" Artificial Intelligence and Statistics. 2017.\\n\\nIt is shown that the model can reconstruct the spikes very well while linear models do not have that power (which is expected), but it would be interesting to see how other nonlinear models compare to this model under those conditions.\\n\\nIt would also be desirable and interesting to see how the model behaves in one-step-ahead and k-step-ahead prediction. Please address why this cannot be done if there are difficulties in doing so.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Excellent method and results but need more comparisons and better writing\", \"review\": \"I'll start with a disclaimer: I have reviewed the NIPS 2019 submission of this paper which was eventually rejected. Compared to the NIPS version, this manuscript has significantly improved in its completeness. However, the writing can still be improved for rigor, consistency, typos, completeness, and readability.\\n\\nAuthors propose a novel variational inference method for a locally linear latent dynamical system. The key innovation is in using a structured \\\"parent distribution\\\" that can share the nonlinear dynamics operator in the generative model, making it more powerful. However, this parent distribution is not usable, since it's an intractable variational posterior. Normally, this would prevent variational inference, but the authors take another step by using Laplace approximation to build a \\\"child distribution\\\" with a multivariate Gaussian form. During the inference, the child distribution is used, but the parameters of the parent distribution can still be updated through the entropy term in the stochastic ELBO and the Laplace approximation. They use a clever trick to formulate the usual optimization in the Laplace approximation as a fixed point update rule and take one fixed point update per ADAM gradient step on the ELBO. This allows the gradient to flow through the Laplace approximation.\\n\\nSome of the results are very impressive, and some are harder to evaluate due to lack of proper comparison. For all examples, the forward interpolate (really forecasting with smoothed initial condition) provides a lot of information. However, it would be nice to see actual simulations from the learned LLDS for a longer period of time. For example, is the shape of the action potential accurate in the single cell example? 
(it should be since the 2 ms predictive r^2 shows around 80%).\\n\\nExcept in Fig 2, the 3 other examples are only compared against GfLDS. Since GfLDS involves nonconvex optimization, it would be reasonable to also request a simple LDS as a baseline to make sure it's not an issue of GfLDS fitting.\\n\\nFor the r^2=0.49 claim on the left to right brain prediction, how does a baseline FA or CCA model perform?\\n\\nWas input current ignored in the single cell voltage data? Or you somehow included the input current as observation model?\\n\\nAs for the comment on Gaussian VIND performing better on explaining variance of the data even though it was actually count data, I think this may be because you are measuring squared error. If you measured point process likelihood or pseudo-r^2 instead, Poisson VIND may outperform. Both your forecasting and the supplementary results figure show that Poisson VIND is definitely doing much better! (What was the sampling rate of the Guo et al data?)\\n\\nThe supplementary material is essential for this paper. The main text is not sufficient to understand the method.\\n\\nThis method relies on the fixed point update rule operating in a contractive regime. Authors mention in the appendix that this can be *guaranteed* throughout training by appropriate choices of hyperparameters and network architecture. This seems to be a crucial detail but is not described!!! Please add this information.\\n\\nThere's a trial index suddenly appearing in Algorithm 1 that is not mentioned anywhere else.\\n\\nIs the ADAM gradient descent in Algorithm 1 just one step or multiple?\\n\\nMSE -> MSE_k in eq 13\\n\\nLFADS transition function is not deterministic. (page 4)\\n\\nlog Q_{phi,varphi} is quadratic in Z for the LLDS case. 
Text shouldn't be 'includes terms quadratic in Z' (misleading).\\n\\nregular gradient ascent update --> need reference (page 4)\\n\\nDue to the laplace approximation step, you don't need to infer the normalization term of the parent distribution. This is not described in the methods (page 3).\\n\\nEq 4 and 5 are inconsistent in notation.\\n\\nEq (1-6) are not novel but text suggests that it is.\\n\\nPredict*ive* mean square error (page 2)\\n\\nIntroduction can use some rewriting.\\n\\narXiv papers need better citation formatting.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Incremental technical contribution but with extensive experimental evaluation\", \"review\": \"The paper presents a variational inference approach for locally linear dynamical models. In particular, the latent dynamics are drawn from a Gaussian approximation of the parent variational distribution, enabled by Laplace approximations with fixed point updates, while the parameters are optimized via the resulting stochastic ELBO. Experiments demonstrate the ability of the proposed approach to learn nonlinear dynamics, explain data variability, forecast, and infer latent dimensions.\", \"quality\": \"The experiments appear to be well designed and support the main claims of the paper.\", \"clarity\": \"The clarity is below average. In Section 2 the main method is introduced. However, the motivation and benefits of introducing a parent and child variational approximation are not discussed adequately. It would be helpful to move some of the stuff in the appendix to the main text, and present in a neat way. I also struggled a little to understand what is the difference between forward interpolate and filtering.\", \"originality\": \"Given the existing body of literature, I found the technical novelty of this paper rather weak. However, it seems the experiments are thoroughly conducted. In the tasks considered, the proposed method demonstrates convincing advantages over its competitors.\", \"significance\": \"The method should be applicable to a wide variety of sequential data with nonlinear dynamics.\\n\\nOverall, this appears to be a borderline paper with weak novelty. On the positive side, the experimental validation seems well done. The clarity of this paper needs to be strengthened.\", \"minor_comments\": [\"abstract: uncover nonlinear observation? 
-> maybe change \\\"observation\\\" to \\\"latent dynamics\\\"?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
B1euhoAcKX | DppNet: Approximating Determinantal Point Processes with Deep Networks | [
"Zelda Mariet",
"Jasper Snoek",
"Yaniv Ovadia"
] | Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items. For this reason, they have gained prominence in many machine learning applications that rely on subset selection. However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that makes DPP so powerful and unique. Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative. | [
"dpp",
"submodularity",
"determinant"
] | https://openreview.net/pdf?id=B1euhoAcKX | https://openreview.net/forum?id=B1euhoAcKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1gcAF3AkV",
"SJgXOvI90m",
"r1gyaOndAQ",
"r1gBYd2O07",
"HJeUZuh_07",
"BkxkYv2ORX",
"B1lR8paT3X",
"r1x_WCGan7",
"HylaX8OF2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544632786412,
1543296875027,
1543190710975,
1543190653087,
1543190526331,
1543190390530,
1541426518356,
1541381632334,
1541142052899
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper723/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper723/Authors"
],
[
"ICLR.cc/2019/Conference/Paper723/Authors"
],
[
"ICLR.cc/2019/Conference/Paper723/Authors"
],
[
"ICLR.cc/2019/Conference/Paper723/Authors"
],
[
"ICLR.cc/2019/Conference/Paper723/Authors"
],
[
"ICLR.cc/2019/Conference/Paper723/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper723/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper723/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresses the complexity issue of Determinantal Point Processes via generative deep models.\\n\\nThe reviewers and AC note the critical limitation of this paper's applicability to variable ground set sizes, and the authors' rebuttal is not convincing enough.\\n\\nThe AC thinks the proposed method has potential and is interesting, but decided that the authors need more work before publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited applicability\"}",
"{\"title\": \"Additional experiments have been included\", \"comment\": \"Following the recommendation of the reviewers, we have added two experiments to our paper:\\n\\n- A timing comparison between MCMC sampling and DppNet, which shows that DppNet is significantly faster than MCMC sampling.\\n- An evaluation of DppNet for kernel reconstruction via the Nystrom method (a downstream task for which DPPs are known to be successful), which we compare to standard and MCMC sampling. In practice, we see that DppNet's performance on this task matches or outperforms the other baselines.\\n\\nPut together, these experiments show that DppNet is significantly faster than MCMC, while being competitive (or outperforming) MCMC on downstream DPP tasks.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Thank you for your comments; we hope the below clarifications answer your questions.\\n\\n(1) Sampled digits from k-medoids: Thank you for catching this oversight; we will include the k-medoid samples in the updated paper.\\n(2) Other measure of performance: we will add an additional evaluation on a downstream task, using DppNet and other baseline methods to sample columns to reconstruct a large kernel with the Nystrom method.\\n(3) GenDpp mode: Yes, this indicates the greedy mode of Algorithm 1; we will clarify this.\"}",
"{\"title\": \"Several clarifications\", \"comment\": \"Thank you for your suggestions regarding the clarity of the paper; we will augment our work with all suggested algorithms and equations.\", \"limited_applicability_to_variable_ground_set_sizes\": \"this is a drawback of our current approach. However, one can easily circumvent this problem in cases where an upper bound N_max on ground set sizes is known: train a DppNet with ground set size N_max, and in all cases where N <= N_max, complete the missing items with placeholder 0 vectors. Algorithm 1 can be trivially modified to take this into account and ensure that these dummy items are not selected.\", \"evaluation_biased_towards_dppnets\": \"our goal is to show that DppNet approximates DPP samples (much) better than other reasonable approximations. We did not originally include downstream tasks, as DPPs have been accepted as a state-of-the-art method for diverse sampling in ML applications (see e.g. recent work such as https://dl.acm.org/citation.cfm?id=3272018 for real-world applications). However, as mentioned to Reviewer 1, we will include a downstream task (kernel reconstruction via the Nystrom method) to further support our claims.\", \"cost_of_evaluating_the_marginal_probabilities\": [\"indeed, this is the costly part of our algorithm (which only impacts training time); this cost is mitigated by two aspects:\", \"Given S, we can compute the probability P(S U {i} | S) for all i simultaneously with no overhead (Eq. 2)\", \"When training a DppNet over varying ground sets, this cost is offset by the fact that we are not learning one but a whole class of DPPs simultaneously.\", \"\\u201cDPP Gao\\u201d: This is a typo; it should read \\u201cDPP Goal\\u201d\", \"K-medoids: Yes, we run the algorithm multiple times with different initializations.\", \"Greedy sampling: Yes, we are stating that greedy sampling with DppNet yields realistic DPP samples, as evidenced by the high DPP log-likelihoods. 
This is a significant advantage over standard DPPs, since the greedy DPP mode algorithm is costly even with recent improvements [NIPS \\u201818, Hulu].\"]}",
"{\"title\": \"Expected runtimes of various DPP sampling methods; only DppNet benefits from hardware acceleration.\", \"comment\": \"Thank you for your feedback. We will include a comparison to other approximate sampling methods, such as the coresets method you mentioned, in an updated version of the paper.\\n\\nHowever, we would like to insist upon the following: \\n- The runtime of the coreset sampling method is O(Nk^3), which is the same as the runtime of the dual DPP sampling discussed in section 2.\\n- The coreset approach does not have a hardware accelerator-friendly implementation, as it requires iteratively computing elementary symmetric polynomials and many sequential operations; the same holds for MCMC sampling methods.\\nFor this reason, we expect DppNet to have a drastically faster runtime even when compared to such methods on small datasets. \\n\\nRegarding comparing DppNet to other methods on applications where there is a gap between uniform and DPP sampling, we agree that doing so will increase the impact of our paper. We are planning to augment our experimental section with an evaluation of all methods on the task of reconstructing large kernels via the Nystrom method.\"}",
"{\"title\": \"Clarification re: experimental section\", \"comment\": \"We thank the reviewers for their detailed comments. We would like to clarify the aim of our experimental section: over the past years, DPPs have proven crucial to modeling diversity and quality trade-offs in subset selection problems (recommender systems, kernel reconstruction, \\u2026). For this reason, our experiments aim to show that DppNets approximate DPPs better than other reasonable baselines (which we show by comparing NLLs under the true DPP). Crucially, our experiments do not aim to show that DppNet generates more diverse subsets: showing that DppNet is close to DPP samples is sufficient.\\n\\nHowever, based on the feedback we have received, we plan to incorporate additional experiments into an updated version of the paper to show that DppNet\\u2019s DPP-like samples imply good performance on downstream tasks where DPPs have been shown to be valuable.\"}",
"{\"title\": \"comparison with faster algorithms for sampling from DPPs\", \"review\": \"Determinantal Point Processes provide an efficient and elegant way to sample a subset of diverse items from a ground set. This has found applications in summarization, matrix approximation, minibatch selection. However, the naive algorithm for DPP takes time O(N^3), where N is the size of the ground set. The authors provide an alternative model DPPNet for sampling diverse items that preserves the elegant mathematical properties (closure under conditioning, log-submodularity) of DPPs while having faster sampling algorithms.\\n\\nThe authors need to compare the performance of DPPNet against faster alternatives to sample from DPPs, e.g., https://arxiv.org/pdf/1509.01618.pdf, as well as compare on applications where there is a significant gap between uniform sampling and DPPs (because there are the applications where DPPs are crucial). The examples in Table 2 and Table 3 do not address this.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting paper with good ideas but limited applicability (in its current form)\", \"review\": \"This paper proposes a scalable algorithm for sampling from DppNets, a proposed model which approximates the distribution of a DPP. The approach builds upon a proposed inhibitive attention mechanism and transformer networks.\\n\\nThe proposed approach and focus on sampling is original as far as I can tell. The problem is also important to parts of the community as DPPs (or similar distributions) are used more and more frequently. However, the applicability of the proposed approach is limited as it is unclear how to deal with varying ground set sizes \\u2014 the authors briefly discuss this issue in their conclusion, proposing to circumvent this problem by subsampling (this can however be problematic, either requiring to sample from a DPP or incurring high probability of missing \\u201eimportant\\u201c items).\\n\\nFurthermore, the evaluation method used is \\u201ebiased\\u201c in favor of DppNets, as numerical results evaluate the likelihood of samples under the DPP which the DppNet is trained to approximate. This makes it difficult to draw conclusions from the presented results. I understand that this evaluation is used as there is no standard way of measuring diversity of a subset of items, but it is also clear that \\u201eno\\u201c baseline can be competitive. One possibility to overcome this bias would be to consider a downstream task and evaluate performance on that task. \\n\\nFurthermore, I suggest making certain aspects of the paper more explicit and providing additional details. For instance, I would suggest spelling out a training algorithm and providing equations for the training criterion and the evaluation criterion. Please comment on the cost of training (constantly computing the marginal probabilities for training should be quite expensive) and the convergence of the training (maybe show a training curve; this would be interesting in the light of Theorem 1 and Corollary 1).\", \"certain_parts_of_the_paper_are_unclear_or_details_are_missing\": [\"Table 3: What is \\u201eDPP Gao\\u201c?\", \"How are results for k-medoids computed (including the standard error)? Are these results obtained by computing multiple k-medoids solutions with differing initial conditions?\", \"In the paper you say: \\u201eFurthermore, greedily sampling the mode from the DPPNET achieves a better NLL than DPP samples themselves.\\u201c What are the implications of this? What is the NLL of an (approximate) mode of the original DPP? Is the statement you want to make that the greedy approximation works well?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper proposes DppNet, which approximates determinantal point processes with deep networks via an inhibitive attention mechanism. The authors provide a theoretical analysis showing that, under certain conditions, the DppNet is log-submodular. Further, some experiments are conducted to show the performance.\", \"review\": \"Quality (5/10): This paper proposes DppNet, which approximates determinantal point processes with deep networks via an inhibitive attention mechanism. The authors provide a theoretical analysis showing that, under certain conditions, the DppNet is log-submodular.\\n\\nClarity (9/10): This paper is well written and provides a clear figure to demonstrate their network architecture.\\n\\nOriginality (6/10): This paper is mainly based on the work [Vaswani et al, Attention is all you need, 2017]. It computes the dissimilarities by subtracting attention in the original work from one, and then samples a subset by an unrolled recurrent neural network. \\n\\nSignificance (5/10): This paper uses negative log-likelihood as the measurement to compare DppNet with other methods. Without further application, it is difficult to measure the improvement of this method over other methods.\", \"pros\": \"(1) This paper is well written and provides a figure to clearly demonstrate their network architecture.\\n\\n(2) This paper provides a deep learning way to sample a subset of data from the whole data set and reduce the computation complexity.\\n\\nThere are some comments.\\n(1) Figure 4 shows the sampled digits from Uniform distribution, DppNet (with Mode) and Dpp. How about the sampled digits from k-Medoids? Providing the sampled digits from k-Medoids can make the experiments more complete.\\n\\n(2) The objective of DppNet is to minimize the negative log-likelihood. The DPP and k-Medoids have other motivations, not directly optimizing the negative log-likelihood. This may be the reason why DppNet has a better performance on negative log-likelihood, even than DPP. Could the authors provide some other measures (like the visual comparison in figure 4) to compare these methods?\\n\\n(3) Does GenDpp Mode in Table 2 mean the greedy mode in Algorithm 1? A clearer notation would make this explicit.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rkxd2oR9Y7 | The Case for Full-Matrix Adaptive Regularization | [
"Naman Agarwal",
"Brian Bullins",
"Xinyi Chen",
"Elad Hazan",
"Karan Singh",
"Cyril Zhang",
"Yi Zhang"
] | Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide novel theoretical analysis
for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices. Our preliminary experiments underscore improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks. | [
"adaptive regularization",
"non-convex optimization"
] | https://openreview.net/pdf?id=rkxd2oR9Y7 | https://openreview.net/forum?id=rkxd2oR9Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxnmZH1gE",
"r1e2sLyfRm",
"B1lP_IyMRm",
"S1xoNIkzAQ",
"SJgzxEO5hQ",
"rJggFt49nX",
"BJgICDN92m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544667427529,
1542743715922,
1542743662832,
1542743602761,
1541207018445,
1541192055614,
1541191630066
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper722/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper722/Authors"
],
[
"ICLR.cc/2019/Conference/Paper722/Authors"
],
[
"ICLR.cc/2019/Conference/Paper722/Authors"
],
[
"ICLR.cc/2019/Conference/Paper722/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper722/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper722/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper shows how to implement a low-rank version of the Adagrad preconditioner in a GPU-friendly manner. A theoretical analysis of a \\\"hard-window\\\" version of the proposed algorithm demonstrates that it is not worse than SGD at finding a first-order stationary point in the nonconvex setting. Experiments on CIFAR-10 classification using a ConvNet and Penn Treebank character-level language modeling using an LSTM show that the proposed algorithm improves training loss faster than SGD, Adagrad, and Adam (measuring time in epochs) and has better generalization performance on the language modeling task. However, if wall-clock time is used to measure time, there is no speedup for the ConvNet model, but there is for the recurrent model. The reviewers liked the simplicity of the approach and greatly appreciated the elegant visualization of the eigenspectrum in Figure 4. But, even after discussion, critical concerns remained about the need for more focus on the practical tradeoffs between per-iteration improvement and per-second improvement in the loss and the need for a more careful analysis of the relationship of this method to stochastic L-BFGS. A more minor concern is that the term \\\"full-matrix regularization\\\" seems somewhat deceptive when the actual regularization is low rank. The AC also suggests that, if the authors plan to revise this paper and submit it to another venue, they consider the relationship between GGT and the various stochastic natural gradient optimization algorithms in the literature that differ from GGT primarily in the exponent on the Gram matrix.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Needs more focus on wall clock time and more analysis of the relationship to similar approaches\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks for the review.\", \"there_are_two_significant_inaccuracies\": \"1. GGT does not take the view of a low-rank *approximation*. This is a central point of the paper.\\n2. Re: iterative methods: the preconditioner is a -1/2 power of the Gram matrix, not the inverse.\", \"more_details_below\": \"@Inverse square root: We are fully aware of the distinction.\\n- Note that iterative solvers like conjugate gradient do not immediately apply here, as we are solving a linear system in M^{1/2}, not M.\\n- Krylov subspace iterative solvers suffer from a condition number dependence, incurring a hard tradeoff between iteration complexity and \\\\eps. [1]\\n- We actually *did* try polynomial approximations to M^{-1/2} as an alternative to our proposed small-SVD step. We saw worse approximation (the condition number dependence kicks in) and worse GPU performance (parallel computation time scales with polynomial degree).\\n\\n@Full-matrix terminology: The use of \\u201cfull-matrix\\u201d to distinguish from \\u201cdiagonal-matrix\\u201d is standard, and taken directly from [2].\\n\\n@Full-matrix vs. full-rank: Note that we do not consider the windowed Gram matrix to be an \\u201capproximation\\u201d of the \\u201cfull\\u201d Gram matrix. The window is for the purpose of forgetting gradients from the distant past, motivated by (1) our theory, (2) the small-scale synthetic experiments, and (3) the extreme ubiquity of Adam and RMSprop, which do the same. Note that we do no approximation on the windowed Gram matrix; the fact that it is low rank is a feature.\\n\\n@Location of \\\\mu definition: Is the reviewer\\u2019s suggestion simply to move this definition into the intro?\\n\\n@Comparison with second-order methods: Please refer to our response to Reviewer 1 for some additional comments.\\n\\n@Tweaks: We don\\u2019t believe that any of the tweaks should be so controversial.\\n- The \\\\eps parameters are present in *every* adaptive optimizer, for stability. The interpolation with SGD is just another take on this.\\n- The exponential smoothing of the first moment estimator is a subtler point. As we point out in Appendix A.2, in the theory for Adam/AMSgrad [3,4], \\\\beta_1 *degrades* the moment estimation, yet everyone uses momentum in practice. Even if this is unconvincing, the performance gap upon removing this tweak is minor, and our empirical results hold without this tweak. We are simply offering a heuristic that we have observed to help training unconditionally, just like momentum in Adam.\\n\\n@Informal main theorem: By \\u201cinformal\\u201d we truly mean that we are suppressing the smoothness constants (L, M) for readability and space constraints. We are simply adopting the widespread practice of deferring the non-asymptotic mathematical statement to the appendix.\\n\\n[1] Tight complexity bounds for optimizing composite objectives. Blake E Woodworth, Nati Srebro. NIPS 2016. \\n[2] Adaptive subgradient methods for online learning and stochastic optimization. J Duchi, E Hazan, Y Singer. JMLR 2012. \\n[3] Adam: A Method for Stochastic Optimization. D. Kingma, J. Ba. ICLR 2015.\\n[4] On the Convergence of Adam and Beyond. S. Reddi, S. Kale, S. Kumar. ICLR 2018.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks for the review.\\n\\n@Wall-clock: We don\\u2019t quite understand the question. As mentioned in the response to Reviewer 3, our NLP example does answer the natural question about end-to-end gains. Is the reviewer only concerned with the location of the plots?\\n- Another note: to perform a full wall-clock comparison with algorithms that have different per-iteration costs, one must disentangle and retune various hyperparameter choices, most notably the learning rate schedule. Thus we decided to feature the per-iteration comparison in the main paper, as it is the cleanest one.\\n\\n@L-BFGS: On a high level, we agree that GGT develops a similar window-based approximation to the gradient Gram matrix as L-BFGS does to the approximated Hessian. While adaptive methods have proven effective in practice, quasi-Newton algorithms are not in general regarded as competitive for deep learning (despite recent efforts [1,2]), and that\\u2019s why they are not compared to in the vast majority of deep learning papers.\\n- Quasi-Newton methods are suited for deterministic problems, while stochasticity is crucial in deep learning. This is because they try to approximate the Hessian by finite differences, which seems unstable with stochastic gradients in practice.\\n- Direct second-order methods require significant modifications to converge in the non-convex setting (see [3,4]). Even these have not been observed to work well in deep learning.\\n- One reason for the practical success of AdaGrad-like algorithms, we believe, is the difference of -1/2 vs. -1 power on the Gram matrix, which seems to change the training dynamics dramatically. With the gradient Gram matrix and a -1 power, meaningful end-to-end advances have only been claimed for niche tasks other than classification.\\n\\n[1] Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies. R. Zhao and W. Haskell and V. Tan. arXiv, 2017.\\n[2] A Stochastic Quasi-Newton Method for Large-Scale Optimization. R. Byrd, S. Hansen, J. Nocedal, and Y. Singer. SIAM Journal on Optimization, 2016.\\n[3] Accelerated methods for nonconvex optimization. Y. Carmon, J. Duchi, O. Hinder, A. Sidford. SIAM Journal on Optimization, 2018.\\n[4] Finding approximate local minima faster than gradient descent. N. Agarwal, Z. Allen-Zhu, B. Bullins, E. Hazan, and T. Ma. STOC 2017.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks for the review.\\n\\n@Update overhead: We argue that per-iteration performance is a worthwhile objective in itself, which is less significant in some scenarios (e.g. costly function evaluation, like in RL, or expensive backprops, like in RNNs). That said, we were indeed not able to demonstrate end-to-end gains in vision. Please note that in the NLP benchmark our algorithm finds a better solution and wins in wall-clock time.\\n\\n@Switching: This is a good suggestion, and we indeed do cite one of the papers attempting to approach optimizer-switching in a principled way. We found that we could squeeze out some wall-clock gains by applying the expensive update more sparingly, but the value of including this in the paper was unclear (effectively adding a host of hyperparameters orthogonal to the central idea).\"}",
"{\"title\": \"How to make sgd with full matrix pre-conditioning scalable?\", \"review\": \"adaptive versions of sgd are commonly used in machine learning. adagrad, adadelta are both popular adaptive variations of sgd. These algorithms can be seen as preconditioned versions of gradient descent where the preconditioner applied is a matrix of second-order moments of the gradients. However, because this matrix turns out to be a pxp matrix where p is the number of parameters in the model, maintaining and performing linear algebra with this pxp matrix is computationally intensive. In this paper, the authors show how to maintain and update this pxp matrix by storing only smaller matrices of size pxr and rxr, and performing 1. an SVD of a small matrix of size rxr 2. matrix-vector multiplication between a pxr matrix and rx1 vector. Given that rxr is a small constant sized matrix and that matrix-vector multiplication can be efficiently computed on GPUs, this matrix adapted SGD can be made scalable. The authors also discuss how to adapt the proposed algorithm with Adam style updates that incorporate momentum. Experiments are shown on various architectures (CNN, RNN) and comparisons are made against SGD, ADAM.\", \"general_comments\": \"The appendix has some good discussion and it would be great if some of that discussion was moved to the main paper.\", \"pros\": \"Shows how to make full matrix preconditioning efficient, via the use of clever linear algebra, and GPU computations.\\nShows improvements on LSTM tasks, and is comparable with SGD, matching accuracy with time.\", \"cons\": \"While doing this leads to better convergence, each update is still very expensive compared to standard SGD, and for instance on vision tasks the algorithm needs to run for almost double the time to get similar accuracies as an SGD, adam solver. This means that it is not a priori clear if using this solver instead of standard SGD, ADAM is any good. It might be possible that if one performs a few steps of the GGT optimizer in the initial stages and then switches to SGD/ADAM in the later stages, then some of the computational concerns that arise are eliminated. Have the authors tried out such techniques?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Elegant idea, but I'm not convinced that the benefits outweigh the increased computational cost\", \"review\": \"The authors seek to make it practical to use the full-matrix version of Adagrad\\u2019s adaptive preconditioner (usually one uses the diagonal version), by storing the r most recently-seen gradient vectors in a matrix G, and then showing that (GG^T)^(-\\u00bd) can be calculated fairly efficiently (at the cost of one r*r matrix inversion, and two matrix multiplications by an r*d matrix).\\n\\nThis is a really nice trick. I\\u2019m glad to see that the authors considered adding momentum (to adapt ADAM to this setting), and their experiments show a convincing benefit in terms of performance *per iteration*. Interestingly, they also show that the models found by their method also don\\u2019t generalize poorly, which is noteworthy and slightly surprising.\\n\\nHowever, their algorithm--while much less computationally expensive than true full-matrix adaptive preconditioning--is still far more expensive than the usual diagonal version. In Appendix B.1, they report mixed results in terms of wall-clock time, and I strongly feel that these results should be in the main body of the paper. One would *expect* the proposed approach to work better than diagonal preconditioning on a per-iteration basis (at least in terms of training loss). A reader\\u2019s most natural question is whether there is a large enough improvement to offset the extra computational cost, so the fact that wall-clock times are relegated to the appendix is a significant weakness.\\n\\nFinally, the proposed approach seems to sort of straddle the line between traditional convex optimization algorithms, and the fast stochastic algorithms favored in machine learning. In particular, I think that the proposed algorithm has a more-than-superficial resemblance to stochastic LBFGS: the main difference is that LBFGS approximates the inverse Hessian, instead of (GG^T)^(-\\u00bd). It would be interesting to see how these two algorithms stack up.\\n\\nOverall, I think that this is an elegant idea and I\\u2019m convinced that it\\u2019s a good algorithm, at least on a per-iteration basis. However, it trades off computational cost for progress-per-iteration, so I think that an explicit analysis of this trade-off (beyond what\\u2019s in Appendix B.1) must be in the main body of the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"see review\", \"review\": \"The paper considers adaptive regularization, which has been popular in neural network learning. Rather than adapting diagonal elements of the adaptivity matrix, the paper proposes to consider a low-rank approximation to the Gram/correlation matrix.\\n\\nWhen you say that full-matrix computation \\\"requires taking the inverse square root\\\", I assume you know that is not really correct? As a matter of good implementation, one never takes the inverse of anything. Instead, one solves a linear system, via other means. Of course, approximate linear system solvers then permit a wide tradeoff space to speed things up.\", \"there_are_several_issues_convolved_here\": \"one is ``full-matrix,'' another is that this is really a low-rank approximation to a matrix and so not full matrix, another is that this may or may not be implementable on GPUs. The latter may be important in practice, but it is orthogonal to the full matrix theory.\\n\\nThere is a great deal of discussion about full-matrix preconditioning, but there is no full matrix here. Instead, it is a low-rank approximation to the full matrix. If there were theory to be had here, then I would guess that the low-rank approximation may work even when full matrix did not, e.g., since the full matrix case would involve too many parameters.\\n\\nThe discussion of convergence to first order critical points is straightforward.\\n\\nAdaptivity ratio is mentioned in the intro but not defined there. Why mention it here, if it's not being defined?\\n\\nYou say that second order methods are outside the scope, but you say that your method is particularly relevant for ill-conditioned problems. It would help to clarify the connection between the Gram/correlation matrix of gradients and the Hessian and what is being done to ill-conditioning, since second order methods are basically designed for ill-conditioned problems.\\n\\nIt is difficult to know what the theory says about the empirical results, given the tweaks discussed in Sec 2.2, and so it is difficult to know what is the benefit of the method versus the tweaks.\\n\\nThe results shown in Figure 4 are much more interesting than the usual training curves which are shown in the other figures. If this method is to be useful, understanding how these spectral properties change during training for different types of networks is essential. More papers should present this, and those that do should do it more systematically. \\n\\nYou say that you \\\"informally state the main theorem.\\\" The level of formality/informality makes it hard to know what is really being said. You should remove it if it is not worth stating precisely, or state it precisely. (It's fair to modularize the proof, but as it is it's hard to know what it's saying, except that your method comes with some guarantee that isn't stated.)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HyevnsCqtQ | Integral Pruning on Activations and Weights for Efficient Neural Networks | [
"Qing Yang",
"Wei Wen",
"Zuoguan Wang",
"Yiran Chen",
"Hai Li"
] | With the rapidly scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for efficient deployment. This work aims to advance the compression beyond the weights to the activations of DNNs. We propose the Integral Pruning (IP) technique which integrates the activation pruning with the weight pruning. Through the learning on the different importance of neuron responses and connections, the generated network, namely IPnet, balances the sparsity between activations and weights and therefore further improves execution efficiency. The feasibility and effectiveness of IPnet are thoroughly evaluated through various network models with different activation functions and on different datasets. With <0.5% disturbance on the testing accuracy, IPnet saves 71.1% ~ 96.35% of computation cost, compared to the original dense models with up to 5.8x and 10x reductions in activation and weight numbers, respectively. | [
"activation pruning",
"weight pruning",
"computation cost reduction",
"efficient DNNs"
] | https://openreview.net/pdf?id=HyevnsCqtQ | https://openreview.net/forum?id=HyevnsCqtQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJepFhvVxN",
"SJlhCAYtR7",
"rygCXpHD67",
"BkebwFHv6Q",
"rkegDEBw67",
"ByxcYlki3Q",
"rylDDl2c27",
"ByxdPQb5nm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545006213190,
1543245523707,
1542049062354,
1542048089437,
1542046807743,
1541234817727,
1541222495395,
1541178207861
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper721/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper721/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper721/Authors"
],
[
"ICLR.cc/2019/Conference/Paper721/Authors"
],
[
"ICLR.cc/2019/Conference/Paper721/Authors"
],
[
"ICLR.cc/2019/Conference/Paper721/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper721/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper721/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes to compress the deep learning model using both activation pruning and weight pruning. The reviewers have a consensus on rejection due to lack of novelty.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"lack of novelty\"}",
"{\"title\": \"Reply to author's response\", \"comment\": \"The authors have commented on the major issues regarding the time complexity of selecting the winner rate per layer, compared their method against existing channel/layer based pruning methods and agreed to correct a few minor issues. The authors have empirically observed that searching for the right set of choices for the winner rate at every layer is computationally efficient and considered negligible when compared to the entire training time.\\n\\nAlthough the authors compare their method against existing pruning techniques in terms of pruned weights and saved MAC operations, the numbers reported from their method are only valid on dedicated DNN accelerator design platforms, but not on conventional hardware. I would like to clearly state that the comparison of MACs against existing techniques would make sense only if it is computed based upon conventional hardware settings. As stated in the conclusion of the article and based on the above set of comparisons, specially designed hardware is essential to leverage the activation sparsity induced by their method. I would highly recommend the authors to compare the inference time of all the networks (including the unpruned network, pruned networks from existing pruning techniques and networks obtained from their method) on their specially designed dedicated DNN accelerator hardware platforms. Besides, I would suggest the authors to compare similar architectures under different techniques, e.g., VGG baseline, VGG on existing techniques and VGG with their IP technique. These comparisons would support the claim of the paper. Finally, the take-away message from the current version of the paper is not very clear from the numbers or comparisons and might not be interesting for the audience of ICLR. I would not revise my rating and reject this submission.\"}",
"{\"title\": \"Novelty, Winner Rate Layer-wise, and Comparison\", \"comment\": \"Thanks a lot for your suggestions.\\n\\n-- The originality of the approach\\n\\nWhile the weight pruning technique adopted in this paper is off-the-shelf, the dynamic activation pruning is first proposed here. Before our work, the research focus was on static structured/unstructured weight pruning. \\n\\n-- Winner Rate Layer-wise\\n\\nThanks for your deep insight here. We agree with your concern about potential suboptimal results. To be honest, the direct motivation to do winner rates searching was to make the table concise to fit the page limit. We will add more experiments here and in the Appendix. \\n\\nFirst, we answer the question about the searching complexity of winner rates. It takes about 20min to complete the winner rate scanning for ResNet-32 group-wise as shown in Fig.5 (b). For exploring winner rates layer-wise, it takes us about 2 hours. When setting the winner rates layer-wise, an appropriate winner rate is chosen for each layer with an accuracy drop of less than a certain threshold. In short, winner rate searching is negligible compared to training. \\n\\nHere, we show the experiment results on ResNet-32 layer-wise. The chosen winner rates for the first 31 layers except the last fully connected layer are: \\n[0.5, 0.3, 0.3, 0.4, 0.3, 0.4, 0.3, 0.1, 0.4, 0.1, 0.3, 0.3, 0.3, 0.3, 0.1, 0.5, 0.3, 0.4, 0.4, 0.3, 0.3, 0.5, 0.4, 0.5, 0.5, 0.5, 0.3, 0.2, 0.1, 0.2, 0.1]. \\nWhen these activation winner rates are applied to the same weight-pruned ResNet-32 as in the paper, we get an improved IPnet for ResNet-32 with a 94.61% accuracy on CIFAR-10 with 11.6% left MACs as in the following table. Better accuracy and better computation reduction are both obtained. \\n\\nApproach \\tMAC %\\tAccuracy drop\\nGroup-wise\\t13.7%\\t-0.43%\\nLayer-wise\\t11.6%\\t-0.40%\\n\\n-- Comparison\\n\\nThanks for providing related references. We\\u2019ll include them in related works. After thoroughly reading these 3 papers, the comparison table is shown as follows. In \\u201cWeight %\\u201d and \\u201cMAC %\\u201d, the lower, the better. All comparisons are conducted based on similar model structures for the same dataset. As seen from the table, integral pruning achieves the best computation reduction with marginal effects on model accuracies. \\nWhile the existing feature map pruning is friendly to conventional hardware platforms, our IP method needs specific accelerator designs to fully utilize the significantly reduced computation cost. We hope the IP method can inspire DNN accelerator designs, and indeed our hardware project is ongoing to fulfill the potential from the proposed IP algorithm.\\n\\nDataset\\t Model\\t Weight %\\tMAC %\\t Accuracy drop\\n*****************************************************\\nMNIST\\t MLP-3 (ours) 10%\\t 3.65%\\t +0.01%\\n\\t MLP-4 [1]\\t 15.6%\\t -\\t -0.06%\\n*****************************************************\\nImageNet\\tAlexNet (ours)\\t38.8%\\t 28.9%\\t +0.04%\\n\\t VGG-A [1]\\t 17.5%\\t 69.6%\\t +0.03%\\n\\t VGG-16 [2]\\t 94.4%\\t 24.8%\\t -3.93%\\n*****************************************************\\nCIFAR-10\\tResNet-32 (ours)\\t32.4%\\t 13.7%\\t -0.43%\\n\\t ResNet-164 [2]\\t84.8%\\t 52%\\t -0.5%\\n\\t\\t ResNet-164 [2] 48.5%\\t 36%\\t -1.0%\\n\\t ResNet-20 [3]\\t62.8%\\t -\\t -1.1%\\n*****************************************************\\n[1] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. ICCV 2017. \\n[2] Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. ECCV 2018. \\n[3] Jianbo Ye, Xin Lu, Zhe Lin, and James Z Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. ICLR 2018. \\n\\n-- There is no precise statement somewhere.\\n\\n1) The effects on accuracy are summarized in Table 1 at the beginning of Section 4. We\\u2019ll also include clear statements in subsections. \\n2) Fig.4 and Fig.5 are targeted for two discussion issues. Fig.4 is to show the advantage of the proposed dynamic activation pruning compared to the static solution. Fig.5 is to give an example for winner rates selection. We will have subsections to make them clearly separable. \\n3) We will double check to avoid any unclear statements and typos.\"}",
"{\"title\": \"Novelty and Comparison with Related Works\", \"comment\": \"Thanks for your reviews.\\n\\n-- Novelty \\n\\nOur two key contributions are 1) to explore the sparsity limit in both weights and activations and 2) the idea of dynamic activation masks to prune unimportant information in neuron responses. \\n\\nFirstly, the integration of static weight masks and dynamic activation masks reduces the computation cost significantly, which offers great potential for specialized accelerator designs, as claimed in our conclusion. The second key contribution, activation pruning, is our major novelty. The activation masks are easy to implement and greatly reduce computation cost. Furthermore, our proposed activation pruning method remedies the activation sparsity loss of weight-pruned models, and it\\u2019s also applicable to non-ReLU activation functions. \\n\\n-- Comparison with Related Works\\n\\nThanks for your suggestion. We shall include a discussion of other compression techniques in our related work.\", \"we_have_two_concerns_here\": \"1) Our proposed activation pruning method is orthogonal to many compression techniques, such as weight matrix decomposition and weight quantization. The reason we focus on weight pruning is that we aim to explore the sparsity limit in DNNs. \\n2) On the other hand, our activation pruning approach can be compared with feature map pruning. We add a comparison with some representative papers here. In \\u201cWeight %\\u201d and \\u201cMAC %\\u201d, lower is better. Our IPnets achieve the largest MAC reduction while model accuracy is essentially uncompromised. 
\\n\\nDataset\\t Model\\t Weight %\\tMAC %\\t Accuracy drop\\n*****************************************************\\nMNIST\\t MLP-3 (ours) 10%\\t 3.65%\\t +0.01%\\n\\t MLP-4 [1]\\t 15.6%\\t -\\t -0.06%\\n*****************************************************\\nImageNet\\tAlexNet (ours)\\t38.8%\\t 28.9%\\t +0.04%\\n\\t VGG-A [1]\\t 17.5%\\t 69.6%\\t +0.03%\\n\\t VGG-16 [2]\\t 94.4%\\t 24.8%\\t -3.93%\\n*****************************************************\\nCIFAR-10\\tResNet-32 (ours)\\t32.4%\\t 13.7%\\t -0.43%\\n\\t ResNet-164 [2]\\t84.8%\\t 52%\\t -0.5%\\n\\t\\t ResNet-164 [2] 48.5%\\t 36%\\t -1.0%\\n\\t ResNet-20 [3]\\t62.8%\\t -\\t -1.1%\\n*****************************************************\\n[1] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. ICCV 2017. \\n[2] Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. ECCV 2018. \\n[3] Jianbo Ye, Xin Lu, Zhe Lin, and James Z Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. ICLR 2018.\"}",
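For context on how the "Weight %" and "MAC %" columns interact, a hedged sketch (the function name and the counting convention are ours, not taken from any of the compared papers): in a fully-connected layer, a multiply-accumulate is only needed where a surviving (nonzero) input activation meets a retained (nonzero) weight, so weight sparsity and activation sparsity compound.

```python
import numpy as np

def effective_fc_macs(act_mask, weight_mask):
    """Count MACs for one fc layer when both an activation mask and a
    weight mask are applied: only the nonzero-activation rows of the
    pruned weight matrix contribute multiply-accumulates.
    Illustrative accounting, not the papers' exact methodology."""
    surviving_rows = weight_mask[act_mask != 0, :]
    return int(np.count_nonzero(surviving_rows))

# Toy example: 3 inputs, 2 outputs.
act_mask = np.array([1, 0, 1])            # second activation pruned
weight_mask = np.array([[1, 1],
                        [1, 0],
                        [0, 1]])          # some weights pruned per row
dense_macs = weight_mask.size             # 6 MACs without any pruning
print(effective_fc_macs(act_mask, weight_mask))  # 3 -> MAC % = 50%
```

This is why a model can report a modest "Weight %" yet a much smaller "MAC %": the activation mask removes entire rows of the weight matrix from the computation.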
"{\"title\": \"Speedup Test and Winner Rates Searching Time\", \"comment\": \"Thanks for your reviews. We have some supplementary experiment results here, and hope these can address your concerns.\\n\\n-- Time comparison\\n\\n1) As claimed in the conclusion of this paper, the proposed integral pruning approach is targeted at application-specific integrated circuit (ASIC) designs with efficient sparse-matrix computation support. Like the approach in [1][2], where the deep compression method inspires a specific accelerator design, the significant savings in computation cost in our IPnets indicate the great potential of efficient ASIC designs in terms of energy and speed. \\n[1] Han, S., Mao, H. and Dally, W.J., 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149.\\n[2] Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M.A. and Dally, W.J., 2016, June. EIE: efficient inference engine on compressed deep neural network. In Computer Architecture (ISCA), 2016 ACM/IEEE 43rd Annual International Symposium on (pp. 243-254). IEEE. \\n\\n2) While we are working on the accelerator design to fully exploit integral pruning, the speedup on fully-connected (fc) layers from activation pruning is easy to demonstrate on conventional computation platforms, such as a desktop CPU. This is because after activation pruning, the weight matrix of an fc layer can be structurally condensed by removing all connections related to the pruned activations. We take the last 3 fc layers of AlexNet on the ImageNet dataset; the experiment setup is shown in Table I. Batch size is 1 here, which is the case we care about most in real-time applications on edge devices. \\n\\nTable I. Experiment setup: \\nFramework\\t CPU\\t Memory\\tBatch size\\nTensorFlow 1.10\\t Intel i7-7700HQ\\t 8 GB\\t 1\\n\\nThe input activations can be pruned without compromising accuracy, as shown in Table II. 
Note that in Table II, the time per layer with activation pruning mainly comprises I) argpartition on the input activation vector and II) matrix computation on the condensed layer. A 1.95x ~ 3.65x speedup is achieved. Time spent on argpartition to get the winner activations is also included; it accounts for only a small portion of the time spent on the original dense layers. \\n\\nTable II. Measurement results: \\nLayer\\tSize\\t Input acti %\\t Time per layer\\t argpartition (msec)\\t Speedup\\n\\t\\t Dense (msec)\\tWith acti pruning (msec)\\t\\t\\nFc1 \\t9216x4096 27.70%\\t 10.19975901\\t3.95335722\\t 0.879592419\\t 2.58 X\\nFc2 \\t4096x4096\\t10%\\t 4.544641018\\t1.24421525\\t 0.524742842\\t 3.65 X\\nFc3 \\t4096x1000\\t10%\\t 1.520430803\\t0.778268337\\t 0.397073746\\t 1.95 X\\n\\n-- Winner Rates Searching Time\\n\\n1) As discussed in Section 3.2, the winner rate per layer is chosen empirically such that the accuracy drop is less than a certain threshold on a validation set. This criterion has been thoroughly verified by the experiments on various datasets and models in Section 4. \\n2) The time spent on selecting winner rates is negligible compared to training time. \\nBy wall-clock measurement, the time to scan over winner rates using a TITAN Xp with 12 GB memory is: \\nFor Fig.5 (a), 1 h 12 min; for Fig.5 (b), 20 min. \\nFor very deep networks such as ResNet-152, the winner-rate scan can be accelerated by GPUs working in parallel, as in distributed training. Moreover, the time spent on winner-rate searching does not add to inference time.\"}",
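The fc-layer speedup described in this response (argpartition to find the winner activations, then a matmul on the condensed layer) can be sketched as follows; the function name, shapes, and rounding convention are illustrative assumptions, not the authors' code.

```python
import numpy as np

def winner_fc_forward(x, W, winner_rate):
    """Dynamic activation pruning for one fc layer, as described above:
    keep the k largest-magnitude input activations (the 'winners') and
    multiply only the matching rows of the dense weight matrix.
    Illustrative sketch, not the authors' implementation."""
    k = max(1, int(round(winner_rate * x.size)))
    # argpartition finds the indices of the k largest |x| in O(n),
    # avoiding a full sort of the activation vector
    idx = np.argpartition(np.abs(x), -k)[-k:]
    # condensed computation: k x out MACs instead of n x out
    return x[idx] @ W[idx, :]

# Fc2-like setting from Table II: a 4096x4096 layer at a 10% winner rate
x = np.random.randn(4096)
W = np.random.randn(4096, 4096)
y = winner_fc_forward(x, W, winner_rate=0.10)  # uses ~10% of the MACs
```

Because the pruned activations contribute nothing to the output, dropping their rows of `W` changes only which MACs are executed, not the result for the surviving winners.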
"{\"title\": \"Simple idea and lack of experiments\", \"review\": \"This paper proposes to compress the deep learning model using both activation pruning and weight pruning. Combining both sparsities, the MACs are significantly reduced.\\n\\nMy main concern is that there is no time comparison. The experiments only show the reduction in terms of the number of non-zeros in weights and activation as well as the MACs. Typically, to deal with sparse activations and sparse weights, there are some overhead computations such as computing indices. Also, dense matrix-matrix(vector) multiplications can be faster by using specially designed libraries. I would suggest the authors show the improvement for the proposed compression approach in terms of wall-clock time, in CPU, GPU or other hardware platforms. \\n\\nThe pruning method seems straight-forward to me. I am wondering how to choose the winner rate for each layer. It seems to take a quite long time to pick a set of winner rates for a deep neural network. \\n\\nThe paper is easy to read in general. However, it is not clear to me how such a compression approach can speed up the training or the inference of deep learning models in practice.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A simple network compression strategy combining weight and activation pruning.\", \"review\": \"The main contribution of the paper is an integral model compression method that handles both weight and activation pruning. Increasing the network weight and activation sparsity can lead to more efficient network computation. The authors show in the paper that pruning the network weights alone may result in a decrease in activation sparsity, which may not necessarily improve the overall computation. The proposed solution is a 2-stage process that first prunes the weights and then the activations.\", \"pros\": [\"The results show that the proposed method is effective in reducing the number of multiply-and-accumulate (MAC) operations compared to weight pruning alone. The improvements are consistent across multiple network architectures and datasets.\", \"It also shows that weight pruning alone leads to a slight increase in the number of non-zero activations.\"], \"cons\": [\"A simple approach with limited novelty.\", \"Related work should include other compression techniques, such as low-rank approximation, weight quantization and varying hidden layer sizes.\", \"There is no comparison with other model compression techniques mentioned above.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Pruning of weights and activations\", \"review\": \"This article presents a novel approach called Integral Pruning (IP) to reduce the computation cost of Deep neural networks (DNN) by integrating activation pruning along with weight pruning. The authors show that common techniques of exclusive weight pruning does compress the model size, but increases the number of non-zero activations after ReLU. This would counteract the advantage of DNN accelerator designs (Albericio et al., 2016; Reagen et al., 2016) that leverage activation sparsity to speed up the computations. IP starts with pruning the weights using an existing technique to mask out weights under a threshold and then fine-tune the network in an iterative fashion to maintain the accuracy. After weight pruning, IP further masks out the activations with smaller magnitude to reduce the computation cost. Unlike weight pruning techniques that use static masks, the authors propose to use dynamic activation masks for activation sparsity in order to account for various patterns that are being activated in DNN for different input samples. In order to do this, the 'winner rate' measure for every layer (or for a group of layers in deep networks like ResNet32) is defined, to dynamically set the threshold for the generation of activation masks which eventually controls the amount of non-zero activation entries. The article empirically analyzes the sensitivity of activation pruning on validation data by setting different winner rates at every layer in DNN and decides upon a set of winner rates accordingly followed by an iteration of fine-tuning the network to maintain its performance. 
The authors show that their technique produces a lower number of non-zero activations in comparison with the intrinsic sparse ReLU activations and weight pruning techniques.\\n\\nThe topic of reducing network complexity for embedded implementations of DNNs is highly relevant, in particular for the ICLR community.\\n\\nThe IP technique yields a significantly reduced number of multiply-accumulate operations (MACs) across different models like MLP-3, ConvNet-5, ResNet32 and AlexNet and on different datasets like MNIST, CIFAR10 and ImageNet. They also show that pruning the activations with dynamic activation masks, followed by fine-tuning the network, yields sparser activations and negligible loss in accuracy compared against using static activation masks.\", \"strengths_of_the_paper\": [\"The motivation to extend compression beyond the weights to activations in order to support DNN accelerator designs and the technical details are clearly explained.\", \"The proposed technique indeed produces sparser activations than intrinsic ReLU sparse activations and can also be applied to any network regardless of the choice of activation function.\", \"The proposed technique is evaluated across different network architectures and datasets.\", \"The advantage of adopting dynamic activation masks over static ones is clearly demonstrated.\"], \"weaknesses_of_the_paper\": [\"The originality of the approach is limited because it is a relatively straightforward combination of existing techniques for weight and activation pruning.\", \"The \\\"winner rate\\\" measure is defined for every layer and should be explored over different values in order to find the equilibrium that reduces the number of non-zero activations while maintaining the accuracy. This search over winner rates will become inefficient as the depth of the network increases. 
However, the authors used a single winner rate for a group of layers in the case of ResNet-32 to reduce the search space, but this choice might lead to suboptimal results.\", \"The authors compare the resultant number of MAC operations against numbers from the weight pruning technique. However, there also exist different works on group pruning techniques like Liu et al. (2017), Huang & Wang (2017), Ye et al. (2018) that prune entire channels / feature maps and thus yield more compact networks. Since these approaches prune the channels, they have a direct impact on the computation complexity and greatly reduce the computation time. A proper and fair comparison would be to compare the numbers of IP against such group pruning techniques. This comparison is highly important to highlight the significance of the approach for speeding up DNNs and it is missing from the paper.\", \"At several locations in Section 4, e.g. Sec. 4.1, 4.3, and 4.4, there is no precise statement about the incurred accuracy loss (or no statement at all). The connection to Figures 4 and 5 is not immediately clear and should be made explicit.\"], \"references\": [\"Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming.\", \"Jianbo Ye, Xin Lu, Zhe Lin, and James Z Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers.\", \"Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks.\"], \"overall_evaluation\": \"The authors integrate activation pruning with weight pruning and show that the number of MAC operations is greatly reduced by their technique compared to the numbers from weight pruning alone. 
\\tHowever, I am not convinced regarding the reported number of MAC operations, since the number of MAC operations with sparse weight matrices and activations would remain the same as in the original models unless some of the filters/activation maps are pruned from the network. On the other hand, comparisons against group pruning techniques are highly necessary to evaluate the potential impact of the approach on speeding up DNNs. My preliminary rating is a weak reject but I am open to revising my rating based on the authors' response to the major weaknesses stated above.\", \"minor_comments\": [\"Caption of Fig. 4 should mention the task on which the results were obtained.\", \"There are occasional grammar errors and typos that should be corrected.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJGvns0qK7 | Bayesian Policy Optimization for Model Uncertainty | [
"Gilwoo Lee",
"Brian Hou",
"Aditya Mandalika",
"Jeongseok Lee",
"Sanjiban Choudhury",
"Siddhartha S. Srinivasa"
] | Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers. | [
"Bayes-Adaptive Markov Decision Process",
"Model Uncertainty",
"Bayes Policy Optimization"
] | https://openreview.net/pdf?id=SJGvns0qK7 | https://openreview.net/forum?id=SJGvns0qK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1eTGXOBgE",
"S1euErLZl4",
"S1l-AjjgxE",
"ryxPtNzhJN",
"BkeVWK4MyV",
"BJe0P_VMJV",
"Byl3qfTT0m",
"HJxJYMhcA7",
"H1xPAYSdAm",
"ryl2kYHdRX",
"SyxoCPr_0Q",
"B1lyqwS_C7",
"rkgKO-c637",
"Hklm9vec3m",
"ryg0dfCYnX"
],
"note_type": [
"official_comment",
"meta_review",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545073428721,
1544803632145,
1544760265362,
1544459391253,
1543813371899,
1543813222338,
1543520915802,
1543320183021,
1543162318781,
1543162084486,
1543161810884,
1543161735367,
1541411185312,
1541175179270,
1541165685930
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper720/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper720/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/Authors"
],
[
"ICLR.cc/2019/Conference/Paper720/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper720/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper720/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Reply\", \"comment\": \"Thank you for your feedback.\\n\\nBPO vs. Peng et al. [1]: \\nThe posterior belief distribution compactly summarizes the history of observations, and LSTMs can be interpreted similarly. The key difference between BPO and [1] is that BPO explicitly utilizes the belief distribution, while in [1] the LSTM must implicitly learn an embedding for the distribution. Because the belief distribution is directly provided to a BPO policy, a feed-forward policy is sufficient.\\n\\nBPO\\u2019s explicit use of a Bayes filter leads to data-efficiency and interpretability. As the learner does not need to infer the distribution from a long history of observations, the policy search can be more data-efficient. A Bayes filter also improves interpretability, as we can directly test how the agent adapts to different beliefs. However, such a Bayes filter may not be always readily available, e.g. when the latent parameters are only partially identifiable. We believe that combining a recurrent policy for unidentified parameters (analogous to [1]) with a Bayes filter for identified parameters (as in BPO) would be an interesting future direction to pursue.\", \"epopt_baseline\": \"The EPOpt baseline we provide is belief-agnostic, as we acknowledge in Section 7: \\u201c... we emphasize that neither (EPOpt nor UP-MLE) formulate the problems as BAMDPs.\\u201d The main reason for using this baseline is to analyze when belief-awareness is truly necessary. As we demonstrate through the MuJoCo experiments, belief-agnostic learners (EPOpt, UP-MLE) can perform quite well when active information-gathering is not critical. The adaptive version of EPOpt proposed in their paper assumes **multiple episodes of interaction** with the target domain with which they update the source distribution and retrain the agent, which is very different from our problem setting in which only **one episode of interaction** is allowed at test time. 
BPO can also be interpreted as belief-aware domain randomization; in that sense it is similar to EPOpt if a per-step belief update is used as its input.\"}",
"{\"metareview\": \"The paper proposes a deep, Bayesian optimization approach to RL with model uncertainty (BAMDP). The algorithm is a variant of policy gradient, which in each iteration uses a Bayes filter on sampled MDPs to update the posterior belief distribution of the parameters. An extension is also made to POMDPs.\\n\\nThe work is a combination of existing techniques, and the algorithmic novelty is a bit low. Initial reviews suggested the empirical study could be improved with better baselines, and the main idea of the proposed method could be expanded. The revised version moves in this direction, and the author responses were helpful. Overall, the paper is a useful contribution.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Useful combination of existing techniques\"}",
"{\"comment\": \"If I understand correctly, the work considers a distribution over MDPs (i.e. belief state) and attempts to optimize policies over the belief state in a tractable way. The straightforward baseline seems to be the work of Peng et al. [1] which considers the same problem statement and trains an LSTM policy with PPO. The LSTM can be interpreted as implicitly performing inference and action selection simultaneously. Are the problem statements in the papers different? If not, how would the proposed BPO method compare against an LSTM policy?\\n\\nAlso, for the EPOpt baseline, is the belief distribution updated (on a per-time-step basis) and policy re-optimized? If not, it seems straightforward that a static algorithm would not perform as well as an adaptive algorithm, when the performances are measured either during the process of adaptation or after adaptation. Thus, a better comparison is EPOpt with a recurrent policy (similar to [1]) or EPOpt with feedforward policies but beliefs and policies are updated after getting data from the target domain.\\n\\n[1] Sim-to-real transfer of robotic control with dynamics randomization, Peng et al. ICRA 2018.\", \"title\": \"Important baselines missing\"}",
"{\"title\": \"Comparison with POMDP algorithms is a good addition\", \"comment\": \"I am glad that the authors compared their algorithm with the POMDP algorithms Beetle and Perseus on the POMDP problems. It is positive that their method has similar performance to those algorithms. I am still not impressed with this work, as it is quite incremental, but I would not argue if it gets accepted.\"}",
"{\"title\": \"Thank you for suggested changes\", \"comment\": \"Thank you for reading the revised paper and suggesting further changes. In the next version, we will submit a supplementary video with the ant and the heat map evolution.\\n\\nYes, +x is the direction parallel to the lower edge. The task remains the same, i.e. to move toward +x, but the ant model is different from the nominal model (used for TRPO) in that one of the legs is 20% longer and another is 20% shorter. We will include this detail in the next draft.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for your feedback. We believe that policy optimization is a promising direction for solving continuous Bayesian Reinforcement Learning problems which has not been sufficiently explored. Reviewer 2 agrees that the proposed approach is a \\\"solid combination of existing techniques....[that] is still worth trying and has been shown to scale to larger problems through the use of deep learning.\\\"\\n\\nBPO performs similarly to the adaptive policy gradient method (UP-MLE) on Chain and MuJoCo BAMDP problems. As we observe in Section 8, \\\"if all actions are informative (as with MuJoCo, Chain) and the posterior belief distribution easily collapses into a unimodal distribution, UP-MLE provides a lightweight alternative.\\\" However, BPO significantly outperforms UP-MLE in the Tiger and LightDark domains, which are POMDP problems where not all actions are informative. The success in POMDP problems highlights that the BPO agent is indeed belief-aware. BPO generalizes to POMDPs, which we discuss in Section 6.\", \"epopt_and_up_mle_are_not_bayesian_algorithms\": \"\\\"Although EPOpt and UP-MLE are the most relevant algorithms which utilize batch policy optimization to address model uncertainty, we emphasize that neither formulate the problems as BAMDPs\\\" (Section 7). However, as we point out in Section 2, many state-of-the-art BAMDP or POMDP solvers are designed for discrete state-action spaces, which cannot be readily extended to continuous spaces (e.g. the MuJoCo domain). We have focused on comparisons with algorithms that deal with continuous spaces, while providing POMDP baselines for context wherever possible. In the Tiger domain, BPO nearly matches the performance of SARSOP (Kurniawati et al., 2008). The BPO trajectory on LightDark nearly matches the optimal trajectory drawn in Figure 1 of (Platt et al., 2010). 
In addition, we have run further experiments that compare BPO with other discrete Bayesian reinforcement learning algorithms (BRL) on the Chain domain (Poupart et al., 2006), which we discuss below. These experiments validate that our BPO performs as well as state-of-the-art BRL or POMDP solvers in discrete domains, while opening a promising direction in continuous domains.\\n\\nOur Chain domain corresponds to the \\\"tied\\\" version of Chain in Poupart et al. When we match their setup to compare results (and report 95% confidence intervals rather than standard deviation), BPO performs similarly to the discrete BRL and POMDP algorithms.\\nBPO, Chain-10: 3645.6 \\u00b1 5.4\\nBeetle (discrete BRL): 3650 \\u00b1 3.6\\nPerseus (discrete POMDP solver): 3661 \\u00b1 2.4\\n\\nWe also add a new experiment for the more challenging \\\"semi-tied\\\" version, where the slip probabilities for the two actions are estimated independently. Again, BPO performs comparably to discrete BRL and POMDP algorithms.\\nBPO, Chain-10: 3649.1 \\u00b1 7.8\", \"beetle\": \"3648 \\u00b1 3.7\", \"perseus\": \"3651 \\u00b1 2.8\\nMC-BRL, K=10 (Wang et al., 2012; continuous BRL): 3216 \\u00b1 64\", \"encoding_did_not_improve_performance_significantly\": \"We respectfully disagree with the assessment that \\\"the encoding did not improve the performance significantly,\\\" which seems to conflict with the later observation that \\\"the proposed algorithm, especially with encoders, is quite robust w.r.t. the discretization.\\\" The experiment comparing performance on different levels of discretization on the Chain environment is specifically designed to evaluate the usefulness of the encoding. Figure 2b demonstrates that the separate encoder networks improve the performance of BPO compared to BPO- when the latent space is finely discretized. 
We would welcome further clarifications to better understand this concern.\", \"updating_distribution_over_mdps\": \"The transition functions of the MDPs are parameterized by latent variables. The Bayes filter maintains a posterior distribution over the latent parameters, which is equivalent to a distribution over MDPs. In Section 5, we further describe how we implement a Bayes filter for the uniformly discretized latent parameter space and mention that other Bayes filters such as the extended Kalman filter can be used for Gaussian belief distributions.\", \"improve_structure_of_paper\": \"Thank you for your helpful suggestions about which sections to pare down. We will incorporate them into the next draft.\\n\\n\\nPoupart, Vlassis, Hoey, Regan. An Analytic Solution to Discrete Bayesian Reinforcement Learning. ICML 2006.\\nWang, Won, Hsu, Lee. Monte Carlo Bayesian Reinforcement Learning. ICML 2012.\\nPlatt, Tedrake, Kaelbling, Lozano-Perez. Belief space planning assuming maximum likelihood observations. RSS 2010.\"}",
"{\"title\": \"Combination of several existing methods + not quite convincing experiments\", \"review\": \"Summary:\\n\\nIn this paper, the authors propose a policy gradient algorithm for solving a Bayes-Adaptive MDP (BAMDP). At each iteration, the algorithm samples several MDPs from the prior distribution and simulates a trajectory for each sampled MDP. During the simulation, the algorithm uses a Bayes filter to update the posterior belief distribution at each time step. Finally, the algorithm uses the sampled trajectories and updates the policy using the TRPO algorithm. \\n\\nThe authors propose to pass the state and belief through separate encoders, to reduce their dimensions, and then put them together and give them to the policy network. The experiments show, however, that the encoding did not improve the performance significantly, except in the Lightdark problem. \\n\\nThe authors show that their algorithm can also be used to solve POMDPs by replacing the state-belief pair with just the belief, basically turning a POMDP into a belief-state MDP and then applying the algorithm. They evaluate their algorithm on two POMDP problems, one discrete and one continuous, and in both their algorithm achieves reasonable performance. \\n\\nA tricky part of the algorithm is how to define a Bayes filter for continuous latent states. This is crucial in updating the posterior after each observation. The way the authors handle this is by discretization, and how the discretization should be done (high or low resolution) is a hyperparameter. The experiments indicate, though, that the proposed algorithm, especially with encoders, is quite robust w.r.t. the discretization.\", \"comments\": [\"The idea behind the algorithm proposed in the paper is quite simple. It is a combination of Bayesian optimization (sampling several MDPs from the prior), using a Bayes filter to update the belief, and a policy gradient algorithm (TRPO) to estimate the gradient and update the policy parameter. 
The only challenges are 1) the design of the Bayes filter, in particular when the latent state is continuous, in which the idea used in the paper is very simple, discretization, and 2) dealing with potentially high dimensional state-belief pair, which was handled by the encoders.\", \"The structure of the paper could be improved significantly. Four pages have been dedicated to the preliminaries and related work, and another four pages to the experiments. This leaves only less than two pages for the algorithm. While I think a comprehensive discussion of the experiments is quite helpful, I found the preliminaries and related work too long. I even think that the experiments could have been written better. There are parts that have been explained too much and parts that are not clear or left for the appendix. With a better structure, the algorithm could have been explained better. I personally would like to see more discussion on how the distribution over MDPs is updated.\", \"I did not find the experiments very convincing. In BAMDP problems (Chain and MuJoCos), the proposed algorithm performs similarly to the adaptive policy gradient method. We only see improvement in the POMDP tasks (Tiger, Lightdark), which I think the main reason is that the algorithms selected for comparison are not the right algorithms for POMDPs. For example, many different algorithms have been used to solve Tiger (or other discrete POMDPs) in the POMDP literature, and I do not see any of them in the paper.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reply\", \"comment\": \"I looked at the revised paper, and think the authors did a good job at incorporating our feedback. I particularly like Section 5.\\n\\nFigure 4 in general is interesting, but it's still difficult to infer how beliefs change over time or how the agent makes decisions based on it (which, admittedly, is not easy to show on paper - if the authors ever have time I highly recommend making a video with the ant moving and the heat map next to it :))\\n\\nIn 4a, is the +x direction parallel to the lower edge of the image? How did the task change? The figure can potentially give the reader some insight into how the policies adapt given some environment change; however I think some details are missing.\"}",
"{\"title\": \"Revision addresses the comments; additional clarification below\", \"comment\": \"Thank you for your feedback. In addition to the clarification on belief representation (Section 4), the newly added section on Bayes filter (Section 5), and newly added figures (Figure 2 and 4) based on your suggestions, we would like to answer some of the concerns you have raised.\\n\\n\\nRegarding the assumption that the environment models are known and/or that the latent space can be discretized:\", \"we_believe_that_there_are_largely_three_classes_of_robotics_problems_where_bamdps_can_be_applied\": \"1) The real-world dynamics can be reasonably approximated by simulators (or closed-form dynamics equations).\\n2) Although the simulator differs from the real-world dynamics, leveraging domain randomization while training a robust policy successfully transfers to the real world.\\n3) The real-world dynamics must be learned from scratch (possibly in a nonparametric manner).\\n\\nMany existing approaches in robust RL aim for 1 and 2. Some example scenarios are varying spring coefficients modeling the wear and tear of robotic legs or manipulation of a set of objects whose mass or shape can be parametrized. In these cases, we believe that our approach will produce better policies than belief-agnostic approaches.\\n\\nFor case 3, we would like to note that our algorithm can in fact learn a policy even when there is little prior dynamics knowledge from simulators. For discrete BAMDPs, independent Dirichlet distributions for p(s\\u2019|s, a) are a common choice for uninformative priors (Duff & Barto, 2002; Kolter & Ng, 2009). For continuous space transition functions, we can maintain a joint distribution of continuous random variables e.g. Gaussian processes. Using these flexible priors requires no change to Algorithm 1, although the step of sampling an MDP during training would now involve sampling from the Dirichlet or Gaussian process. 
In the case of GP, the input to the policy network has to be a fixed-size representation of the GP posterior distribution. This would be an interesting future work.\\n\\nIn summary, our algorithm can leverage prior knowledge from simulators (cases 1 and 2), but this does not limit it from learning almost from scratch (case 3).\"}",
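To make case 3 concrete, here is a minimal sketch (not the authors' code; the 3-state/2-action sizes and the helper names `observe`/`sample_mdp` are invented for illustration) of maintaining independent Dirichlet posteriors over a discrete BAMDP's transition model and root-sampling an MDP from them:

```python
import numpy as np

# Independent Dirichlet priors over each transition distribution
# p(s'|s, a), a common uninformative prior for discrete BAMDPs
# (Duff & Barto, 2002; Kolter & Ng, 2009).
n_states, n_actions = 3, 2
alpha = np.ones((n_states, n_actions, n_states))  # concentration counts

def observe(s, a, s_next):
    """Conjugate posterior update: an observed (s, a, s') adds one count."""
    alpha[s, a, s_next] += 1.0

def sample_mdp(rng):
    """Root-sample one full transition model from the current posterior."""
    T = np.empty((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            T[s, a] = rng.dirichlet(alpha[s, a])
    return T

observe(0, 1, 2)                           # e.g. one observed transition
T = sample_mdp(np.random.default_rng(0))   # each T[s, a] sums to 1
```

Each sampled `T[s, a]` is a valid categorical distribution over next states, so training can draw many plausible MDPs from the same belief without changing Algorithm 1.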
"{\"title\": \"Revision addresses the comments; additional clarification below\", \"comment\": \"Thank you for your feedback. In addition to the clarification on belief representation (Section 4) and the newly added section on Bayes filter (Section 5), we would like to answer some of the concerns you have raised.\", \"root_sampling_results_in_sampled_models_that_are_fixed_in_every_simulation\": \"This is indeed the correct realization of the BAMDP framework, where the underlying model is fixed but unknown. Our algorithm addresses this by fixing the sampled model for the whole episode. Since the true model is hidden from the agent, it maintains a belief over the possible models. After each belief update, the agent\\u2019s belief over the model changes, but the actual underlying model remains the same. A Bayes-optimal agent learns to act such that the uncertainty in the belief distribution reduces to the degree necessary for maximal long-term reward.\\n\\nTRPO on Ant performs well on certain cases but poorly on corner cases. The reason why BPO seems to have only marginal gain in this case is due to the particular four-legged nature of Ant, which allows a mean-model agent to walk reasonably under small geometric variation. The visualization of a corner case is added in Figure 4.\", \"computational_complexity_of_discretization\": \"We agree that increasing the belief discretization level increases the time required to perform posterior updates at each timestep. Ultimately, this is an implementation detail of the black-box Bayes filter. However, we have empirically found that fine discretization of the continuous latent state space may be unnecessary: BPO produces high-performing agents even with a coarse discretization. For MuJoCo problems, we outperform the other baselines with only 25 bins to discretize the latent parameter space. For the Chain problem, the discretization with 10 bins is as good as or slightly better than 1e2 or 1e3 bins. 
This implies two things: 1) our algorithm is robust to approximate beliefs, and 2) the agent only needs the belief to be sufficiently accurate to inform its actions. Due to these properties, we believe that more computationally-efficient approximate Bayes filters can be used without significantly degrading performance.\", \"rnn\": \"As you suggest, a recurrent policy could learn to act with respect to a history of observations. In our case, the history of observation is encoded by the belief, so TRPO in the belief space has as much information as an RNN. The use of RNN for jointly training the Bayes filter and the policy could certainly be effective, as proposed in (Karkus et al., 2017).\"}",
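A single posterior update of such a discretized Bayes filter can be sketched as follows (an illustrative sketch with invented bin counts and likelihood values, not the authors' implementation):

```python
import numpy as np

def belief_update(belief, likelihoods):
    """One discretized Bayes-filter step over the latent parameters.

    belief: categorical prior over the discretized latent parameter
        space, shape (n_bins,), summing to 1.
    likelihoods: probability of the observed transition under each
        bin's representative parameter, shape (n_bins,).
    """
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# A coarse 10-bin discretization, as in the Chain experiments.
prior = np.full(10, 0.1)
lik = np.linspace(0.1, 1.0, 10)   # hypothetical per-bin likelihoods
post = belief_update(prior, lik)  # mass shifts toward high-likelihood bins
```

Because the belief is just a renormalized elementwise product, a coarser grid only shrinks the vector lengths, which is why coarse discretizations remain cheap per timestep.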
"{\"title\": \"Revision includes experimental details.\", \"comment\": \"Thank you for your feedback.\\n\\nWe have added experimental details regarding the policy network (Section 6) and training parameters (Appendix A3). The model parameter ranges for MuJoCo BAMDP problems are described in Appendix A2.\"}",
"{\"title\": \"Revised paper with better visualization and additional technical details submitted\", \"comment\": \"We thank all reviewers for their thoughtful feedback and comments.\\n\\nOur paper provides a scalable RL algorithm for addressing model uncertainty. Our algorithm is a solid combination of Bayes filter, Monte Carlo methods, and batch policy optimization algorithms [Reviewer 2], which we extend with a novel policy structure [Reviewer 3] to address the challenge of large latent state spaces. As pointed out by all three reviewers, our experiments demonstrate promising results for continuous BAMDP and POMDPs.\\n\\nOur algorithm depends on a fixed-size parameterization of the continuous latent state space, e.g. a mixture of Gaussians. When such a specific representation is not appropriate, we can choose a more general uniform discretization of the latent space. \\n\\nDiscretizing the latent space introduces an additional challenge from the curse of dimensionality. It is crucial that an algorithm is robust to the size of the belief representation. Our belief encoder (Section 4) achieves the desired robustness by learning a compact representation of an arbitrarily large belief representation. We empirically verify that the belief encoder makes our algorithm more robust to large belief representations than the one without the belief encoder (Figure 2).\", \"here_is_a_summary_of_our_revisions\": \"An additional figure that compares the relative performance of BPO with other approaches (Figure 2a). The table of numerical results has been moved to the Appendix.\\nAn additional figure that demonstrates the importance of the belief encoder (Figure 2b)\", \"reviewer_1\": \"Reference to DVRL added in Section 3.\", \"reviewer_3\": \"Experimental details about the policy network and training parameters (Section 7, Appendix).\"}",
"{\"title\": \"Review\", \"review\": \"In this paper, the authors propose to utilize a novel policy structure and recent batch policy optimization methods such as PPO or TRPO to solve Bayes-Adaptive MDP (BAMDP) and Partially Observable MDP (POMDP) problems. The authors verify the proposed method on discrete and continuous POMDP and BAMDP benchmarks compared with other baseline methods.\\n\\nThe main part of the paper is devoted to explaining Bayesian RL, the relationship between BAMDPs and POMDPs, and several related works. There is only half a page that explains the main idea of the proposed method, and it seems that the authors combine several existing techniques and utilize deep learning to solve BAMDP and POMDP problems.\\n\\nThe details of the experiments are not clarified explicitly, such as the structure and size of the policy, training details of BPO, and the parameters changed to formulate the BAMDP for MuJoCo environments.\\n\\nThe paper strikes me as a valuable contribution once the details of the experiments are addressed, but personally I am not sure whether the novelty of this paper is enough for the main conference track.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Experimental results are not very convincing\", \"review\": \"Summary: This paper proposes a policy optimization framework for Bayesian RL (BPO). BPO is based on a Bayesian model-based RL formulation. Using a Bayesian approach, it is expected to achieve a better trade-off between exploration and exploitation in RL, and to be able to deal with model uncertainty as well. Experiments are done on multiple domains consisting of both POMDP planning tasks and RL.\\n\\nIn general, the paper is well written. Related work is thoroughly discussed. In my opinion, the proposed idea is a solid combination of existing techniques: Monte-Carlo sampling (step 3), Bayes belief update, and policy gradient in POMDPs (G(PO)MDP). However, this combination is still worth trying and has been shown to scale to larger problems through the use of deep learning.\", \"i_have_some_following_major_concerns_about_the_paper\": [\"Root sampling (step 3 in Algorithm 1) would result in sampled models that are fixed in every simulation. By the very nature of Bayes RL, after each update at a new observation (step 11: belief update), the model distribution already changes. Thus, how can this algorithm guarantee an optimal solution for the BAMDP? Can the authors discuss this point further? Does this explain why TRPO (using a mean model) can perform comparably to BPO in Ant?\", \"The belief representation is based on a Bayes filter which requires discretization. A finely discretized belief would increase the complexity and computation dramatically with the dimension of the latent space. This would result in very slow SIMULATE steps, especially for a long-horizon problem, let alone the further computation for BatchPolicyOptimization.\", \"I wonder how TRPO using an RNN would perform in this case, instead of using a wrong starting model (an average model)?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Solid\", \"review\": \"Evaluation:\", \"this_is_a_solid_paper\": \"The idea is clear, it is well communicated and put into context of the existing literature, and the results are promising. The experiments are well chosen and illustrate the method well. The connection between the chosen setting (BAMDPs) to POMDPs is explained well and explored in the empirical evaluation as well. I think that the methods section could go into a bit more detail, and the underlying assumptions that the authors make could be discussed more critically.\", \"summary\": \"This paper looks at Bayes-Adaptive MDPs (BAMDPs) in which the latent parameter space is either\\n- a discrete finite set or\\n- a bounded continuous set that can be approximated via discretization.\\nConsequently, the authors choose to represent the belief as a categorical distribution, which can be represented by a vector of weights.\\nThey further assume that the environment model is known. Hence, the posterior belief can be computed exactly.\\nIf I understand correctly, the main contribution is that the authors represent the policy as a neural network and train it using a policy gradient algorithm.\\nThis is a good first step towards scalable Bayesian policy optimisation.\", \"main_feedback\": [\"In the Introduction, first paragraph, you say one of the aspects of real-world robotics is that there's \\\"(1) an underlying dynamical system with unknown latent parameters\\\". I would argue that the dynamic system itself is typically also unknown, including how it is parametrized by these latent parameters. I think it is important to point this out more explicitly in the introduction (it is mentioned in sec 2 and 5, but maybe it's worth mentioning it in 4 again as well): for the problems that you look at, you assume that the form of the transition function is known (just not its parameters phi).\", \"In the main methods section (4), it would be nice to see some more detail about the Bayes filter. 
Can you write out the distribution over the latent parameters, and write out how the filtering is done? Explain how to compute the normalising constant (and mention explicitly why this is possible for your set-up, and why it would be infeasible if the latent space cannot be discretized). How exactly is the posterior distribution represented and fed to the policy? Seeing this done explicitly in Section 4 (even if it repeats some things that are explained in 2) would help someone that is interested in (re-)implementing the proposed method.\", \"I would like to see a more critical discussion in Section 7 about the assumptions that the authors make: that the environment models are known, and that the latent space can be discretized. How realistic are those assumptions (and in which kind of real-world problems can we make them), and what are ways forward to drop these assumptions?\"], \"other_comments\": [\"Introduction: Using an encoder for the state/belief is an implementation choice, and (as I see it) not part of the main contribution. I would focus on explaining the intuition behind BPO in the introduction, and only mention the architecture choice as a side note.\", \"Related Work: The authors might be interested in the recent work of Igl et al. (ICML 2018, \\\"Deep Variational RL for POMDPs\\\"), who approximate the belief in a POMDP using variational inference and a particle filter.\"], \"significance_for_iclr\": [\"In the light-dark experiment, the authors visualise the belief that the agent has at every time step. It would have been nice to see an analysis of how exactly the belief looks also for maybe 1-2 other experiments, and how (when) the agent makes a decision based on this. This could replace Table 2 (which I guess should be called Figure 2?), which I did not find very insightful.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BkfPnoActQ | Towards Consistent Performance on Atari using Expert Demonstrations | [
"Tobias Pohlen",
"Bilal Piot",
"Todd Hester",
"Mohammad Gheshlaghi Azar",
"Dan Horgan",
"David Budden",
"Gabriel Barth-Maron",
"Hado van Hasselt",
"John Quan",
"Mel Večerík",
"Matteo Hessel",
"Rémi Munos",
"Olivier Pietquin"
] | Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games. We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games. A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably using a discount factor of 0.999 (instead of 0.99) extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states. When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyper parameters. | [
"Reinforcement Learning",
"Atari",
"RL",
"Demonstrations"
] | https://openreview.net/pdf?id=BkfPnoActQ | https://openreview.net/forum?id=BkfPnoActQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgVKdvgxN",
"S1xQQlO2Tm",
"BJezQ93FTQ",
"rJeR7yRw6Q",
"S1lsrHMvTQ",
"BkeJsysI6X",
"rkgcVWA-6X",
"SylXBw1O27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544743036169,
1542385690769,
1542208025530,
1542082342466,
1542034754875,
1542004631451,
1541689649934,
1541039930585
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper719/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper719/Authors"
],
[
"ICLR.cc/2019/Conference/Paper719/Authors"
],
[
"ICLR.cc/2019/Conference/Paper719/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper719/Authors"
],
[
"ICLR.cc/2019/Conference/Paper719/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper719/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper719/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a combination of three techniques to improve learning performance on Atari games. Good performance was shown in the paper with all three techniques applied together to DQN. However, it is hard to justify the integration of these techniques. It is also not clear why the specific decisions were made when combining them. More comprehensive experiments, such as a more systematic ablation study, are required to demonstrate the benefits of the individual components. Furthermore, it seems very hard to tell whether the improvement over existing approaches, such as Ape-X DQN, came from using the proposed techniques or from a deeper architecture (Tables 1&2&4&5). Overall, this paper is not ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Combination of three techniques.\"}",
"{\"title\": \"Thanks for the review!\", \"comment\": \"We would like to thank the reviewer for their comments.\\n\\nConcerning the Bellman operator\\nWe experimented with linear reward scaling extensively. Because there is no universal scaling constant that stabilizes training on all games, we did try various methods of deriving a scaling factor from demonstrations (mean/max reward, mean/max discounted return, etc.). We failed to find an algorithmic way of linearly scaling the reward that works consistently on all 42 games. Some games (e.g. Seaquest) have exponential reward structures, which are difficult to address using constant linear reward scaling. This is why we looked into non-linear transformations that - like linear transformations - preserve the optimal policy (at least in deterministic MDPs).\\n\\nOther transforms will work. As our theoretical analysis shows, the transform only needs to be monotonic in deterministic MDPs, and h, h^-1 being Lipschitz will still preserve convergence in stochastic MDPs.\\n\\nUnfortunately, a smoothed logarithmic transform h(z) = sign(z) * log(|z| + 1) does not have a Lipschitz continuous inverse (h^-1(z) = sign(z) * (exp(|z|) - 1)) and when adding the eps * x term to ensure Lipschitz continuity of the inverse, we can no longer find a closed-form representation of the inverse.\\n\\nWe would like to highlight that using DQN with unclipped rewards on Atari is a long-standing problem. Previous approaches such as POPART are more complex and require careful engineering. Our transformed Bellman operator is easy to understand, easy to implement, and works reliably in practice. While the contribution might seem straightforward, it addresses a long-standing problem with a novel solution that works even when no expert data is available [see e.g. https://openreview.net/forum?id=r1lyTjAqYX ].\\n\\nConcerning the TC loss\\nThe reviewer rightfully points out the trade-off between training speed and stability. 
For example, the plots for Private Eye in Fig. 3 clearly show that training without the TC loss is faster (albeit potentially unstable). We don\\u2019t actively balance training speed and stability as our results show that final performance is not impacted by the TC loss. Hence, we keep experiments running a bit longer.\\n \\nWe don\\u2019t introduce weights for the three losses (TD, TC, Imitation). The objective function L given in Sec. 3.4 is the one implemented in the code used to obtain all results.\\n\\nWe have not experimented with longer update cycles for the target network. This is an interesting idea. We will try to run additional experiments. However, it is unlikely we will have the necessary capacity to do so before the end of the rebuttal period.\\n\\nConcerning DQfD\\nOur changes are primarily technical and port the single-actor DQfD algorithm to a distributed setup. Rather than \\u201cleading to better results\\u201d by being novel ideas, these changes (continuous expert replay, disjoint prioritization, no pretraining) are necessary when running DQfD in a distributed setup. Please see point 5 in our response to AnonReviewer4 below to see why simply running DQfD with many actors is a poor choice.\\n\\n=====\", \"edit\": \"Made the link clickable.\"}",
"{\"title\": \"Thanks for the review!\", \"comment\": \"We would like to thank the reviewer for their comments.\\n\\n1. The reviewer argues that \\u201cApe-X DQfD [using our hyper parameters] has worse performance than Ape-X DQN\\u201d using the original hyper parameters. This statement is not unconditionally true and depends on the evaluation metric. We want to improve consistency and evaluate this in terms of the number of games on which our algorithm exceeds average human performance. Using this metric, Ape-X DQfD beats Ape-X DQN 39 to 35.\\n\\nWe agree that we should have made this clearer when stating our claims. We will address this issue in the revised version.\\n\\nWe do agree with the reviewer that changing hyper parameters should be avoided. However, when choosing a new research objective (consistency across the benchmark rather than mean performance), one might need to adjust the problem setting correspondingly. This is why we not only list the hyper parameters we have changed in Table 3, but for each hyper parameter we explain why we changed it.\\n\\n2. We disagree with the reviewer who believes that undoing reward clipping makes the problem \\u201cunnecessarily harder\\u201d. In the paper, we illustrate using the game Bowling how reward clipping makes it impossible to learn good policies (similarly, Pitfall suffers as abundant but small (~ -1) negative rewards immediately overshadow the sparse but big (~ 4000) rewards). We don\\u2019t want to just chase higher mean/median scores by simply getting better on games previous algorithms already perform super-human on. We want to build an algorithm that can solve more games without having to custom fit hyper parameters to individual games. As our examples show, such an algorithm must process unclipped rewards.\\n\\nThe reviewer, furthermore, insists on Ape-X DQN being better than Ape-X DQfD with gamma=0.999. However, this score \\u201cApe-X DQN* (c, 0.999)\\u201d was obtained using reward clipping. 
When comparing using the same hyper parameters (\\u201cApe-X DQN* (u, 0.999)\\u201d), Ape-X DQN performs worse in all four metrics presented.\\n\\n3. Studying the \\u201ceffect of some ad-hoc transformations\\u201d on rewards rather than the value function could be indeed interesting but it is not the main purpose of the paper. We may refer the reviewer to the original DQfD algorithm (Hester et al. 2017) that uses such a transformation.\\n\\n4. We agree that an ablation study on 42 games would be more interesting. However, doing the leave-one-out study on all games is too compute intensive at the moment (For the actors alone we\\u2019d need 5 ablations x 3 seeds x 42 games x 128 actors x 140h = 11,289,600 CPU hours). Choosing a subset of games for the ablation study is in line with previous work (e.g. [Horgan et al. 2018]). We carefully chose the six games to be as informative as possible (and explain how we chose them), which is why we do think that our ablation study provides sufficient information to understand the effect of each contribution.\\n\\n5. Simply adding more actors to the original DQfD algorithm is a poor choice of baseline. First, DQfD uses a single replay buffer and relies on prioritization to select expert transitions. This works fine when there is a single actor process feeding data into the replay buffer. However, in a distributed setup (with 128 actors), the ratio of expert to actor experience drops significantly and prioritization no longer exposes the learner to expert transitions. Second, DQfD uses a pre-training phase during which all actor processes are idle. This wastes a lot of compute resources when running a decent number of actors. Many of our contributions help eliminate these shortcomings.\"}",
"{\"title\": \"Combining three simple and orthogonal ideas achieves good results on Atari games\", \"review\": \"The paper proposed three extensions to DQN to improve the learning performance on Atari games. The extensions are the transformed Bellman update, the temporal consistency loss, and expert demonstrations. These three extensions together make the proposed algorithm outperform the state-of-the-art results for Atari games. However, these extensions are relatively straightforward and thus the technical contribution is lean.\\n\\nThe first extension, the transformed Bellman update, is similar to reward scaling. While reward scaling is a linear transform, this paper proposed a non-linear transform. It would be great if the paper could compare the transformed Bellman update with reward scaling: for example, normalize the reward based on the best expert performance. In addition, the proposed transformation seems a bit ad-hoc. I feel that many similar transforms will work. For example, would a log transform work? It would be impressive if this transform were learned simultaneously, instead of manually crafted.\\n\\nThe second extension, the TC loss, is a double-edged sword. While it stabilizes the learning process, it can slow down the learning. It is not clear how this paper balances these two. How much weight is placed on the TC loss? I feel that the functionality of the TC loss is somewhat similar to the target network. Will reducing the frequency of updating the target network achieve a similar effect?\\n\\nThe third extension, bootstrapping from demonstration data, can greatly help the learning process. Although the paper enumerates three differences to DQfD, I am not convinced that these differences lead to significantly better results.\\n\\nOverall, this paper is clearly written. It achieves good results. The contributions are three orthogonal extensions to DQN, which are relatively straightforward. For the above reasons, I do not feel strongly about the paper. 
I am OK if the paper is accepted.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thank you for the review!\", \"comment\": \"We would like to thank the reviewer for their thorough review and comments.\\n\\nConcerning the strengths and weaknesses highlighted by the reviewer, we agree that the three techniques employed may seem orthogonal. We choose to present them in a single paper for two reasons: \\n\\nFirst, adding expert replay can make training significantly more unstable. Consider the plots of Ms. Pacman in Fig. 3 and Fig. 6. Even when increasing the discount factor and removing clipped rewards Ape-X DQN does mostly fine (training collapses after >100h). However, when continuously replaying high-rewarding episodes (total reward ~55k points), training becomes unstable without the transformed Bellman operator and the TC-loss. Hence, side-stepping the exploration problem and continuously exposing the learning algorithm to high-rewarding trajectories amplifies the instability problems. \\n\\nThe second reason concerns our goal with this paper. We want to improve consistency and generality. Previous algorithms either focused on hard-exploration games exclusively or only optimized the mean/median scores (where a few ill-performing games don\\u2019t hurt too much). With this paper, we show that it is possible to perform well across the board. Our algorithm achieves good scores on hard-exploration games while also performing well in the mean/median benchmark. \\n\\nRegarding the temporal consistency loss, we do not/cannot argue that this loss can solve Baird\\u2019s counter-example. It merely improves instability in our application by restricting the changes in network predictions (similarly, Durugkar\\u2019s constrained gradient technique does not solve Baird\\u2019s counter-example but only avoids divergence).\", \"concerning_the_questions\": \"1. The highest episode reward idea is indeed something specific to the deterministic nature of Atari and helps on very hard exploration games. 
As highlighted by the reviewer, this should not be used in a very stochastic environment.\\n\\n2. The reviewer is correct in that the first part of the proposition is trivial (the case when h is linear). However, we find that the second part is less obvious (i.e. the fixed point of the transformed operator is the transformed fixed point of the original operator). In Sec. C (appendix), we present a more general investigation of the case when h is Lipschitz and the MDP is stochastic.\\n\\n3. The definition of \\u201cbadly\\u201d depends on the notion of optimality. For example, the standard Bellman operator wouldn\\u2019t score many points on Atari when the discount factor of the MDP is low. So, instead of finding MDPs where the operator performs badly, we focus on MDPs where our operator behaves differently (i.e. the learned policy is different). Proposition 3.1 implies that such an MDP needs to be stochastic and we can construct an example.\\nConsider a bandit-like episodic MDP with two states s_start, s_terminal and two actions a_1, a_2. After taking an action a in s_start, a reward r(s_start, a) is received and the episode terminates in s_terminal. We can disregard the discount factor because of the unit episode length. The reward distributions are as follows:\\nr(s_start, a_1) = 2\\nr(s_start, a_2) = Uniform({1, 3.1})\", \"the_standard_bellman_operator_learns_the_action_value_function\": \"Q*(s_start, a_1) = 2, Q*(s_start, a_2) = 2.05.\", \"our_operator_on_the_other_hand_learns_the_action_value_function\": \"Q*(s_start, a_1) = 0.752.., Q*(s_start, a_2) = 0.740..\\n\\nHence, the greedy-policy learned by the standard operator selects a_2 and the greedy-policy learned by our operator selects a_1.\"}",
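The numbers in this example can be checked directly. A small sketch, assuming the transform h(z) = sign(z) * (sqrt(|z| + 1) - 1) + eps * z with eps = 1e-2 (the eps value is our assumption here; it reproduces the quoted 0.752../0.740.. values):

```python
import math

EPS = 1e-2  # assumed value of the eps * z term

def h(z):
    # Assumed form of the transform: sign(z) * (sqrt(|z| + 1) - 1) + eps * z.
    return math.copysign(math.sqrt(abs(z) + 1) - 1, z) + EPS * z

# Standard Bellman operator: plain expected rewards.
q_std = {"a_1": 2.0, "a_2": 0.5 * (1.0 + 3.1)}         # a_2 is greedy: 2.05 > 2

# Transformed operator on this stochastic bandit: expectation of h(reward).
q_h = {"a_1": h(2.0), "a_2": 0.5 * (h(1.0) + h(3.1))}  # a_1 is greedy
```

In a deterministic MDP, h would commute with the max over returns and the greedy policy would be unchanged; the flip here comes precisely from taking the expectation of h(r) under a stochastic reward.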
"{\"title\": \"Interesting method but needs more experiments to support\", \"review\": \"This paper proposes a method that aims to solve the following 3 problems: sensitivity to unclipped rewards, robustness to the value of the discount factor, and the exploration problem.\\n\\nPros\\nThis paper proposes a transformed Bellman operator, and the authors proved its convergence under some deterministic MDP conditions. The proposed transformed Bellman operator is interesting since it is analogous to some variance reduction techniques in the policy gradient literature. In the value-based method literature, those techniques have not been well studied.\\n\\nCons\\nI think the main issue of this paper is that the experiments cannot fully support the claimed advantages of the proposed method. \\n\\t1. With the authors' hyper-parameters, the proposed method (Ape-X DQfD) has worse performance than the baseline Ape-X DQN with the original hyper parameters of Ape-X DQN (Table 1). The authors have a version of the baseline with the same hyper parameters as the proposed method, but the modified one is worse than the original baseline, which is not satisfactory. I think in general we should try to keep the original hyper parameters, especially when the original performance is better. \\n\\t2. With the same hyper parameters, the performance of the proposed method Ape-X DQfD is better than Ape-X DQN* (with reward clipping, gamma=0.999) with human starts but worse with no-op starts (Table 2). On the whole, their performance, I would say, is similar. That makes the reader question the utility of not using reward clipping: without reward clipping, we do optimize the true objective, but the final performance is sometimes better and sometimes worse. I am afraid that undoing the reward clipping makes the problem unnecessarily harder. \\n\\t3. The transformed Bellman operator transforms the Q function by a contraction. 
It would be interesting to see how some ad-hoc transformations of the reward behave. Given the particular functional form the authors have used, it is especially interesting to see how the transformation r' = sgn(r) * sqrt(abs(r)) affects the performance. \\n\\t4. The authors ablate the method on 6 games out of the 42. However, it's mostly qualitative, rather than quantitative. I think it would be more convincing if the leave-one-out experiment could be carried out on all 42 games. \\n\\t5. The authors combine Ape-X DQN with a modified version of DQfD, as mentioned in Section 3.4. For a fair comparison, I think there should be a corresponding modified version of DQfD as a baseline. \\n\\nI think the authors propose an interesting approach; however, the experiment section, especially the ablation section, could be improved. It's hard to tell how much the transformed Bellman operator and the temporal consistency loss contribute in the average case, based on the current results. If the authors could provide more information, I'm willing to change my review.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Educated guess\", \"review\": \"The paper reads well, proposes well motivated modifications to existing methods, and gets what appears to be strong results. I have no experience in RL, and although I read the paper I don't feel able to make meaningful comments.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Three loosely related methods which are all well justified\", \"review\": \"This paper proposes a few changes to DQN training, two of which are aimed at reducing instability, and one is aimed at improving exploration (expert trajectories). Because all of these changes are well justified and the experiments are fairly thorough, I recommend acceptance. My main reservation is that the ideas presented are not very strongly thematically linked. The presence of ablation studies compensates for this to some extent.\", \"i_will_summarize_each\": \"\", \"transformed_bellman\": \"This applies a rescaling function (it's basically a monotonically increasing version of the sqrt(x) function) to the Q-function and applies the inverse of the function to the max Q-value of the next state (such that the contracting effect h-function is not \\\"applied\\\" multiple times when doing the TD backup).\", \"temporal_consistency\": \"This encourages the \\\"next state\\\" after where the TD-update is applied to not change too much. This addresses a problem discussed in (Durugkar 2018). I think the intuition here is that the state which follows the state with the TD update may be visually similar, but it does not impact the value in the past states, so its value function should not have a highly correlated change with the previous state's change in value function.\", \"dqfd\": \"Storing an expert replay buffer and an actor replay buffer. The expert replay buffer is fixed and the actor replay buffer stores the most recent \\\"actor processes\\\". Train with both a supervised imitation loss (only for the highest return episode) and the original TD loss. Additionally, the pre-training phase is removed and the ratio of expert-learned trajectories is fixed (both seem like steps in the right direction).\", \"strengths\": \"-The discussion of related work and comparison to baselines is pretty extensive. 
For example I appreciated the ablation study removing \\\"transformed Q-learning\\\" and comparison to the pop-art method. \\n\\n -The results, at least for Ape-X DQfD seem impressive to me in that the method works without reward clipping and with a much higher discount factor. Additionally the results generally outperform DQfD (uses expert trajectories) and Rainbow (no human trajectories). Additionally evidence was presented that the learned policies often exceed the performance of the human demonstrations (for example in time to achieve rewards).\", \"weaknesses\": \"-Two of the techniques: \\\"transformed bellman\\\" and \\\"temporal consistency\\\" seem well-linked thematically, but the expert demonstration idea seems orthogonal. I would have preferred splitting that idea out into a separate paper, given that the paper is already 20 pages. \\n\\n -The motivation for temporal consistency just references (Durugkar 2018). The readability of this paper would be improved if it were discussed more here as well. I also feel like the analysis could be more thorough here, for example a result using the temporal consistency loss on Baird's counter example really should be shown (like figure 2 in Durugkar's paper). \\n\\n-It would be nice to see a visualization or a toy problem with the \\\"transformed bellman\\\".\", \"questions\": \"-Is the \\\"highest return episode\\\" idea (3.4) general or is it exploiting the fact that Atari is deterministic? It seems like in general we'd want to use many high reward episodes, or the highest reward episodes that go into different parts of state space. It seems like it could be a very bad idea on certain settings (for example if the reward has a lot of randomness). \\n\\n-\\\"Proposition 3.1 shows that in the basic cases when either h is linear or the MDP is deterministic, Th\\nhas the unique fixed point h \\u25e6 Q\\u2217\\\". 
From 3.1, it looks like if h is linear, then it distributes over r(x,a) + max h^{-1}(Q) and then it also won't affect which is the max, so it would reduce to h*r(x,a) + max(Q) - which means it's just rescaling the original reward. So then this result is trivial? Please correct me if I misunderstood something here. \\n\\n-Could an MDP be constructed which causes the transformed Bellman operator to perform badly? I am imagining something where the MDP is just a single step, and there is a stochastic action which behaves like a lottery. So perhaps there is a 1-in-1-million chance to win 1-billion dollars by taking an action. If I understand correctly, the transformed Bellman operator will destroy the large reward here (because in a single step, there is just r(x,a), which h is applied to), which would make the action seem bad even though it's actually appealing.\", \"notes\": \"-I did not read the proofs in the appendix.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
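The worked bandit example in the review thread above (action values 0.752 vs. 0.740 under the transformed operator) can be checked numerically. This is a minimal sketch; the exact contraction h(z) = sgn(z)·(sqrt(|z|+1) − 1) + εz with ε = 10⁻² is an assumption not spelled out verbatim in the thread (only described as a monotonically increasing sqrt-like transform), but it reproduces the quoted values:

```python
import math

def h(z, eps=1e-2):
    # Assumed contracting value transform (see lead-in):
    # sgn(z) * (sqrt(|z| + 1) - 1) + eps * z
    return math.copysign(1.0, z) * (math.sqrt(abs(z) + 1.0) - 1.0) + eps * z

# Bandit MDP from the author response: single step, discount irrelevant.
# a_1 pays 2 deterministically; a_2 pays 1 or 3.1 with equal probability.
q_std_a1 = 2.0
q_std_a2 = 0.5 * (1.0 + 3.1)      # 2.05 -> standard operator prefers a_2

q_h_a1 = h(2.0)                   # ~0.752
q_h_a2 = 0.5 * (h(1.0) + h(3.1))  # ~0.740 -> transformed operator prefers a_1
```

Because h is concave on positive rewards, averaging h over the stochastic payoffs of a_2 penalizes its spread, which is exactly why the greedy choice flips relative to the standard operator.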
|
HJePno0cYm | Transformer-XL: Language Modeling with Longer-Term Dependency | [
"Zihang Dai*",
"Zhilin Yang*",
"Yiming Yang",
"William W. Cohen",
"Jaime Carbonell",
"Quoc V. Le",
"Ruslan Salakhutdinov"
] | We propose a novel neural architecture, Transformer-XL, for modeling longer-term dependency. To address the limitation of fixed-length contexts, we introduce a notion of recurrence by reusing the representations from the history. Empirically, we show state-of-the-art (SoTA) results on both word-level and character-level language modeling datasets, including WikiText-103, One Billion Word, Penn Treebank, and enwiki8. Notably, we improve the SoTA results from 1.06 to 0.99 in bpc on enwiki8, from 33.0 to 18.9 in perplexity on WikiText-103, and from 28.0 to 23.5 in perplexity on One Billion Word. Performance improves when the attention length increases during evaluation, and our best model attends to up to 1,600 words and 3,800 characters. To quantify the effective length of dependency, we devise a new metric and show that on WikiText-103 Transformer-XL manages to model dependency that is about 80% longer than recurrent networks and 450% longer than Transformer. Moreover, Transformer-XL is up to 1,800+ times faster than vanilla Transformer during evaluation. | [
"Language Modeling",
"Self-Attention"
] | https://openreview.net/pdf?id=HJePno0cYm | https://openreview.net/forum?id=HJePno0cYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJgCkbBfXV",
"Skx4CMJfQN",
"BJxw7iX4fN",
"Syx0H-YlbN",
"HJe5zCaEeV",
"rJeP_0nEx4",
"rkg1fe3xlE",
"S1xnzg5cyN",
"rJx2-lcqk4",
"Syxqex55JN",
"SyeITxE1JN",
"S1g-IItiRQ",
"Hylxe2VcA7",
"SyeGCoV5RX",
"rJeQ2jV50X",
"HkxXKjV5RX",
"Hkla0-dp27",
"rkgHT-tq3m",
"SJgjpKAko7"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1548009701538,
1547985611813,
1547086623371,
1545797957609,
1545031185604,
1545027183151,
1544761350973,
1544359955563,
1544359939710,
1544359921950,
1543614653730,
1543374408694,
1543289831662,
1543289802091,
1543289771051,
1543289723019,
1541403092693,
1541210556809,
1539463618595
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"~Rajarshi_Das1"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"~Noam_Shazeer1"
],
[
"ICLR.cc/2019/Conference/Paper717/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"~Anirudh_Goyal1"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/Authors"
],
[
"ICLR.cc/2019/Conference/Paper717/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper717/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper717/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"response\", \"comment\": \"Hi Rajarshi, thanks a lot for your comments. For the $m$, we define it in the paragraph right before section 3.3. It refers to the \\\"memory\\\", which can contain $h_{\\\\tau - 1}^{n-1}$ or additionally more faraway segments like $h_{\\\\tau - 2}^{n-1}$.\"}",
"{\"comment\": \"This paper makes great contributions and like many, I was sad to see it get rejected.\\n\\nI was looking at the equations closely, and the final equations describing the model have a variable \\\"m\\\" which hasn't been defined before. Specifically, I am referring to the \\\"m\\\" within the stop-gradient operator in the equation below.\\n\\n$\\\\tilde{h}_{\\\\tau}^{n - 1} = [SG(m_{\\\\tau}^{n-1}) \\\\cdot h_{\\\\tau}^{n-1}]$\\n\\nThis set of equations does not have a direct dependence on $h_{\\\\tau - 1}^{n-1}$ (the previous segment), so I am guessing \\\"m\\\" is capturing it somehow and it is not very clear presently.\\n\\nThank you in advance for the clarification.\", \"title\": \"Variable \\\"m\\\" in equation (end of page 5)\"}",
"{\"title\": \"[DEPRECATED] This version is outdated\", \"comment\": \"We have released a new version of this paper on arxiv https://arxiv.org/abs/1901.02860\\nalong with code, pretrained models, hyperparameters, as well as new (even better) results.\\n\\nPlease refer to our arxiv version in your future work.\"}",
"{\"title\": \"response\", \"comment\": \"=== About the technical contribution ===\\n\\nFirstly, as we have explained in the rebuttal and paper, trivially applying truncated BPTT to Transformer will NOT work due to the temporal confusion caused by using the same absolute positional encoding on two consecutive segments. In this work, we identify that it is the temporal confusion problem which prevents the reuse of historical hidden states. More importantly, we figure out the confusion could be resolved by only injecting relative positional information. This process of identifying, analyzing and solving the problem is a non-trivial scientific process, as no other previous or contemporary work targeting at using self-attention for language modeling has provided such a solution despite the fact that everyone working in LM is familiar with truncated BPTT. \\n\\nAdditionally, to facilitate the learning of the recurrence mechanism, we also propose a more generalizable relative positional encoding and establish its non-trivial performance advantage in ablation.\\n\\nHence, we respectfully disagree with the argument that the proposed approach is \\u201ca rather trivial application of earlier approaches such as truncated backprop.\\u201d\\n\\n=== About the value of enabling recurrence for self-attention in the context of LM ===\", \"we_think_the_question_can_be_broken_into_three_sub_questions_with_different_levels\": \"(1) Is a better language model by itself important or not?\\n(2) What is the application value of a better self-attention LM that can utilize recurrence?\\n(3) Is it useful to create recurrence in self-attention in general, e.g., beyond the text domain?\\n\\nThe question (1) essentially asks whether we need a better density estimator for text. The answer to this question can be rather subjective and differ from person to person. 
That said, as one of the most fundamental statistical questions, density estimation should have its scientific value.\\n\\nFor question (2), it is not difficult to come up with a list of potential applications. Firstly, many document-level problems could benefit from the proposed model, such as document-level summarization, translation, sequential labeling, and reading comprehension. Note that these tasks don\\u2019t have to be restricted to text generation. Secondly, besides serving as an architecture for downstream tasks, language models can also be used to perform \\u201cunsupervised feature learning\\u201d as demonstrated by recent advancements in NLP [1,2]. Hence, given a language model that can better capture the contextual information, it is very likely the hidden representations within the language model are also superior.\\n\\nFinally, question (3) is concerned with whether the techniques proposed in this work can be applied to domains other than language. On this matter, we believe there exists a common desire of capturing longer-term dependency in sequence modeling. For example, in the speech domain, the raw data often has a sample rate of 16K Hz, which means that each second of speech data is a sequence of 16K steps. Similarly, in the domain of time series analysis (e.g. sensor data), the sequence length can also be very long.\\n\\nIn summary, we believe language modeling is a reasonable testbed for model and algorithm development for NLP and, more broadly, sequence modeling.\\n\\n--------------------------------\\n[1] Peters, M., et. al. (2017) Deep contextualized word representations\\n[2] Devlin, J., et. al. (2018) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for your questions. We will publish our code along with our hyper-parameters on all the datasets very soon!\"}",
"{\"comment\": \"Very impressive results! For the billion-word benchmark, you are getting better perplexity numbers (23.5) than we have for models of comparable size (see https://arxiv.org/pdf/1811.02084.pdf). Since, as you mention, context length is not an issue for this dataset, I would like to know what you are doing better so that we can improve our own results. In particular, what are your settings for the following for the PPL=23.5 model:\", \"hyperparameters_as_defined_in_https\": \"//arxiv.org/pdf/1706.03762.pdf:\\n 1. Number of layers (n)\\n 2. Dimensionality of embedding matrices, layer inputs/outputs (d_model)\\n 3. Feed-forward hidden size (d_ff)\\n 4. Number of attention heads (h)\\n 5. key/value dimensionality (d_k), (d_v)\\n 7. Dropout rate\", \"in_addition\": \"6. Did you use the original ~800K-word vocabulary or a character-level or word-piece-level encoding scheme\\n 7. Was your setup based on the tensor2tensor library or other open-source implementation?\\n 8. Dropout rates\\n 9. Number of training epochs\\n\\nThank you in advance for the clarification.\", \"title\": \"Requesting details for billion-word-lm model hyperparameters\"}",
"{\"metareview\": \"despite the (significant) improvement in language modelling, it has always been a thorny issue whether better language models (at this level) lead to better performance in the downstream task or whether such a technique could be used to build a better conditional language model which often focuses on the aspect of generation. in this context, the reviewers found it difficult to see the merit of the proposed approach, as the technique itself may be considered a rather trivial application of earlier approaches such as truncated backprop. it would be good to apply this technique to e.g. document-level generation and see if the proposed approach can strike an amazing balance between computational efficiency and generation performance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}",
"{\"title\": \"update\", \"comment\": \"Dear reviewer, we believe we have addressed your concerns in the rebuttal (see the General Response above and the comments below). Especially, we have further improved over state-of-the-art results ever since. Do you have an updated assessment or other concerns of our paper? Thank you!\"}",
"{\"title\": \"update\", \"comment\": \"Dear reviewer, we believe we have addressed your concerns in the rebuttal (see the General Response above and the comments below). Especially, we have further improved over state-of-the-art results ever since. Do you have an updated assessment or other concerns of our paper? Thank you!\"}",
"{\"title\": \"update\", \"comment\": \"Dear reviewer, we believe we have addressed your concerns in the rebuttal (see the General Response above and the comments below). Especially, we have further improved over state-of-the-art results ever since. Do you have an updated assessment or other concerns of our paper? Thank you!\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for the comment.\\n\\n1. We don\\u2019t fully understand the suggestion. One interpretation is to evaluate the RNN and the proposed model on longer sequences than that used in training. In this case, since truncated BPTT is used in the training, one can always pass the last-step hidden state from the previous segment to the next segment as the initial state of the RNN. Hence, the RNN is actually evaluated on the entire text sequence.\\n\\n2. Thanks for pointing out this related work. We will check it and address the relationship properly in a later version.\"}",
"{\"comment\": \"Hello,\\n\\nI believe this paper is addressing an interesting problem, i.e. enabling self-attention to scale. And I enjoyed reading this paper! And the results of this paper are pretty interesting too! :-)\", \"few_points\": \"1. Generally, while evaluating long-term dependencies, I'm a bit skeptical about evaluating \\\"bpc\\\" or ppl. What has worked well for me in the past is to evaluate on longer sequences than the model was trained for. As RNNs are generally trained using one-step-ahead prediction, evaluating on longer sequences generally poses a more difficult problem. I personally always use this metric and mostly use bpc only for the sake of submitting papers. So, if the authors have pre-trained models, I actually encourage them to use them and report the (same) metric on longer sequences, as compared to the respective baselines. I also think this might make the paper stronger, and it may increase the chances of the paper getting accepted. :-) Again, good work.\\n\\n\\n2. I would also like to point out that we had a paper, \\\"Sparse Attentive Backtracking\\\" (NIPS'18/NeurIPS'18), in which we propose to enable self-attention to scale, though our motivation was very different: https://arxiv.org/abs/1809.03702. It would be interesting if the authors can reference/cite this.\", \"title\": \"Interesting paper\"}",
"{\"title\": \"response\", \"comment\": \"Thanks for your valuable comments!\\n\\n[Speed Comparison]\\nAs shown in our paper, Transformer is the state-of-the-art model on language modeling, and Al Rfou et al was the previous SoTA of Transformer language models. The main argument of our results on computational time is that Transformer-XL substantially improves the speed while getting even better results. It is less interesting to obtain speedup over a poorly performing model. On the other hand, as our speedup techniques specifically target Transformers, we believe Al Rfou et al is the most appropriate baseline to test the effects of our proposed methods.\\n\\nPlease see our comments above regarding the significance and novelty of our contributions.\"}",
"{\"title\": \"response\", \"comment\": \"Thanks for your valuable comments!\\n\\n[WT2]\\nWT2 shares the same test set as WT103, and the only difference is that WT103 has more training data. Since language modeling has almost unlimited training data in nature, we believe it brings less benefit to compare models on more small-scale datasets, as we already have results on Penn Treebank, which is also a small dataset.\\n\\n[Test-time evaluation techniques]\\nIn Table 1, we show that our method without any test-time evaluation techniques is still 21+ points better than Grave et al, which employs a test-time continuous cache on WT103. On enwiki8, mLSTM + dynamic eval [1] achieves a BPC of 1.08, which is still 0.09 worse than Transformer-XL without dynamic evaluation. On One Billion Word, the best previous result did not use test-time evaluation techniques. The only exception is Penn Treebank, where we exclude results with test-time techniques to focus on comparing different architectures. This is a fair comparison because all considered models do not use test-time techniques. Moreover, according to previous results, test-time evaluation techniques bring consistent improvement to different architectures (Yang et al 2017, Merity et al 2017).\\n\\n[1] Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. Dynamic evaluation of\\nneural sequence models.\"}",
"{\"title\": \"response\", \"comment\": \"Thanks for your valuable comments!\\n\\nAs far as we know, almost all language models were evaluated by perplexity in previous work.\\n\\nPlease see our comments above regarding the importance of language modeling on its own.\"}",
"{\"title\": \"General response to the reviewers and AC\", \"comment\": \"[Latest results and significance]\\nWe have experimented with increasing the model sizes so as to match the previous work for fair comparison. As a result, we have advanced the SoTA performance from 1.06 to 0.99 in bpc on enwiki8, from 33.0 to 18.9 in perplexity on WT103, and from 28.0 to 23.5 in perplexity on One Billion Word. Note that our result on enwiki8 is the first result below 1.0 bpc on widely-studied char-level LM benchmarks. We believe the improvement is significant compared to any previous results and we also believe this substantiates the significance of the proposed methods. Changes have been made accordingly in the paper.\\n\\n\\n[Why language modeling]\\nAlthough we believe our technique will be useful where long-term dependency is involved, with applications like paragraph-level machine translation, summarization, multi-paragraph question answering, text generation, etc, we would also like to emphasize that language modeling itself is important.\\n--- Firstly, language modeling has been an independent research direction in natural language processing (NLP) for decades [1-5]. Even when we restrict our attention to neural language models in the last two years, there has been a significant amount of work focused solely on this topic [6-18] in venues like ICLR, ICML, and NeurIPS.\\n--- Secondly, language modeling is an important unsupervised pretraining objective. The biggest advance in NLP recently originated from training large-scale language models for unsupervised feature learning [19,20].\\n\\n[1] Chen, S. F., & Goodman, J. (1996). An empirical study of smoothing techniques for language modeling\\n[2] Manning, C. D., & Sch\\u00fctze, H. (1999). Foundations of statistical natural language processing\\n[3] Bengio, Y., et. al. (2003). A neural probabilistic language model\\n[4] Mikolov, T. et al. (2010) Recurrent neural network based language model\\n[5] Zaremba, W., et. al. 
(2014). Recurrent neural network regularization\\n[6] Jozefowicz, R., et. al. (2016). Exploring the limits of language modeling\\n[7] Grave, E., et. al. (2016). Improving neural language models with a continuous cache\\n[8] Press, O., & Wolf, L. (2016). Using the output embedding to improve language models\\n[9] Krause, B., et. al. (2016). Multiplicative LSTM for sequence modeling\\n[10] Merity, S., et. al. (2016). Pointer sentinel mixture models\\n[11] Dauphin, Y. N., et. al. (2017). Language modeling with gated convolutional networks\\n[12] Merity, S., et. al. (2017). Regularizing and optimizing LSTM language models\\n[13] Melis, G., et. al. (2017). On the state of the art of evaluation in neural language models.\\n[14] Yang, Z., et. al. (2017). Breaking the softmax bottleneck\\n[15] Merity, S., et. al. (2018). An Analysis of Neural Language Modeling at Multiple Scales\\n[16] Rae, J. W., et. al. (2018). Fast Parametric Learning with Activation Memorization\\n[17] Kanai, S., et. al. (2018). Sigsoftmax: Reanalysis of the Softmax Bottleneck\\n[18] Al-Rfou, R., et. al. (2018). Character-level language modeling with deeper self-attention.\\n[19] Peters, M., et. al. (2017) Deep contextualized word representations\\n[20] Devlin, J., et. al. (2018) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\\n\\n\\n[Our contributions and novelty]\\nWe believe Transformer-XL addresses an important problem. 
The key question we answer in this work is how to enable self-attention, an architecture which has a potential optimization advantage in learning long-term dependency, to really capture a longer context beyond a fixed length.\\n\\nOur main contribution is to propose a complete set of techniques that jointly enable recurrency in self-attention, rather than a set of unrelated, individual techniques.\\n--- As described in Section 3 and shown in Table 5, state reuse is not even possible without relative positional encodings, because the absolute positions in the current segment are not the same as in the next segment.\\n---Similarly, relative positional encodings alone do not improve the ability to model long-term dependency.\\n\\nAlthough our positional encodings share somewhat similar formulation to previous work such as Shaw et al, the motivation is completely different, and it is non-trivial to figure out such a combination of techniques for modeling long-term dependency with the self-attention architecture.\"}",
"{\"title\": \"This paper proposes a variant of transformer to train language model\", \"review\": \"This paper proposes a variant of the Transformer to train a language model. It uses two modifications: one is segment-level recurrence with state reuse, the other is relative positional encoding, which significantly enhances the power to model long-range dependency. Extensive experiments in terms of perplexity results are reported; especially on the WikiText-103 corpus, a significant perplexity reduction has been achieved.\\n\\nPerplexity is not a gold standard for language models; the authors are encouraged to report experimental results on real-world applications, such as word error rate reduction in ASR or BLEU score improvement in machine translation. \\n\\nCiprian Chelba and Frederick Jelinek, Structured language modeling. Computer Speech and Language (2000) 14, 283\\u2013332. \\n\\nPeng Xu, Frederick Jelinek: Random forests and the data sparseness problem in language modeling. Computer Speech & Language 21(1): 105-152 (2007).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Using Transformer as a RNN cell applied to equal-length segments, good experimental results, but need to cover standard benchmarks and use SOTA decoding techniques for comparison.\", \"review\": \"This paper proposes a Transformer based RNN structure \\\"Transformer-XL\\\" to capture long-range contextual relations and targets on language model task. The idea is straightforward: it splits the input sequence into equal and fixed length segments, and recurrently apply the Transformer over the sequence of segments, in which the hidden states for the previous segment are treated as a memory to attend for the next segment.\\n\\nThis paper is well-organized and well-written, and easy to follow. The empirical results also demonstrate the proposed model can achieve SoTA performance on several word- and character-based language model benchmarks.\", \"pros\": \"1. The model is designed based on a careful engineering: 1) taking into account the history hidden states for long-term dependency modeling and 2) alignment scores calculated from multiple perspectives for relative position modeling and global significance capturing. In addition, in contrast to the previous Transformer-based language model, benefiting from the recurrent architecture, both training and decoding can be accelerated.\\n2. The experimental results show that the proposed Transformer-XL can surpass the baseline model and achieve new state-of-the-art perplexity or bpc on word- or char-based language model task. And, based on the proposed new metric, RECL, the analysis for context length modeling verifies the proposed model can make the best of long-range dependencies.\", \"cons\": \"1. The proposed model is ad-hoc and is only compatible with language model task. Is it possible to extend the proposed model to more general and practical tasks (e.g., seq2seq tasks)?\\n2. The absence of a popular language model benchmark, WikiText-2, which has been evaluated in most previous papers.\\n3. 
It is notable that no ubiquitous decoding techniques for language models, such as dynamic evaluation and the continuous cache pointer, are used in either the proposed model or the baselines. However, these techniques are essential for the RNN-LM baselines to achieve state-of-the-art performance, and have been standardly used in most previous works. Therefore, the comparison seems unfair.\", \"minor_comments\": \"In Figures 1 and 2, it is better to include a legend explaining the meaning of the different colors for the different nodes.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Marginal innovation\", \"review\": \"This paper puts forward a new schema for language modeling, especially for modeling relationships between parts that are far apart.\\n\\nThe experimental results on WikiText-103 are good, improving the SoTA PPL by 9.0. On the other three datasets, however, there's little or no gain. The speed comparison should be carried out over more LM models, as Al-Rfou is not the fastest.\\n\\nThe writing is not very clear, especially around the equations.\", \"overall_the_contribution_of_this_paper_is_marginally_incremental\": \"1. The major proposed idea is just to add one no-grad previous segment into the prediction for the next segment. This is similar to the Residual network idea but simpler.\\n2. Using relative positional encoding is not a new idea, e.g. https://arxiv.org/pdf/1803.02155.pdf.\\n3. Reusing previous level/segment computation with gradients fixed is also not a big innovation.\", \"typo\": \"1. end of page 3, and \\\"W.\\\" denotes\\\".\\n2. The speed experiment should be put in the main text.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJlDnoA5Y7 | Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs | [
"Sachin Kumar",
"Yulia Tsvetkov"
] | The Softmax function is used in the final layer of nearly all existing sequence-to-sequence models for language generation. However, it is usually the slowest layer to compute, which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint. We propose a general technique for replacing the softmax layer with a continuous embedding layer. Our primary innovations are a novel probabilistic loss, and a training and inference procedure in which we generate a probability distribution over pre-trained word embeddings, instead of a multinomial distribution over the vocabulary obtained via softmax. We evaluate this new class of sequence-to-sequence models with continuous outputs on the task of neural machine translation. We show that our models obtain up to 2.5x speed-up in training time while performing on par with the state-of-the-art models in terms of translation quality. These models are capable of handling very large vocabularies without compromising on translation quality. They also produce more meaningful errors than in the softmax-based models, as these errors typically lie in a subspace of the vector space of the reference translations. | [
"Language Generation",
"Regression",
"Word Embeddings",
"Machine Translation"
] | https://openreview.net/pdf?id=rJlDnoA5Y7 | https://openreview.net/forum?id=rJlDnoA5Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xFzu2_gV",
"rkl1wzn_xV",
"SyxuuyT4gV",
"HJlRMi3NxE",
"SJxqY_-zxN",
"SJeoOxOWg4",
"rkx6JoVwpX",
"Hylv394DTm",
"BJlN4K4PTQ",
"r1xMA_VPa7",
"S1gy51o03X",
"HylClIbC2m",
"S1gObe4q2X",
"BkggPnwY2X",
"rklFQ4CPi7",
"B1xGjesyoX",
"HkxNHeoR57",
"B1gSLhr45m",
"B1eWBOb4cQ",
"r1xVvGVZ5m"
],
"note_type": [
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"comment"
],
"note_created": [
1545287697450,
1545286230632,
1545027439588,
1545026325629,
1544849537555,
1544810611090,
1542044388666,
1542044335239,
1542043947549,
1542043849892,
1541480327233,
1541441014269,
1541189631828,
1541139544286,
1539986465298,
1539448985554,
1539383356199,
1538706508898,
1538689081148,
1538503260105
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"ICLR.cc/2019/Conference/Paper716/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper716/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper716/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper716/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper716/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper716/Authors"
],
[
"(anonymous)"
],
[
"~Tzu-Hsiang_Lin1"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Answers and Clarifications\", \"comment\": \"As mentioned in an earlier response, in our experience, kappa increases really fast in the initial steps of training. With proper initialization, the value of kappa is never too small for loss or gradients to misbehave. But if they do, approximation will help. In fact, using the approximation all throughout the training works quite similarly. In the appendix, we suggest using the approximation with higher dimensional embeddings.\\n\\n(1) Yes, normalization of the pre-trained embeddings is required. Thanks for pointing it out, we will make it more explicit in the draft.\\n\\n(2) We do not use centering in the experiments reported in this paper. But it's an excellent suggestion. We ran some more experiments later and it does help improve performance.\\n\\n(3) These are the embeddings we used for special tokens: BOS: all zeros (but any vector not close to any of the embeddings in the vocabulary works well, so that the model does not predict it), EOS (trained as part of fastText), UNK (average of all the embeddings not part of the vocabulary. The aim again was to find a vector not too close to the embeddings in the vocabulary. The negative of the average of the all vectors in the vocabulary also works well). We will make this more explicit in the draft as well. Thank you\"}",
"{\"comment\": \"Thank you for your comment.\\n\\nFinally, NLLvMFF works in my implementation. \\nBut like mentioned below, the normalization constant with kappa of <1 would be 'inf' value and then the loss would be '-inf', and it is required to handle losses from these predictions.\\nIn the appendix, it is suggested to use the approximation in the initial steps and switch to the actual computation. Does this process aim to deal with lower kappa well?\\n\\nAnd furthermore, it seems that some points are unclear about the pre-processing of pre-trained embeddings.\\n\\n* Normalization\\nAs e(w) in vMF formula is a unit vector, is normalization of pre-trained embedding required?\\nUsually, embeddings obtained from FastText are not unit.\\n\\n* Centering\\nPrior works suggest centering embedding would improve performance.\\nHave you adjusted the center of pre-trained embedding?\\n\\n* BOS (FastText)\\nThis point is related to FastText.\\nAs BOS is needed by the decoder and does not exist in FastText, how do you handle BOS embedding?\", \"title\": \"Handling loss when kappa of <1 and more questions\"}",
"{\"title\": \"Training was quite stable in our experiments\", \"comment\": \"Thank you for your comment!\\n\\nAlthough this is true, in our experiments we find that the value of kappa (which is set to the norm of the output vector) increases beyond 1 quite early in the training process and training is quite stable with the recommended regularization. As mentioned in an earlier comment, we use double data type for computing Bessel function and switch back to float32 after computing its log. No trick other than this is required beyond what is mentioned already in the paper. \\n\\nWe didn't experience underflow with the embeddings we used (300 dimensional). It only happens when using embeddings for which m>300. Using the mentioned approximation also works quite well. \\n\\nAnd we do plan to open source our code very soon.\"}",
"{\"title\": \"use double instead of float\", \"comment\": \"Yes, it is possibly because of using float32; we used double just for computing the Bessel function and switched back to float after computing its log. And yes, we are planning on making our implementation public after publication.\"}",
"{\"title\": \"Implementation\", \"comment\": \"I found a difficulty to implement this method.\\nIt seems that a na\\u00efve implementation does not work (or very unstable). \\nFor example, modified Bessel function of the first kind with $m=300$ takes a considerably small value, e.g., 1.228361698E-308 for z=1. $\\\\kappa^{m/2-1}$ with $m=300$ is also very peaky, and it may change the value drastically when $\\\\kappa$ is less than or larger than 1. \\nWe need a specific implementation trick to perform the training process appropriately.\\nAs the authors also pointed out in the Appendix, it may easily underflow.\\nThe details how they actually manage this hard situation is unclear.\\nThe performance of this method seems to depend on the implementation deeply.\\nDo you have a plan to open your code? Otherwise, it might be hard for readers to reproduce your experiments.\"}",
"{\"metareview\": \"this is a meta-review with the recommendation, but i will ultimately leave the final call to the programme chairs, as this submission has a number of valid concerns.\\n\\nthe proposed approach is one of the early, principled ones to use (fixed) dense vectors for computing the predictive probability without resorting to softmax, and it scales better than and works almost as well as softmax in neural sequence modelling. the reviewers as well as public commentators have noticed some (potentially significant) shortcomings, such as instability of learning due to numerical precision and the inability to use beam search (perhaps due to the sub-optimal calibration of probabilities under vMF). however, i believe these two issues should be addressed as separate follow-up work, not necessarily by the authors themselves but by a broader community who would find this approach appealing for their own work, which would only be possible if the authors presented this work and had a chance to discuss it with the community at the conference. therefore, i recommend it be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"weak accept\"}",
"{\"title\": \"Corrected the typo in the paper\", \"comment\": \"Thank you for your feedback!\", \"here_are_our_responses\": \"1. While these numbers are accurate, we didn\\u2019t face any problems during our mini-batch training. Could you elaborate on why you think this could make training hard?\\n2. This was a typo in the paper; we have corrected it. Thanks for pointing it out!\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your detailed feedback. Here are our responses to your comments:\", \"weak_baseline\": \"As you pointed out, we show results for only greedy decoding to investigate its effectiveness in identical settings. We have since updated the paper with beam search results with the baseline model. The translation quality in our models is still on par or only slightly lower than the results with beam search.\", \"closed_vocabulary\": \"In IWSLT2016 datasets, the target vocabulary size is around 55000, and around 800,000 in WMT16. The vocabulary sizes we have chosen are not arbitrary but reflect the overlap of the target vocabulary with the pre-trained embedding vocabulary. In principle, it is possible to train the embeddings on a larger monolingual corpora to increase the overlap. But the words for which embeddings could not be found in the embedding table are likely very rare words such as named entities which we handle by using a copy mechanism. Moreover, although subword methods theoretically gives us open-vocabulary setting, they still perform poorly which translating such rare words (see Table 5) because it breaks those rare words into very small units that lose meaning.\", \"convergence_speed\": \"Due to space limitations, we only report the convergence speed (Figure 1) on one dataset. But the reported results are generated and averaged from multiple runs, and we have achieved consistent performance on all the datasets. We have updated those figures in the appendix. We also report total training time for all the datasets in Table 3 which are also averaged results across multiple runs.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your thorough feedback, we have updated the paper addressing your comments!\", \"here_are_our_replies_to_address_your_comments\": \"\", \"small_dataset\": \"We show extensive analysis on smaller machine translation datasets (IWSLT) because they take short time to train and hence easier to experiment with. But with our best model we show results on par with softmax based baselines on a much larger WMT German to English dataset with 4.5 million training instances showing the effectiveness of our proposed model in a much broader setting\", \"pre_trained_embeddings_being_a_hindrance_in_large_datasets\": \"We share your concern on this matter. One of our ongoing projects involves being able to update these output embeddings as part of the training as well. It is possible to do this directly with max-margin loss (which is a contrastive loss) by making the output embeddings trainable, but with other (pairwise) losses, it\\u2019ll lead to a degenerate solution (with all outputs as zeroes). We are currently exploring a wake-sleep-like algorithm to tackle this problem.\", \"closed_vocabulary\": \"This is a good point but the vocabulary can always be increased by training the embeddings on a larger monolingual corpora. Additionally, the words which wouldn\\u2019t exist in the vocabulary are most likely (1) proper nouns like named entities which can be handled by the copy mechanism we used in the paper, or (2) rare words. For the latter, although theoretically BPE allows open vocabulary decoding, in practice we see that our model performs much better than BPE baselines in particular on rare words (Table 5). It would be interesting to explore a combination of BPE and our proposed model in future work.\\n\\nNo, Predicting the word with highest probability using vMF has the same computational complexity as nearest neighbor search\\n\\nWe choose F1 to control for noise in the predicted sentence. 
Recall will only measure whether a reference word is produced. But by including precision, we measure what fraction of the predicted words are actually in the reference. So a sentence producing all the words in the reference but also a lot of garbage will be given a lower score.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your feedback! Here are our responses:\", \"beam_search\": \"As pointed out in one of our earlier comments (https://openreview.net/forum?id=rJlDnoA5Y7&noteId=HkxNHeoR57), beam search is not impossible to do with our proposed model, but it is not as trivial as just using the k nearest neighbors as candidates. It requires substantial investigation, which is why we leave it as future work. We included BLEU scores with just greedy search for fair comparison with baselines and to keep in line with earlier work with similar motivation to ours (https://arxiv.org/abs/1704.06918). But thank you for your suggestion; we have now updated the draft to include results with beam search as well. Note that translation quality in our models is still on par with or only slightly lower than the results with beam search.\\n\\nBeam search is also known to slow down decoding, and there has been work in the past to get rid of it in softmax-based architectures (https://openreview.net/forum?id=rJZlKFkvM, https://arxiv.org/pdf/1701.02854.pdf). The latter paper, for example, proposes a deterministic alternative to decoding where, instead of sampling from the softmax output at each step, you feed the entire softmax distribution to the next step. We plan to explore a similar approach in the future where the output vector is fed directly to the next step as opposed to finding its nearest neighbor and feeding that to the next step.\", \"convergence_time\": \"Convergence time, which is reflective of total training time, is a crucial factor in machine translation systems, some of which can take weeks to train, and is reported by the transformer paper as well (https://arxiv.org/abs/1706.03762). We report the number of samples processed per second (Figure 1) instead of FLOPs (floating point operations) per second, as was reported in the transformer paper. The metrics correlate. 
We believe that FLOPs measure is more noisy because it's hard to keep GPUs utilized at 100%.\", \"use_of_elmo_or_cove\": \"This is a great suggestion, thank you. But as you pointed out, they are contextual embeddings and it\\u2019s not clear how to directly incorporate them. But it would an exciting future direction for this work.\"}",
"{\"comment\": \"I also like the idea of the proposed method and have some concerns about the following points.\\n\\n* Loss function\\n\\nIn the NLLvMF formula following equation (2), the first term (the normalization term) seems to be much larger than the second term. Let m = 300 as described and k = 15 (empirically the average norm of an activated vector with 300D); the loss would be about 440 and 10 for the first and second terms, respectively. This may make mini-batch training hard.\\n\\n* Approximated ratio of Bessel functions\\n\\nThe approximated ratio of Bessel functions suggested in the appendix is different from that in Ruiz-Antol\\u00edn & Segura, 2016, which proposed an {I_v} / {I_(v-1)} approximation, not {I_(v+1)} / {I_(v-1)}.\", \"title\": \"Some concerns\"}",
"{\"title\": \"cool new approach with some limitations\", \"review\": \"This paper proposes to replace the softmax over the vocab in the decoder with a single embedding layer using the Von Mises-Fisher distribution, which speeds up training 2.5x compared to a standard softmax+cross entropy decoder. The goal is admirable, as the softmax during training is a huge time sink (the proposed approach does not speed up inference due to requiring a nearest neighbor computation over the whole vocab). The approach is evaluated on machine translation (De/F>En and En>F), and the results indicate that there is minor quality loss (measured by BLEU) when using vMF. One huge limitation of the approach is the lack of a beam search-like algorithm; as such, the model is compared to greedy softmax+CE decoders (I would like to see numbers with a standard beam search model as well just to emphasize the quality drop from the state-of-the-art systems). With that said, I found this approach quite exciting and it has potential to be further improved, so I'm a weak accept.\", \"comments\": [\"is convergence time the right thing to measure when you're comparing the two different types of models? i'd like to see something like flops as in the transformer paper.\", \"relatedly, it's great that you can use a bigger batch size! this could be very important especially for non-MT tasks that require producing longer output sequences (e.g., summarization).\", \"it looks like the choice of pretrained embedding makes a very significant difference in BLEU. i wonder if contextualized embeddings such as ELMo or CoVE could be somehow incorporated into this framework, since they generally outperform static word embeddings.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Neat idea backed by a solid technical contribution\", \"review\": \"This paper describes a technique for replacing the softmax layer in sequence-to-sequence models with one that attempts to predict a continuous word embedding, which will then be mapped into a (potentially huge) pre-trained embedding vector via nearest neighbor search. The obvious choice for building a loss around such a prediction (squared error) is shown to be inappropriate empirically, and instead a von Mises-Fisher loss is proposed. Experiments conducted on small-data, small-model, greedy-search German->English, French->English and English->French scenarios demonstrate translation quality on par with BPE, and superior performance to a number of other continuous vector losses. They also provide convincing arguments that this new objective is more efficient in terms of both time and number of learned parameters.\\n\\nThis is a nice innovation for sequence-to-sequence modeling. The technical contribution required to make it work is non-trivial, and the authors have demonstrated promising results on a small system. I\\u2019m not sure whether this has any chance of supplanting BPE as the go-to solution for large vocabulary models, but I think it\\u2019s very healthy to add this method to the discussion.\\n\\nOther than the aforementioned small baseline systems, this paper has few issues, so I\\u2019ll take some of my usual \\u2018problems with the paper\\u2019 space to discuss some downsides with this method. First: the need to use pre-trained word embeddings may be a step backward. It\\u2019s always a little scary to introduce more steps into the pipeline, and it\\u2019s uncomfortable to hear the authors state that they may be able to improve performance by changing the word embedding objective. As we move to large training sets, having pre-trained embeddings is likely to stop being an advantage and start being a hindrance. 
Second: though this can drastically increase vocabulary sizes, it is still a closed vocabulary model, which is a weakness when compared to BPE (though I suppose you could do both).\", \"smaller_issues\": \"First paragraph after equation (1): \\u201cthe hidden state \\u2026 t, h.\\u201d -> \\u201cthe hidden state h \\u2026 t.\\u201d\\n\\nEquation (2): it might help your readers to spell out how setting \\\\kappa to ||\\\\hat{e}|| allows you to ignore the unit-norm assumption of \\\\mu.\\n\\n\\u201cthe negative log-likelihood of the vMF\\u2026\\u201d - missing capital\\n\\nUnnumbered equation immediately before \\u201cRegularization of NLLvMF\\u201d: C_m||\\\\hat{e}|| is missing round brackets around ||\\\\hat{e}|| to make it an argument of the C_m function.\\n\\nIs predicting the word vector whose target embedding has the highest value of vMF probability any more expensive than nearest neighbor search? Does it preclude the use of very fast nearest neighbor searches?\\n\\nIt might be a good idea to make it clear in 4.3 that you see an extension to beam search for your method to be non-trivial (and that you aren\\u2019t simply leaving out beam search for comparability to the various empirical loss functions). This didn\\u2019t become clear to me until the Future Work section.\\n\\nIn Table 5, I don\\u2019t fully understand F1 in terms of word-level translation accuracy. Recall is easy to understand (does the reference word appear in the system output?) but precision is harder to conceptualize. It might help to define the metric more carefully.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"I have some concerns about this paper.\", \"review\": \"[clarity]\\nThis paper is basically well written. \\nThe motivation is clear and reasonable.\\nHowever, I have some points that I need to confirm for review (Please see the significance part).\\n\\n\\n[originality]\\nThe idea of taking advantage of von Mises-Fisher distributions is not novel in the context of DL/DNN research community.\\nE.g.,\", \"von_mises_fisher_mixture_model_based_deep_learning\": \"Application to Face Verification.\\n\\nHowever, as described in the paper, the incorporation of von Mises-Fisher for calculating loss function seems to be novel, to the best of my knowledge.\\n\\n\\n[significance]\\nUnfortunately, the experiments in this paper do not fully support the effectiveness of the proposed method. \\nSee below for more detailed comments.\\n\\n\\n*weak baseline (comparison)\\nAs an anonymous reviewer pointed out, the author should run baseline method with beam search if the authors aim to convince readers (including reviewers) for the effectiveness of the proposed method.\\nI understand that it is important to investigate the effectiveness of the proposed method in the identical settings. However, it is also important to compare the proposed method with strong baseline to reveal the relative effectiveness of the proposed method comparing with the current state-of-the-art methods. \\n\\n\\n* open vocabulary setting\\nI am confused whether the experimental setting for the proposed method is really in an open vocabulary setting or not.\\nIf my understanding is correct, the vocabulary sizes used for the proposed method were 50,000 (iwslt2016) and 300,000 (wmt16), which cannot be an open vocabulary setting. 
\\nIf this is correct, the applicability of the proposed method is potentially limited compared with the subword-based approach.\\nCould you comment on this question?\\n\\n\\n* convergence speed\\nI think the claim of faster convergence of the proposed method in terms of iterations may be misleading. This might be true, but it is empirically proven only on a single dataset and a single run. The authors should show more empirical results on several datasets or provide a theoretical justification for this claim.\\n\\n\\nOverall, I basically like the idea of the proposed method. \\nI also aim to remove the large computational cost of softmax in the neural encoder-decoder approach.\\nIn my feeling, the proposed method should be improved a bit more for a recommendation of clear acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Training time isn't the only issue we tackle in this work\", \"comment\": \"Thank you for you response.\\n\\nAdaptive softmax can be categorized into structural approximations of softmax. We will update it in our background section (section 2). While it would achieve good gains in terms of training time over \\\"word based\\\" softmax, this gain would not be significant over BPE based models which already have a very small vocabulary. Moreover, \\\"word based\\\" softmax models don't perform on par with SOTA in many MT systems (see Table 2).\\n\\nOur main focus in this work was to get rid of the softmax layer which will also likely help in other tasks related to language generation. And we compared with MT baselines (BPE based) which are strong both in terms of speed and accuracy\"}",
"{\"comment\": \"Have you tried Adaptive softmax? It typically reduces the training time as well.\", \"title\": \"You may want to consider Adaptive softmax for comparison\"}",
"{\"title\": \"Responses and Clarifications\", \"comment\": \"Thank you for your feedback. Based on your comments, here are our responses:\\n\\n1) Initializing Embeddings: Following your comment, we have conducted experiments with initializing the embeddings in softmax based models. Our model still performs on par with those baselines: for example, for fr-en setup, initializing the embeddings gives a small gain of 0.2 BLEU, which is still in line with reported results. We'll update these results in the draft. \\n\\n2) Using BPE Embeddings: We have pointed out in footnote 11 that for different language pairs, different number BPE merge operations are often used. Moreover, the BPE operations are performed by using training data from both languages. This will require different target embeddings to be trained for different language pairs increasing the total training time per language pair. \\n\\n3) Decoding with Beam Search: In principle, it is possible to generate candidates for beam search as you pointed out by using K-Nearest Neighbors. But how to rank the partially generated sequences is not trivial (one could use the loss values themselves to rank, but initial experiments with this setting didn't result in significant gains). In this work, we focus on enabling training with continuous outputs efficiently and accurately giving us huge gains in training time. The question of decoding with beam search requires substantial investigation and we leave it as future work. This is in line with recent NMT work with similar motivation of alleviating softmax bottleneck problem (https://arxiv.org/pdf/1704.06918.pdf) who also do best-first decoding. \\nIt is noteworthy that beam search is not the only way to improve decoding quality with the proposed architecture. 
For example: this paper (https://arxiv.org/pdf/1701.02854.pdf) proposes a deterministic alternative to decoding where instead of sampling from softmax output at each step, you feed the entire softmax distribution to the next step. There are perhaps other ways of decoding which could be explored in future work, beam search is not the only option.\"}",
"{\"comment\": \"1. The link is fixed.\\n2. We can always choose to tie or not tie target embeddings for the standard NMT. How does it related to embedding pre-training? The target input/output embeddings in (Press & Wolf, 2016) refer to the matrix of size (H \\u00d7 V) in this paper, but the pre-trained embeddings have the size (m \\u00d7 V). Anyway, the proposed method leverages the knowledge of large monolingual corpora while baselines don't (but they could).\\n3. Do you mean the lower scores with no-tying embeddings are fairer to be compared?\\n4. You confirmed that the proposed method cannot use beam search, which is a disadvantage. It's not fair to compare with baselines with restricted ability -- using beam search.\\n5. Yes, this paper presents an interesting idea of directly predicting word embeddings accompanied with the Von Mises-Fisher loss function. But it has not been proved to be effective -- on par with or better than fair baselines.\", \"title\": \"not convinced\"}",
"{\"comment\": \"1. Your link is broken. The author have both results on whether the target embeddings is tied or not. (See Section 4.4 and Table 1.)\\n2. The reason why Beam Search cannot be directly applied is because Von Mises-Fisher loss is not a probability, and Beam search calculates the probability of a sequence.\\n\\nThe main idea here is a new objective(directly predicting word embeddings) with a loss function(Von Mises-Fisher) that works. As long as other settings are the same, the baselines should be reasonable enough.\", \"title\": \"Not so weak\"}",
"{\"comment\": \"1. A big advantage of the proposed method is using pre-trained embeddings on large monolingual corpora. This gives \\\"substantial improvements over softmax and BPE baselines in translating less frequent and rare words\\\". Pre-trained word embeddings are also useful for standard NMT ( http://aclweb.org/anthology/N18-2084.pdf ) but the authors failed to leveraging pre-trained embeddings for baseline models. BTW, embeddings could also be trained on BPEed corpora.\\n2. The nearest-neighbor decoding should support K-NN in principle. Why couldn't beam search be applied to the proposed method? Not using beam search also made the baselines weak.\", \"title\": \"Weak baseline models\"}"
]
} |
|
Byx83s09Km | Information-Directed Exploration for Deep Reinforcement Learning | [
"Nikolay Nikolov",
"Johannes Kirschner",
"Felix Berkenkamp",
"Andreas Krause"
] | Efficient exploration remains a major challenge for reinforcement learning. One reason is that the variability of the returns often depends on the current state and action, and is therefore heteroscedastic. Classical exploration strategies such as upper confidence bound algorithms and Thompson sampling fail to appropriately account for heteroscedasticity, even in the bandit setting. Motivated by recent findings that address this issue in bandits, we propose to use Information-Directed Sampling (IDS) for exploration in reinforcement learning. As our main contribution, we build on recent advances in distributional reinforcement learning and propose a novel, tractable approximation of IDS for deep Q-learning. The resulting exploration strategy explicitly accounts for both parametric uncertainty and heteroscedastic observation noise. We evaluate our method on Atari games and demonstrate a significant improvement over alternative approaches. | [
"reinforcement learning",
"exploration",
"information directed sampling"
] | https://openreview.net/pdf?id=Byx83s09Km | https://openreview.net/forum?id=Byx83s09Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1lpnWwkgV",
"B1lbK9KWCX",
"rylLQ5Y-A7",
"rkxMFKYbAQ",
"SJxIO823n7",
"B1xRtDI937",
"BkxudvgP27",
"SJg6C-K8jm",
"rylspvI-9m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544675764927,
1542720120905,
1542720029735,
1542719866428,
1541355117634,
1541199749966,
1540978544093,
1539899861306,
1538512834801
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper715/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper715/Authors"
],
[
"ICLR.cc/2019/Conference/Paper715/Authors"
],
[
"ICLR.cc/2019/Conference/Paper715/Authors"
],
[
"ICLR.cc/2019/Conference/Paper715/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper715/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper715/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper715/Authors"
],
[
"~Ankesh_Anand1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper introduces a method for using information directed sampling, by taking advantage of recent advances in computing parametric uncertainty and variance estimates for returns. These estimates are used to estimate the information gain, based on a formula from (Kirschner & Krause, 2018) for the bandit setting. This paper takes these ideas and puts them together in a reasonably easy-to-use and understandable way for the reinforcement learning setting, which is both nontrivial and useful. The work then demonstrates some successes in Atari. Though it is of course laudable that the paper runs on 57 Atari games, it would make the paper even stronger if a simpler setting (some toy domain) was investigated to more systematically understand this approach and some choices in the approach.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Well-written paper with a novel approach for exploration\"}",
"{\"title\": \"Author's response\", \"comment\": \"Thank you for the review and the comments.\\n\\nPlease note that in the meantime, we were able to run experiments on 55 of the Atari games. The new results support our initial findings and are included in the updated version of the paper.\\n\\nIt is correct that in the bandit setting, the information-gain function with the unnormalized noise function (rho(s,a)) leads to the correct scaling of the regret-information ratio, such that the regret of IDS can be bounded. However, it is not clear that this is necessarily the right choice when used in combination with deep reinforcement learning. In fact, the scaling of the reward differs significantly from game to game, which leads to different noise levels and values for the information gain function \\\\log(1+\\\\sigma^2_t(s, a)/\\\\rho^2(s, a)). We found that the normalized noise estimation yields better results and allows the agent to account for numerical differences across environments while favoring the same amount of risk across different games. Importantly, it preserves the signal needed for noise-sensitive exploration\\nand does not introduce a new tuning parameter. It also does not necessarily loosen the connection to IDS, which explicitly allows designing policies using different information-gain functions.\\n\\nThe lower bound on the return variance was introduced only for numerical reasons. Further, it prevents the agent from overcommitting to low-variance actions. Even in the bandit case, the strategy degenerates as the noise variance of a single action goes to zero (because that way, the information gain of any action can be made arbitrarily large). Also, since the return variance is normalized, the values of return variance of different actions are relatively close to 1. Hence, a lower bound of 0.25 would not introduce a significant difference. We note that we did not tune this value at all and selected it heuristically. 
We also conducted experiments without the lower bound on rho. While the per-game scores may slightly differ, the overall change in mean human-normalized score was only 23%. This is added to the revised version of the paper.\\n\\nTo clarify the way in which \\\\rho(s, a)^2 is computed in Algorithm 1: The bootstrap heads are used only to compute the predictive parametric uncertainty \\\\sigma(s,a)^2. The return uncertainty \\\\rho(s,a)^2 is computed based only on the output Z(s,a) of the distributional head. We have added the exact formula for Var(Z(s,a)) at the end of page 6 in the paper.\\n\\nCan you also please clarify your note about the color codes in Figure 1?\"}",
"{\"title\": \"Author's response\", \"comment\": \"Thank you for the review and the comments.\\n\\nWe would first like to report that in the meantime we were able to run our experiments on 55 Atari games simulated via the OpenAI gym interface. The result table is updated in the revised version of our paper and supports our initial findings: The homoscedastic DQN-IDS achieves a score of 757;187 (%mean; %median), and the heteroscedastic C51-IDS achieves 1058;253 which is competitive with IQN (1048; 218).\", \"regarding_the_concern_that_the_gain_of_c51_ids_is_due_to_more_exploitative_actions\": \"It is true that the main difference between DQN-IDS and C51-IDS is that C51-IDS tends to favor actions with lower return uncertainty (risk). However, the improved performance is unlikely to be due to more extensive exploitation. First of all, the results in Tables 1, 3 and 4 are based on evaluation scores. These evaluation scores are obtained by running the agents with an evaluation policy which is computed in the same way for both DQN-IDS and C51-IDS and acts greedily w.r.t. the mean of all bootstrapped heads (Eq. 10). If C51-IDS were only focusing on exploitation during training (i.e. the data-collection process, while the IDS policy is being run), it would not be able to explore sufficiently and would likely converge to a suboptimal policy. Hence, we would observe worse evaluation scores compared to DQN-IDS, which is not the case, as demonstrated by the overall results. Furthermore, even though actions with lower return uncertainty have higher information gain (as computed by C51-IDS), this does not necessarily lead to exploitation, as the choice additionally depends on the amount of parametric uncertainty as well as the ratio between regret and information gain (see also the Gaussian process example in Fig. 1). 
Additionally, it is not necessarily true that an action with a lower return uncertainty would be the greedy one.\\n\\nIn terms of the comparison between Bootstrapped DQN and DQN-IDS, we previously ran some experiments on Bootstrapped DQN using the Adam optimizer and observed very little difference compared to RMSProp. We agree that a fair comparison would require running Bootstrapped DQN with the Adam Optimizer. We have corrected our claim in the paper. However, since this is not the focus of our paper and given the available computational resources, we will be unable to include Bootstrapped DQN results with the Adam optimizer over all 57 Atari games. We will also release the code after the final decision, which includes our implementation of Bootstrapped DQN.\"}",
"{\"title\": \"Author's response\", \"comment\": \"Thank you for the review and the suggestions.\\n\\nWe would first like to report that in the meantime we were able to run our experiments on 55 Atari games simulated via the OpenAI gym interface. The result table is updated in the revised version of our paper and supports our initial findings: The homoscedastic DQN-IDS achieves a score of 757;187 (%mean; %median), and the heteroscedastic C51-IDS achieves 1058;253 which is competitive with IQN (1048; 218).\", \"to_clarify_the_concern_raised_on_propagating_the_distributional_loss\": \"We emphasize that we chose not to propagate the distributional loss into the full C51-IDS network and use the C51 distribution only for control. This allows us to isolate the effect of noise-sensitive exploration and gives a fair comparison between DQN-IDS and C51-IDS. This is not a limitation of our approach, and we would expect an additional performance gain by propagating distributional gradients computed on a distributional loss like C51 or QR-DQN. This remark has been added to the paper.\\n\\nWe also conducted experiments without the lower bound on rho. While the per-game scores may slightly differ, the overall change in mean human-normalized score was only 23%. This is also mentioned in the revised version of the paper.\\n\\nIn terms of the choice of parametric uncertainty estimator, we selected Bootstrapped DQN since it allows computing the predictive distribution variance without the need for any sampling. We also briefly experimented with Neural Bayesian Linear Regression (Snoek et al, 2015), but we found Bootstrapped DQN to yield better results. However, as discussed in the related work section, we acknowledge there are other ways of estimating parametric uncertainty, such as noisy nets, Monte Carlo methods, Bayesian Dropout, etc.\\n\\nThe comparison to intrinsic motivation is re-phrased in the updated version of the paper.\"}",
"{\"title\": \"Good idea, well described, could use more experimental results\", \"review\": \"Combining the parametric uncertainty of bootstrapped DQN with the return uncertainty of C51, the authors propose a deep RL algorithm that can explore in the presence of heteroscedasticity. The motivation is quite well written, going through IDS and the approximations in a way that didn't presume prior familiarity.\\n\\nThe core idea seems quite sound, but the fact that the distributional loss can't be propagated through the full network is troubling. The authors' choice of bootstrapped DQN feels arbitrary, as a different source of parametric uncertainty might be more compatible (e.g. noisy nets), and this possibility isn't discussed.\\n\\nThe computational limitations are understandable, but the authors should be more transparent about how the subset of games was selected. A toy example would have actually added quite a bit, as it would be nice to see that the extent to which this algorithm helps is proportional to the heteroscedasticity in the environment. The advantage of DQN-IDS over Bootstrapped DQN suggests that something other than just the sensitivity to return variance is causing these improvements.\\n\\nIdeally, results with and without the heuristically chosen lower bound (rho) would be presented, as it's unclear how much this is needed and its presence loosens the connection to IDS.\\n\\nThis is a small point, but the treatment of intrinsic motivation (i.e. changing the reward function) for exploration seems overly harsh. Most of these methods are amenable to experience replay, which would propagate the exploration signals and allow for \\\"deep\\\" exploration. The fact that they often change the optimal policy should be enough motivation to not discuss them further.\", \"edit\": \"I think dealing with the lower bound and including plots for all 55 games pushed this over the edge. 
It would've been nice if there were non-zero scores on Montezuma's Revenge, but I know that is a high bar for a general purpose exploration method. In general I think this approach shows great promise going forward. Score 6-->7\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The authors propose a way of extending Information-Directed Sampling (IDS) to reinforcement learning. The proposed approach uses Bootstrapped DQN to estimate parametric uncertainty in Q-values, and distributional RL to estimate intrinsic uncertainty in the return. The two types of uncertainty are combined to obtain a simple exploration strategy based on IDS. The approach outperforms a number of strong baselines on a subset of 12 Atari 2600 games.\\n\\nClarity - I found the paper to be very well-written and easy to follow. Both the background material and the experimental setup were explained very clearly. The main ideas were also motivated quite well. It would have been nice to include a bit more discussion of why IDS is a good strategy, i.e. what are the theoretical guarantees in the bandit case? Section 3.2 could also provide a more intuitive argument.\\n\\nNovelty - The paper essentially combines the IDS formulation of Kirschner & Krause, Bootstrapped DQN of Osband et al., and the C51 distributional RL method of Bellemare et al. Most of the novelty is in how to combine these ideas effectively in the deep RL setting, which I found sufficient.\\n\\nSignificance - Improving over existing exploration strategies for deep RL would be a significant achievement. While the results are impressive, I have a few concerns regarding some of the claims.\\n\\nThe subset of games used to evaluate the proposed approach seems to be biased towards games where there is either a dense reward or exploration is known to be easy. Almost every deep RL paper on exploration includes results for at least some of the hard exploration games (see \\u201cUnifying Count-Based Exploration and Intrinsic Motivation\\u201d). Why were these games excluded from the evaluation? 
The results would be much stronger if results on all 57 games were included.\\n\\nThe main difference between DQN-IDS and C51-IDS is that C51-IDS will tend to favor actions with lower return uncertainty. Doesn\\u2019t this mean that the improved performance of C51-IDS is due to an improved ability to exploit rather than explore? If this is indeed the case, then I would expect more evidence that this doesn't come at a cost of reduced performance on tasks where exploration is difficult.\\n\\nFinally, the comparison between Bootstrapped DQN and DQN-IDS conflates the exploration strategies (IDS vs Thompson sampling) with the choice of optimizer (Adam vs RMSProp), so the claim that simply changing the exploration strategy to IDS leads to a major improvement is not valid. It would be interesting to see results for Bootstrapped DQN using the authors\\u2019 implementation and choice of optimizer to fully separate the effect of the exploration strategy.\\n\\nOverall quality - This is an interesting paper with some promising results. I\\u2019m not convinced that the proposed method leads to better exploration, but I think it still makes a valuable contribution to the work on balancing exploration and exploitation in RL.\\n\\n-------\\n\\nThe rebuttal and revisions addressed some of my concerns so I am increasing my score to 7\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The main contribution of this paper is to combine Information-Directed Sampling and Distributional Reinforcement Learning for handling heteroscedasticity of noise in Reinforcement Learning.\", \"review\": \"This paper investigates sophisticated exploration approaches for reinforcement learning. Motivated by the fact that most bandit algorithms do not handle heteroscedasticity of noise, the authors built on Information-Directed Sampling and on Distributional Reinforcement Learning to propose a new exploration algorithm family. Two versions of the exploration strategy are evaluated against the state-of-the-art on Atari games: DQN-IDS for homoscedastic noise and C51-IDS for heteroscedastic noise.\\n\\nThe paper is well-written. The background section provides the clues to understand the approach. In IDS, the selected action is the one that minimizes the ratio between a squared conservative estimate of the regret and the information gain. Following (Kirschner and Krause 2018), the authors propose to use \\\\log(1+\\\\sigma^2_t(a)/\\\\rho^2(a)) as the information gain function, which corresponds to a Gaussian prior, where \\\\sigma^2_t is the variance of the parametric estimate of E[R(a)] and \\\\rho^2(a) is the variance of R(a). \\\\sigma^2_t is evaluated by bootstrap (Bootstrapped DQN). Where the paper becomes very interesting is that recent works on distributional RL allow evaluating \\\\rho^2(a). This is the main contribution of this paper: combining two recent approaches for handling heteroscedasticity of noise in Reinforcement Learning.\", \"major_concern\": \"While the approach is appealing for handling heteroscedastic noise, the use of a normalized variance (eq 9) and a lower bound of variance (page 7) reveal that the approach needs some tuning which is not theoretically founded. \\nThis is problematic since in reinforcement learning, the environment is usually assumed to be unknown. What are the results when the lower bound of the variance is not used? 
When the variance of Z(a) is low, the variance of the parametric estimate should be low also. Is that not the case?\", \"minor_concerns\": \"The color codes of Figure 1 are unclear. The color of curves in subfigures (b) (c) (d) corresponds to the color code of IDS.\\n\\nThe way in which \\\\rho^2(s,a) is computed in algorithm 1 is not precisely described. In particular, on page 6, the equation \\\\rho^2(s,a)=Var(Z_k(s,a)) raises some questions: Is \\\\rho evaluated for a particular bootstrap k, or is \\\\rho averaged over the K bootstraps?\\n_____________________________________________________________________________________________________________________________________________\\n\\nI read the authors' answers. I increased my rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to comment\", \"comment\": \"Thank you for the comment. The currently reported range of games was chosen in the following way. We first selected 3 games on which convergence was relatively quick (BeamRider, RoadRunner, Enduro) so that we can more easily tune our algorithm. The rest of the games were chosen as a combination of games on which Bootstrapped DQN and C51 achieve improvement over the baseline. In particular, we wanted to evaluate the homoscedastic version of our algorithm (DQN-IDS) against the best scores that Bootstrapped DQN achieves. Additionally, high C51 scores indicate that C51 achieves a good estimate of the return distribution, and we wanted to test whether our algorithm (C51-IDS) can benefit from this and improve over C51. Note that the selection also includes games on which C51 achieves poor results.\\n\\nWe are currently in the process of evaluating our method on more games, and we expect to obtain further results before the end of the rebuttal period. The scores will be included in the revised version of the paper.\"}",
"{\"comment\": \"I like the idea of this paper of extending Information-Directed Sampling to large state spaces. I also appreciate the computational constraints, and why the authors decided to test on only 12 Atari environments, but I was a bit perplexed by the choice of environments. Shouldn't the games that are known to be particularly hard to explore, such as Montezuma's Revenge, Pitfall and PrivateEye, have been evaluated? The games that the paper tested on are not actually hard exploration problems (except Frostbite arguably).\", \"title\": \"Choice of Atari environments\"}"
]
} |
|
SylU3jC5Y7 | ADAPTIVE NETWORK SPARSIFICATION VIA DEPENDENT VARIATIONAL BETA-BERNOULLI DROPOUT | [
"Juho Lee",
"Saehoon Kim",
"Jaehong Yoon",
"Hae Beom Lee",
"Eunho Yang",
"Sung Ju Hwang"
] | While variational dropout approaches have been shown to be effective for network sparsification, they are still suboptimal in the sense that they set the dropout rate for each neuron without consideration of the input data. With such input-independent dropout, each neuron is evolved to be generic across inputs, which makes it difficult to sparsify networks without accuracy loss. To overcome this limitation, we propose adaptive variational dropout whose probabilities are drawn from sparsity-inducing beta-Bernoulli prior. It allows each neuron to be evolved either to be generic or specific for certain inputs, or dropped altogether. Such input-adaptive sparsity-inducing dropout allows the resulting network to tolerate larger degree of sparsity without losing its expressive power by removing redundancies among features. We validate our dependent variational beta-Bernoulli dropout on multiple public datasets, on which it obtains significantly more compact networks than baseline methods, with consistent accuracy improvements over the base networks. | [
"Bayesian deep learning",
"network pruning"
] | https://openreview.net/pdf?id=SylU3jC5Y7 | https://openreview.net/forum?id=SylU3jC5Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1xyARtglE",
"Skg3CTn00m",
"SylamO6cRQ",
"S1gDLqIdRQ",
"Bkg_LDLdRQ",
"ByeY-DL_07",
"BygPuUI_AX",
"HyejvX5hnQ",
"HyeKbrJc27",
"Byx1wGww2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544752839389,
1543585235750,
1543325732957,
1543166543263,
1543165775908,
1543165697419,
1543165551268,
1541346146538,
1541170433103,
1541005911423
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper714/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper714/Authors"
],
[
"ICLR.cc/2019/Conference/Paper714/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper714/Authors"
],
[
"ICLR.cc/2019/Conference/Paper714/Authors"
],
[
"ICLR.cc/2019/Conference/Paper714/Authors"
],
[
"ICLR.cc/2019/Conference/Paper714/Authors"
],
[
"ICLR.cc/2019/Conference/Paper714/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper714/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper714/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes Variational Beta-Bernoulli Dropout, a Bayesian method for sparsifying neural networks. The method adopts a spike-and-slab prior over the parameters of the network. The paper proposes Beta hyperpriors over the network, motivated by the Indian Buffet Process, and proposes a method for input-conditional priors.\\n\\nThe paper is well-written and the material is communicated clearly. The topic is also of interest to the community and might have important implications down the road.\\n\\nThe authors, however, failed to convince the reviewers that the paper is ready for publication at ICLR. The proposed method is very similar to earlier work. The reviewers think that the paper is not ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}",
"{\"title\": \"Answer to the concerns\", \"comment\": \"Thanks for your comment.\\n\\nComparison to generalized dropout \\nGeneralized dropout is similar to our beta-Bernoulli dropout in the sense that it places a beta prior Beta(alpha, beta) on the mask probability pi. The generalized dropout has several variants according to the choice of hyperparameters alpha and beta, and does not always lead to sparsity-promoting algorithms. To get a sparsity-promoting effect, one may choose Dropout++(0) where alpha > 1 and beta = 1, or SAL where alpha < 1 and beta < 1. Beta-Bernoulli dropout does not correspond to any of these cases since we set alpha << 1 and beta = 1. The more important difference is in the learning procedure. Generalized dropout computes the point estimate of the mask probability pi, and optimizes it via heuristics. In beta-Bernoulli dropout, we estimate an approximate posterior distribution of pi, q(pi), instead of point estimates, with a theoretically grounded concrete-Bernoulli gradient approximation, and this yields more robust results. \\n\\nTying of q and p, why use p in testing?\\nWe will try to clarify the tying of q(z_k|pi_k) and p(z_k|pi_k), and clarify that the KL divergence between them vanishes. The reason for this choice is similar to that for the input-dependent case.\\n\\nRegarding the testing, let x_* be a test instance and (X, Y) be a training set. We have\\np(x_*| X, Y) = \\\\int p(x_* | W, z_*) p(z_*|pi) p(W|X, Y) p(pi | X, Y) dpi dW dz_*,\\nand p(W|X, Y), p(pi|X, Y) are approximated with learned q(W), q(pi). However, the binary mask z_* does not depend on the training set (X, Y), so the sampling should be done with p(z_*|pi). For training, things are different because once we have observed the labels Y, the mask z depends on those labels, so the mask sampling should be done with the true posterior p(z|pi, X, Y) approximated with q(z|pi). 
In our case, since we tie p and q, we use the same sampling distribution in training/testing.\\n\\nSBP implementation issue\\nWe have observed that the performance of SBP is sensitive to kl-scaling, to the initial learning rate (due to the issue of summing vs. averaging ELBOs), and to the effect of batch normalization in the fine-tuning phase. First off, we have found that the official code was implemented to employ kl-scaling by a factor of two. For a fair comparison, we didn\\u2019t use any kl-scaling for any of the baseline methods. Second, it is very critical to select an appropriate initial learning rate for achieving a reasonable balance between sparsity and accuracy, because averaging the ELBOs yields a different set of appropriate learning rates. Finally, we have observed that batch normalization plays a crucial role in achieving the competitive performance of SBP. When batch normalization always runs in test mode during fine-tuning of the network (i.e. using the statistics obtained in the pretraining phase), the performance of SBP is considerably improved. We have observed this interesting behavior of SBP both in our code and in the official code released by the authors. Now, we have double-checked the code and tried our best to reproduce the results of all baseline methods including SBP.\"}",
"{\"title\": \"Thank you for the clarifications. The paper is in a better state now but I still have my concerns.\", \"comment\": [\"Thank you for addressing my comments and for putting in the effort to revise the submission.\", \"Regarding the Generalized Dropout / relaxation: Indeed the continuous relaxation becomes equivalent to the Bernoulli distribution when the temperature -> 0. However I wouldn't call it a gradient estimator in this case, as the derivative is zero everywhere except one point where it is infinite. Furthermore, while I appreciate the experimental comparison with Generalized Dropout, I would have liked a comparison / discussion on a more theoretical level that presents the advantages and drawbacks of each approximate posterior / prior hyperparameters.\", \"Regarding the tying of q with p: While the tying of the input dependent prior and posterior over z was explained via the footnote, the explanation for the global case (i.e. not input dependent) was missing. It will be better to refer to the footnote in both cases, if the explanation is the same. Furthermore, why is testing done via p(z_nk | pi_k) and not q(z_nk| pi_k)? Usually, in order to obtain the predictive distribution we average over the posterior rather than the prior.\", \"Regarding Figure 1: It makes sense that this can happen when \\\\beta is a vector. From the captions above the figure, \\\\beta was not bold typed so I assumed that it was a scalar. It would be better if you clarify this in text in order to avoid future misconceptions.\", \"Regarding the results from SBP: I don't see how summing vs averaging can have such a difference on the results; it just changes the effective learning rate. In this case, you should have been able to replicate the same results if you just increased the learning rate by a factor of N.\", \"Overall, I believe that the submission is in a better shape now. 
Nevertheless, I am still not sure if it is good enough for a publication so I will not change my original score.\"]}",
"{\"title\": \"Summary of the updates\", \"comment\": \"Dear reviewers,\\n\\nThank you for your valuable comments and sorry for the late response. We summarized the updates in the revision below.\\n\\nMain updates\\n- To address the comment of Reviewer 1 regarding the relation to generalized dropout [Srinivas and Babu, 2016] and the comment of Reviewer 3 regarding the comparison to the recent method, we implemented generalized dropout and VIBNet [Dai et al, 2018] and included the results in the main paper and appendix.\\n\\n- Considering the comment of Reviewer 1 saying that the figures for LeNet experiments are hard to interpret, we replaced the plots with tables. We also presented the tradeoff results for VGG-like net on CIFAR10 and CIFAR100. In the main text, the representative results with basic sparsity settings are presented with the actual number of neurons/filters learned. In the appendix, we presented the results with various sparsity levels, to highlight the tradeoff between sparsity and accuracy of the pruning methods. Our general observation is that BB and DBB achieve the highest accuracy given similar sparsity levels.\\n\\nMinor edits\\n\\n- According to the suggestion of Reviewer 3, we updated the errors to be reported with means and standard deviations. The number of neurons/filters remaining, speedup in FLOPs, and runtime memory savings are still reported as medians.\\n- We edited equations (12), (22) and (23) according to the suggestion of Reviewer 1.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"- Misleading section 3.2.\\nWhile our work was first motivated by IBP, we agree with your point that introducing IBP might not be necessary. We will restructure the paper such that we first introduce the beta-Bernoulli process, and then briefly highlight its relationship to the IBP as an infinite limit.\\n\\n- Main contribution.\\nThanks for your suggestion. We totally agree with your point that our data-dependent beta-Bernoulli (DBB) can be applied to other sparsity-inducing priors. As you mentioned, we can train DBB from scratch, while generally it produces less accurate results with similar sparsity. \\n\\n- Missing several recent works.\\nWe implemented VIBNet (Dai et al., 2018) and included the results. Please refer to the updated paper. We also changed medians to mean +/- standard deviations for all tables. Currently the speedup/memory savings are theoretical values. As you mentioned, the real speedups or memory usage depend on various factors such as the DL framework or hardware.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"- Regarding generalized dropout:\\nThanks for pointing out that the continuous relaxation of a binary latent variable gives biased gradient estimates. We stated that we are using an 'asymptotically unbiased' gradient estimator, since the continuous relaxation becomes unbiased as the temperature parameter goes to zero. We also conducted a series of experiments with generalized dropout and updated the revision. As you pointed out, beta-Bernoulli dropout (BB) is similar in that a beta prior is placed on the dropout probabilities, but BB uses a different gradient estimator and the range of hyperparameters for the beta prior is different. \\n\\n- Condition alpha < K: \\nAs you mentioned, the sparsity of IBP is valid for infinite K, but it also holds for finite but large K (alpha << K). As we mentioned in the main text, we fixed alpha/K = 1e-4 for all experiments (we controlled alpha according to K (the number of neurons/filters)).\\n\\n- Index \\\"n\\\" in Eq.11 doesn't make sense. \\nEven though we have a global pi_k, we sample a local z_n for each data point (i.e., z_1, z_2, \\u2026 z_n ~(iid) Bern(pi)). This is related to the local reparametrization trick [Kingma et al. 2015], and reduces the variance of the gradient estimator.\\n\\n- Tying q(z_nk|pi_nk)=p(z_nk|pi_k).\\nWe explained the motivation for this choice in footnote 1 on page 6 of the paper. The main reason is to keep consistency between the training and testing phases, since training is done with q(z_nk|pi_nk) and testing is done with p(z_nk|pi_k). \\n\\n- Figure 1 is misleading\\nEach block of Figure 1 represents a histogram of a set of activation values. In the third block, a bias vector beta is added, and due to the prior placed on beta, only a small number of dimensions in beta have large values. Hence, the result of adding beta to the set of activations yields a bimodal distribution as in the third block of Figure 1. 
\\n\\n- Motivation for equation 21\\nBy insignificant dimensions we mean the dimensions whose activations are negative or close to zero, so the hard-sigmoid function would map them to mask probabilities close to zero. The epsilon prevents overflow in the computation of the logits, log (p/(1-p)). As we mentioned above, for beta, we want only a small number of dimensions to be large, so that adding beta makes only a small number of hard-sigmoid outputs close to one.\\n\\n- Better to rewrite eq. 22, 23\\nThanks for your suggestion. We have updated the draft based on your comments.\\n\\n- Figures 1 and 2 are difficult to interpret\\nIn the appendix, we provided all the results in tables instead of figures. Please refer to the revised paper.\\n\\n- Baseline results are not consistent with the reported ones\\nWe re-implemented all the baselines ourselves and trained them under unified settings. The results of pruning algorithms depend heavily on hyperparameters such as the learning rate, batch size, and number of iterations. We faithfully tuned all the hyperparameters of the baseline algorithms and reported the best results. For instance, the code for SBP released by the authors uses the objective function \\\\sum_{i=1}^n ELBO(x_i), while our code uniformly uses the objective function (\\\\sum_{i=1}^n ELBO(x_i))/n, and this often makes a big difference. We will release all the code used to produce our experimental results upon acceptance of our paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"- I don't have an understanding of why learning a node-specific dropout rate should improve dropout.\\nLearning node-specific dropout rates has been explored in several previous works, and it has been consistently reported that they improve the generalization performance of deep neural networks over random ones. To list a few, [Kingma et al. 15] proposed to learn the variance of the multiplicative noise of Gaussian dropout, and [Gal et al. 17] proposed to learn the Bernoulli dropout rate by approximating it with the concrete distribution.\\n\\nOur model is Bayesian and is a way to convert regular deep neural networks into Bayesian neural networks by approximating the optimal Bernoulli \\\"noise\\\" for the neurons. Learning the distribution of the noise is a better approximation than setting it to some arbitrary distribution and hoping that they coincide. Further, learning a per-neuron dropout rate means that we approximate the true distribution of an unknown noise distribution with a more accurate distribution, compared to using a single distribution across all neurons.\\n\\nAnother, more intuitive way to understand the generalization improvement with learned dropout is that injecting noise into neurons, if we propagate the noise back to the input, is the same as injecting noise into the inputs. Thus, dropout can be viewed as a data augmentation process that simulates inputs that may arrive at test time. Here, learning the noise rate differently per neuron allows us to generate more relevant perturbations than random ones. \\n\\n- While there are many approximations introduced to make it work, if the sparsity z is something to be learned then why is it only being sampled from the beta prior in (15)? \\nPlease note that the goal of Eq (12) and below (including Eq (15)) is to define our variational distribution in order to approximate the intractable true posterior. 
Under the variational inference framework, both the prior (Eq (11)) and the likelihood play crucial roles (this is the standard principle of variational inference; see Eq (3)). Overall, the sparsity-inducing beta prior acts as a regularizer so that the model finds a configuration in which the number of activated neurons is minimized (minimizing KL[q||p]) while maintaining accuracy from the likelihood perspective (maximizing log p(X|theta)).\\n\\n[Kingma et al. 15] Variational Dropout and the Local Reparametrization Trick, NIPS 2015\\n[Gal et al. 17] Concrete Dropout, NIPS 2017\"}",
"{\"title\": \"Confusion about inference\", \"review\": \"The authors propose a dropout method that uses the beta-Bernoulli process to learn the sparsity rate for each node.\\n\\nThe model itself makes sense to me, though I don't have an understanding of why learning a node-specific sparsity rate should improve dropout -- i.e., what is there to learn? From what I understand about dropout, it's a stochastic method that has the same marginal as the original model, but because of the randomness induced it avoids bad local optima. Thus it's a learning trick, not a modeling technique. This treats dropout as something to directly model.\\n\\nMy confusion is mainly about inference. While there are many approximations introduced to make it work, if the sparsity z is something to be learned then why is it only being sampled from the beta prior in (15)? There is a likelihood term that incorporates z as well, and it seems this should be included to be strictly correct from a modeling standpoint. I didn't see any explanation in the discussion.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper that needs more work\", \"review\": [\"This work proposes Variational Beta-Bernoulli Dropout, a Bayesian way to sparsify neural networks by adopting spike-and-slab priors over the parameters of the network. Motivated by the Indian Buffet Process, the authors further adopt Beta hyperpriors for the parameters of the Bernoulli distribution and also propose a way to set up the model such that it allows for input-specific priors over the Bernoulli distributions. They then provide the necessary details for their variational approximations to the posterior distributions of both such models and experimentally validate their performance on the tasks of MNIST and CIFAR 10/100 classification.\", \"This work is in general well written and conveys the main ideas in a clear manner. Furthermore, parametrising conditional group sparsity in a Bayesian way is an interesting avenue for research that can further facilitate computational speedups for neural networks. The overall method seems simple to implement and doesn\\u2019t introduce too many extra learnable parameters.\", \"Nevertheless, I believe that this paper needs more work in order to be published. More specifically:\", \"I believe that the authors need to further elaborate on and compare with \\u201cGeneralized Dropout\\u201d; the prior imposed on the weights for the non-dependent case is essentially the same, with only small differences in the approximate posterior. Both methods seem to optimise, rather than integrate over, the weights of the network, and the main difference is in how to handle the approximate distributions over the gates. Why would one prefer one parametrisation over the other? 
Furthermore, the authors of this work argue that they employ asymptotically unbiased gradients for the binary random variables, which is incorrect, as the continuous relaxation provides a biased gradient estimator for the underlying discrete model.\", \"In Section 3.2 the authors argue about the inherent sparsity-inducing nature of the IBP model. In the finite-K scenario this is not entirely the case, as sparsity is only encouraged for alpha < K.\", \"In Eq. 11 the index \\u201cn\\u201d doesn\\u2019t make sense, as the Bernoulli probability for each point depends only on the global pi_k. Similarly for Eq. 12.\", \"Since you tie q(z_nk|pi_k) = p(z_nk|pi_k), it makes sense to phrase Eq. 16 as just D_KL(q(pi) || p(pi)). Furthermore, I believe that you should properly motivate why tying these two is a sensible thing to do.\", \"Figure 1 is misleading; you start from a unimodal distribution and then simply apply a scalar scale and shift to the elements of that distribution. The output of that will always be a unimodal distribution, but somehow you end up with a multimodal distribution in the third part of the figure. As a result, I believe that in this case you will not have two clear modes (one at 0 and one at 1) when you apply the hard-sigmoid rectification.\", \"The motivation for 21 seems a bit confusing to me; what do you mean by insignificant dimensions? What overflow does the epsilon prevent? If the input to the hard sigmoid is a N(0, 1) distribution then approximately 1/3 of the activations will have probability close to 1. Furthermore, it seems that you want beta to be small / negative to get sparse outcomes, but the text implies that you want it to be large.\", \"It would be better to rewrite Eq. 22 to also include the fact that you have a separate z per layer, as currently it seems that there is only one z. 
Furthermore, you have written that the variational posterior distribution depends on x_n on the RHS but not on the LHS.\", \"Above Eq. 23 it seems that it should be q(z_nk|pi_k, x_n) = p(z_nk|pi_k, x_n) rather than q(z_nk|pi_k) = p(z_nk|pi_k, x_n).\", \"Regarding the experiments: the MNIST results are not particularly convincing, as the numbers are, in general, similar to other methods. Furthermore, Figure 2 is a bit small and confusing to read. Should FLOPs be on the y-axis or something else? Almost zero FLOPs for the original model doesn\\u2019t seem right. Finally, in the CIFAR 10/100 experiment it seems that both BB and DBB achieve the best performance. However, the accuracy/sparsity obtained for the baselines appears inferior to the results reported in each of the respective papers. For example, SBP managed to get a 2.71x speedup with the VGG on CIFAR 10 and an error of 7.5%, whereas here the error was 8.68% with just a 1.34x speedup. The extra visualisations provided in Figure 3 do look interesting though, as they show what the sparsity patterns learn.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Novel idea to improve compression with data-dependent structured dropout. Redundant references to IBP. Missing some experiments with state-of-the-art Bayesian compression methods.\", \"review\": \"Summary\\n------------------\\n\\nThe authors propose a new method to sparsify DNNs based on a dropout induced by a Beta-Bernoulli prior. They further propose a data-dependent dropout by linking the Beta-Bernoulli prevalence to the inputs, achieving a higher sparsification rate. In the experimental section they show that the proposed method achieves better compression rates than other methods in the literature. However, experiments against some recent methods are missing. Also, some additional experiments using data-dependent dropouts not based on the Beta-Bernoulli prior would help to better disentangle the effects of the two contributions of the paper. Overall, the paper is well written, but the mentioning of the IBP is confusing. The authors devote quite a bit of space to the IBP when it is actually not used at all.\\n\\nDetailed comments\\n-------------------------\\n\\n1)\\tIntroduction\\n\\nThe paper is well motivated, and the introduction clearly states the two main contributions of the paper: a Beta-Bernoulli dropout prior and a dependent Beta-Bernoulli dropout prior.\\n\\n2)\\tBackground\\n\\nSection 3.1 is a nice summary of variational inference for BNNs. On the other hand, Section 3.2 is misleading. The authors use this section to introduce the IBP (a generative sequential process to generate samples from a random measure called the Beta-Bernoulli process). However, this is not used in the paper at all. Then they introduce the Beta-Bernoulli prior as a finite Beta-Bernoulli process. I find this quite convoluted. 
I would suggest introducing the Beta-Bernoulli distribution as a prior directly, and stating that for small alpha/K this is a sparsity-inducing prior (where the average number of features is given by \\\\frac{\\\\alpha}{1 + \\\\frac{\\\\alpha}{K}}). There is no need to mention the IBP or the Beta-Bernoulli process. \\n\\n3)\\tMain Contribution\\n\\nI think the design of a link function that allows one to implement a data-dependent Beta-Bernoulli dropout is one of the keys of the paper, and I would suggest that the authors clearly state this contribution at the beginning of the paper. I would also like to see the application of this link function to sparsity-inducing priors other than the Beta-Bernoulli. This would allow one to further understand the data-dependent contribution to the final performance and how transferable it is to other settings. Also, have the authors tried to train the data-dependent Beta-Bernoulli from scratch, i.e. without the two-step approach? I am assuming the performance is worse, but I would publish the results for completeness.\\n\\n4)\\tExperiments\\n\\nThe main issues with the experimental section are:\\n\\na)\\tI am missing some recent methods (some of them even cited in the related work section), e.g. Louizos et al. (2017). I would be interested in comparisons against the horseshoe prior and a data-dependent version of it. Also, a recent paper based on the variational information bottleneck has recently been published, outperforming the state of the art in the field (http://proceedings.mlr.press/v80/dai18d.html).\\nb)\\tTable 1 should report the variance or another uncertainty measure: given that they run the experiments 5 times, I do not understand why they only report the median. I would encourage the authors to publish the mean and the variance (at least).\\nIn addition, one of my main questions about the method is: once the network has been sparsified, how does this translate into a real performance improvement (in terms of memory and speed)? 
In terms of memory, you can always apply a standard compression algorithm. If the sparsity is above a certain threshold, you can resort to sparse-matrix implementations. However, regarding speed, you only get a tangible improvement once you reach a certain sparsity level, and only if your DL framework supports sparse matrices. If you get a sparsity level below this threshold, e.g. 20%, you cannot resort to sparse matrices and therefore you would not get a speed improvement, unless you enforce structured sparsity or optimize low-level matrix multiplication routines. Are the Speedup/Memory results reported in Table 1 real or theoretical?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
}
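The reviews and responses above revolve around one mechanism: a Beta prior over per-neuron keep probabilities pi_k, relaxed Bernoulli gates z_nk sampled locally per data point, and a temperature that controls how "asymptotically unbiased" the relaxed gradients are. As a rough illustration only (this is not the authors' code), here is a minimal NumPy sketch; the Kumaraswamy stand-in for the Beta posterior, the binary-concrete relaxation in place of the paper's hard-sigmoid variant, and the temperature value are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_sigmoid(x):
    # Numerically stable logistic sigmoid (avoids exp overflow).
    z = np.exp(-np.abs(x))
    return np.where(x >= 0, 1.0 / (1.0 + z), z / (1.0 + z))

def beta_bernoulli_dropout(h, a, b, temperature=0.1, eps=1e-6):
    """Relaxed beta-Bernoulli dropout on a batch of activations (a sketch).

    h    : (n, K) pre-dropout activations.
    a, b : (K,) parameters of a Kumaraswamy posterior, a common
           differentiable stand-in for the Beta posterior over pi_k.
    """
    n, K = h.shape
    # Global keep probabilities pi_k, via the Kumaraswamy inverse CDF.
    u = rng.uniform(eps, 1.0 - eps, size=K)
    pi = (1.0 - u ** (1.0 / b)) ** (1.0 / a)
    # Local gates z_nk: one relaxed Bernoulli sample per data point
    # (the "local z_n" the authors describe, in the spirit of the
    # local reparametrization trick), via a binary-concrete relaxation.
    logits = np.log(pi + eps) - np.log(1.0 - pi + eps)
    v = rng.uniform(eps, 1.0 - eps, size=(n, K))
    logistic_noise = np.log(v) - np.log(1.0 - v)
    z = stable_sigmoid((logits + logistic_noise) / temperature)
    # As temperature -> 0, the gates approach exact Bernoulli samples,
    # which is the sense in which the estimator becomes unbiased.
    return h * z
```

With a small temperature the gates concentrate near 0 or 1, so multiplying `h` by `z` approximately prunes neurons whose learned keep probability is low; a sparsity-inducing Beta prior (small alpha/K) then pushes most pi_k toward zero during training.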