forum_id (string, 9–20 chars) | forum_title (string, 3–179 chars) | forum_authors (sequence, 0–82 items) | forum_abstract (string, 1–3.52k chars) | forum_keywords (sequence, 1–29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39–50 chars) | forum_url (string, 41–52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence)
---|---|---|---|---|---|---|---|---|---|---|
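Before the records themselves, a brief note on programmatic access: the sketch below shows one way to load a dump with this schema and decode the nested `reviews` field, using the Hugging Face `datasets` library and Python's standard `json` module. The repo id `openreview/iclr-forum-reviews` is a placeholder (the dataset's actual hosting location is not stated here), and the snippet assumes the `reviews` column materializes as a dict of parallel lists, as the schema above suggests.

```python
import json

from datasets import load_dataset  # pip install datasets

# Hypothetical repo id -- substitute the dataset's real location.
ds = load_dataset("openreview/iclr-forum-reviews", split="train")

row = ds[0]
print(row["forum_id"], row["forum_decision"], row["forum_title"])

# `reviews` holds parallel lists (note_id, note_type, note_created,
# note_signatures, structured_content_str). Each structured_content_str
# entry is itself a JSON-encoded string, so decode entries one by one.
reviews = row["reviews"]
for note_type, content_str in zip(reviews["note_type"],
                                  reviews["structured_content_str"]):
    content = json.loads(content_str)
    print(note_type, "->", content.get("title", "<untitled>"))
```

Keeping the note contents as JSON strings rather than typed columns is what allows heterogeneous note payloads (decisions, reviews, comments) to share one column, at the cost of a per-entry decode step like the one above.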
HkgxheBFDS | Undersensitivity in Neural Reading Comprehension | [
"Johannes Welbl",
"Pasquale Minervini",
"Max Bartolo",
"Pontus Stenetorp",
"Sebastian Riedel"
] | Neural reading comprehension models have recently achieved impressive generalisation results, yet still perform poorly when given adversarially selected input. Most prior work has studied semantically invariant text perturbations which cause a model’s prediction to change when it should not. In this work we focus on the complementary problem: excessive prediction undersensitivity where input text is meaningfully changed, and the model’s prediction does not change when it should. We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously produces the same answer as the original question – and with an even higher probability. We show that – despite comprising unanswerable questions – SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial fraction of errors on adversarially generated questions. This indicates that current models—even where they can correctly predict the answer—rely on spurious surface patterns and are not necessarily aware of all information provided in a given comprehension question. Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to substantially decrease a model’s vulnerability to undersensitivity attacks on held out evaluation data. Finally, we demonstrate that adversarially robust models generalise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1. | [
"reading comprehension",
"undersensitivity",
"adversarial questions",
"adversarial training",
"robustness",
"biased data setting"
] | Reject | https://openreview.net/pdf?id=HkgxheBFDS | https://openreview.net/forum?id=HkgxheBFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"B2RWA3OW45",
"HyeC8utKiS",
"Hyg5owFYjH",
"ByxhXUFtoS",
"SkgpGiD0tB",
"HJl-ZkURYB",
"BkeX04NtKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798751201,
1573652565837,
1573652386207,
1573652003558,
1571875605308,
1571868409299,
1571534026846
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2524/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2524/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2524/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2524/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper investigates the sensitivity of a QA model to perturbations in the input, by replacing content words, such as named entities and nouns, in questions to make the question not answerable by the document. Experimental analysis demonstrates while the original QA performance is not hurt, the models become significantly less vulnerable to such attacks. Reviewers all agree that the paper includes a thorough analysis, at the same time they all suggested extensions to the paper, such as comparison to earlier work, experimental results, which the authors made in the revision. However, reviewers also question the novelty of the approach, given data augmentation methods. Hence, I suggest rejecting the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response #2\", \"comment\": \"Thank you for your review.\\n\\nYes, one could expect that the model changes its prediction probabilities as entities in the question are exchanged, however models trained on both SQuAD2.0 and NewsQA are expected to detect when a question is unanswerable, as this is explicitly annotated and a requirement to complete the task. This runs contrary to our initial intuition.\\nIt is also not immediately obvious that models trained with a defence against this problem show substantial improvements in a biased data setting (Section 6.2), or whether sample attackability transfers between models.\\n\\nFollowing your suggestion we computed the same attack on the RoBERTa model (Liu et al. 2019, https://arxiv.org/abs/1907.11692), and find a similar picture in terms of model vulnerability. For example, with an attack budget of (rho=6, eta=256) we again find a substantial, but notably lower number of vulnerable samples (34.5% for RoBERTa, compared to 54.7% with BERT). Moreover, we find that vulnerability is transferable between models: those samples vulnerable under RoBERTa have a vulnerability rate of 90.7% on BERT, and 17.5% of concrete attacks transfer (Section 6.4). \\nThis suggests that besides stronger nominal performance on a variety of tasks, RoBERTa is also more robust than BERT w.r.t. undersensitivity attacks, and that particular undersensitivity blind spots are shared between models despite their significant difference in absolute performance. This might be related to the particular inductive bias of them sharing the same model category, but we leave such an investigation for future work.\\n\\nRegarding your second comment 2), we had randomly sampled 100 samples of successful attacks for each of the two analyses. As a concrete example, a breakdown can then be computed as follows: with 51% valid PoS attacks, and a vulnerability of 95% (eta=256, rho=6), ~48% of attacks are valid (0.51*0.95=0.4845), which is about half of all samples. Note that there are usually several successful attacks per sample, whereas this analysis only considers a single attack; this number is thus rather a lower bound on the extent of this vulnerability.\\n\\nThank you for engaging with our work and your feedback.\"}",
"{\"title\": \"Response #3\", \"comment\": \"Dear Reviewer #3,\\n\\nThank you for so thoroughly engaging with our paper and your constructive criticism.\\n\\nWe will address your concerns about i) impact ii) experimental methodology iii) comparison with Lewis and Fan (2018).\\n \\ni) You are right, data augmentation itself is by no means a new approach. It is a commonly used strategy to defend against adversaries, which is why we investigated it as a baseline for adversarial defence. We don\\u2019t see data augmentation itself as a core contribution of our work and have further clarified this in the updated version of the paper, by explicitly relating our work to Zhao et al. (2018) and Lu et al. (2018).\\nInstead, we see our main contributions as i) establishing and measuring model undersensitivity as a problem, with concrete strategies for deriving altered questions where the model fails ii) investigating how well-established defence methods (such as augmentation) can mitigate the problem iii) relating model undersensitivity to faulty predictive behaviour, which is improved under the robust model (see the biased data experiment in Section 6.2, and the new experiment on AddSent and on AddOneSent (Section 6.3). We also report several new and interesting observations regarding generalisation to held out perturbations (Section 6.1) the adversarial datasets from Jia et al. 2017 (Section 6.3) and investigate the transfer of attackable samples between models (Section 6.4). \\n\\nFor example, the following attack works for _both_ RoBERTa and BERT:\", \"text\": \"\\u201cJames Hutton is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. [...]\\u201d\", \"original\": \"\\u201cIn 1785 James Hutton presented what paper to the Royal Society of Edinburgh?\\u201d\", \"attack\": \"\\u201cIn 1785 Jacob Ettlinger presented what paper to the Royal Society of Edinburgh?\\u201d\", \"prediction\": \"\\u201cTheory of the Earth\\u201d\\n\\n\\nii) You raised an excellent point regarding the split between counterfactual examples evaluated during training and evaluation. We had not considered this thus far, and the previous experiments presented in the paper all use the same perturbation space both when computing attacks at (adversarial/augmentation) training time, and when measuring robustness at test time. You are correct, models can then potentially only learn to adapt to the particular perturbation space given during training, while failing to generalise their robustness to a different attack space used at evaluation time. \\nTo address this concern, we conducted a new set of experiments (Section 6.1), where we test all models on an entirely new perturbation space, entirely disjoint from the one used during training: interestingly, the results are very similar to those observed for the first attack space.\\nConcretely, we collected new sets of entities from English Wikipedia articles. Then, we randomly selected exactly as many entities per entity type as used during training (e.g. for the named entity type organisation (ORG) there are 26,014 possible perturbations in the attack space at training time, and the new attack space used in our evaluation would also have 26,014 organisations, and again are disjoint from those previously used). 
Interestingly we observed that examples prone to be vulnerable to an undersensitivity attack w.r.t one attack space are also prone to an attack in the new space, suggesting that the problem is sample-specific, rather than attack space-specific. Thank you very much for this excellent suggestion. We have updated the paper to include these experiments.\\n\\niii) Finally, you are right in your observation in regards to the comparison with Lewis and Fan (2018). We had opted for a different experimental setup than Lewis and Fan (2018) to avoid test leakage, and split off part of the data for validation to avoid tuning models on the same dataset that is also used for evaluation. Adopting the same setting used by Lewis and Fan (2018), we again observe relative improvements (0.3, +2.4, +10.0) F1 for the \\u2018person\\u2019, \\u2018date\\u2019, and \\u2018numerical\\u2019 subtasks, respectively, and Lewis and Fan (2018) is outperformed on two of the three subtasks. For completeness, we have updated the paper and included these results in the Appendix to make our work directly comparable to previous work.\\n\\nWe hope that our response and new experiments fully addresses your concerns and thank you for your feedback that allowed us to considerably improve the experimental rigour of our work.\"}",
"{\"title\": \"Response #1\", \"comment\": \"Dear Reviewer #1\\n\\nThank you for your time and comments on our work. \\n\\nActing on your suggestions and those from other reviewers, we further investigated the generalisation ability of the robust model. We found that it generalises better than the standard model when tested on the AddSent (66.0 to 70.3 F1) and AddOneSent (74.9 to 76.5 F1) datasets, where it -- to the best of our knowledge -- sets a new state of the art.\\nWe also investigated transfer of attacks, and found that concrete samples that were attackable under Roberta were also attackable under BERT, and for 17.5% even with the same attack. This indicates that the attacks might be specific to samples rather than particular models. Further details can be found in Sections 6.3 and 6.4.\\n\\nYes, you are correct: other types of linguistically informed perturbations are conceivable, especially those with richer (e.g. antonym) annotations in WordNet, or constituent spans. Gathering collections of constituent spans is error-prone, and we would expect there to be a substantial proportion of the perturbed questions that are not well-formed, which we already observed with PoS. One would need to work around this problem, which would require a different approach, but we believe that it would be interesting to see how models behave in this setting. However, we believe that the contributions made in this work are sufficient to warrant publication, and thus leave this analysis for future work.\\n\\nWe would like to try and answer your concern regarding sub-optimality, but are unsure if we fully address your question. \\nAll adversarial examples x\\u2019 are generally computed using fully trained models (after fine-tuning on the QA task), and reflect the model in its fully optimised state. \\nThere is, however, one exception. For adversarial training (and only in this case), we compute adversarial attacks throughout training. That is, early during adversarial training, when the model has not yet learned to fit the QA task well, it will already be used to compute adversarial attacks x\\u2019, and try to improve its adversarial vulnerability w.r.t. this currently sub-optimal model. Throughout training the model then improves on the main QA task, and adversarial examples will be computed w.r.t. this gradually improving model \\u2014 until convergence. If we misunderstood your question we are more than happy to elaborate on this further.\\n\\nIt is an interesting question how adversarial robustness and standard test metrics interact. Some prior work has pointed out that there is \\u201can inherent tension between the goal of adversarial robustness and that of standard generalization\\u201d (\\u201cRobustness May Be at Odds with Accuracy\\u201d by Tsipras et al. ICLR 2019 https://openreview.net/forum?id=SyxAb30cY7). And indeed, in our case, when we reduce a model\\u2019s ability to exploit spurious predictive cues, test set accuracy on the HasAns cases deteriorates, though with a slight overall improvement when unanswerable questions are included in the evaluation.\\n\\nWhat is, in our view, interesting, is that by relying less on spurious predictive cues the model behaviour changes qualitatively. 
Two particular insights we drew from the biased data experiment (Section 6.2) are that: i) The standard model does not reliably learn to form predictions that are specific to the entity in question ii) The robust model partly overcomes this, since becoming more robust correlates with taking into account the specific entities in question.\\n\\nFinally, thank you for your suggestion of adding alternative baselines for the adversarial attack. Thus far there is relatively little prior work on model undersensitivity. The first of your suggestions, HotFlip (Ebrahimi et al. 2018), describes an oversensitivity attack, whereas our attacks attempt to find cases of undersensitivity. Universal Adversarial Triggers (Wallace et al. 2019) on the other hand append new text to SQuAD examples that then triggers a model to predict a subspan of that new text. While all falling under the umbrella of \\u2018adversarially chosen examples\\u2019, the underlying model failures are very distinct from the undersensitivity problem we investigated, and we do not see any straightforward way in which these can be applied in the context of undersensitivity.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"As an extension of recent developments on adversarial attacks and defenses, this paper proposes a simple but effective technique called undersensitivity on machine comprehension task, where the input question is changed but the prediction does not change when it should be. They use two linguistically informed tricks; PoS and NER, to produce the perturbations. In addition to that, several techniques are developed for reducing the adversarial search spaces (Eq 1 and 3) and controlling the level of undersensitivity (Eq 2).\\n\\nIn general, the paper is very well written and clear to read. The formulation of the problem is very straightforward, too. I enjoyed reading the overall paper, especially the experimental results, which provides lots of insights about the techniques. The proposed techniques are simple but they are well-executed in the experiment with reasonable justification. Please find my detailed comments below. \\n\\n\\nMethod. \\nI appreciate the simplicity of the proposed models with clear motivations. Also, validation of the approaches is well-executed in the experiment.\\n\\nI like the idea of linguistically-controlled perturbations using PoS and NER. However, there might be many other ways to control it: for example, parsing a sentence using a constituency parser and replacing each phrase with corresponding synonyms/antonyms using WordNet might be interesting. Or, based on the parse, negating the verb might be another way to try. I would expect more linguistically-informed perturbations like these, and I could find some of them from (Kang et al 2018, Ebrahimi et al., 2018). Also, adding a couple of them in the experiment might be interesting to understand the underlying logic of the perturbations. \\n\\nOne major concern of the proposed approach is the sub-optimality by the pre-trained RC model. The undersensitivity (Eq 2) and adversarial search (Eq 3) are calculated by the probability scores predicted by the pre-trained models. This means that producing the new sample x\\u2019 is only based on the correctness of the pre-trained model on new samples generated, which sounds to be unreliable. Moreover, using the samples produced by this sub-optimal model may be very limited to produce samples under the sub-optimal space of questions. I wonder how the authors tackle this issue in the experiment. \\n\\nExperiment\\nAdversarial attacks should show how an existing system is fragile to be attacked, but at the same time augmenting or adversarially training with them needs to improve its generalization power of the system against the attacks. However, many of the adversarial attack papers mostly focus on the former but not the latter part. In this work, authors showed a result of adversarial training/augmentation but its generalization power on original task (i.e., HasAns case) was not that powerful. The unbiased data setup is interesting but still did not provide any insights about generalization from the adversaries. It would be more convincing to see how this generalization from adversarial attacks can take benefits from bit different tasks such as open-end reading comprehension as a perspective of data augmentation. 
\\n\\nI see no comparison with other attacking/defending methods in Tables 3 and 4. Adding the recent models (Ebrahimi et al., 2018, Wallance et al., 2019) may help understand how the proposed models are more effective than other techniques.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a framework for evaluating the sensitivity of a QA model to perturbations in the input. The core of the idea is that one can replace content words (i.e. named entities and nouns) in questions in such a way that makes QA models more confident of their original answer (despite, presumably, the question now being unanswerable). Replacements are constructed by mining equivalence classes in Squad data (i.e. all words w/ pos = noun are one set). Depending on how many such substitutions are searched over (and whether multiple are applied), one can find at least one such failure in about 50% of cases, on a BERT model trained on Squad2. The paper also proposes a simple mitigation technique: an objective that modifies a given QA example with all possible substitutions and trains for \\\"no answer\\\" (or alternatively substitutions which break the system). Results demonstrate that performance on Squad2 is roughly unchanged while the success rate of the attack is significantly decreased.\\n\\nWhile the idea of forming such equivalence sets is very interesting, my concern with the paper is both in terms of impact and experimental methodology.\", \"impact\": \"the method is essentially a data augmentation approach over a fixed list of words. This isn't very different than what was proposed in https://arxiv.org/pdf/1804.06876.pdf and https://arxiv.org/pdf/1807.11714.pdf . While there are some nice nuggets in the analysis, in particular that model confidence is a factor for the attack, I'm not sure anything very novel is being proposed.\", \"experimental_methodology\": \"Other works in this vein explicitly create a split between counterfactual examples evaluated at train vs at test. The methodology proposed here requires a search where there isn't a clear split between what aspects of the search are allowed at train vs test. In doing counterfactual data augmentation, it is possible the model observes most elements of the search that will be evaluated at test time, making it almost inevitable that the search will be less successful after the model is trained. A simple solution would be splitting the equivalence sets into train/test. I was not able to confirm whether or not this happened from the paper.\\n\\nThat being said, the paper did evaluate on Lewis&Fan( https://openreview.net/pdf?id=Bkx0RjA9tX ) 's bias training simulation, which I appreciate, but I was disappointed that (a) the results from Lewis&Fan were not included for comparison, and when compared the augmentation method proposed here works much worse, in some settings, than generative based training.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies undersensitivity of the neural models for reading comprehension. First, the neural model is trained on reading comprehension tasks with unanswerable questions. Then, they add perturbations to the input to turn an answerable question into an unanswerable question, using two methods, POS tag based and named entity based. Then, they search for adversarial attacks to find perturbations that the model still predicts the same prediction with even a higher probability. Experiments show that the error rate (attack success rate) is high, over 0.9 with POS tag based method and over 0.5 with named entity based method. Finally, this paper shows data augmentation and adversarial training for this perturbation help the model to be more robust, especially in a biased data scenario.\", \"the_contribution_of_this_paper_is_clear_to_me\": \"it is one of the first studies which investigates undersensitivity of the model when the input text after the perturbation is complete (e.g. in contrast to Feng et al 2018 and other related work where the perturbation causes the input text to be incomplete).\", \"the_weakness_of_this_paper_is\": \"1) the observations are somewhat obvious: it is hard to expect the model to always assign lower probabilities to the original answer when, for example, the named entity in the question is replaced to entities with the same type. Also, I think the observation could be more interesting if the adversarial attack works across different models.\\n2) Table 2 shows that the perturbation does not always work; especially with POS based method, only half of cases work. How many samples were used for this analysis? Is there a breakdown of the error rate (attack success rate) showing that the rate is still significant for valid perturbations? I think it is significant since perturbations seem to cause invalid attack with a pretty high probability.\\n\\nDespite the weakness, I think this paper demonstrates comprehensive studies on this focused area and is worth to be published in ICLR overall.\"}"
]
} |
rJxe3xSYDS | Extreme Classification via Adversarial Softmax Approximation | [
"Robert Bamler",
"Stephan Mandt"
] | Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce. Traditional softmax regression induces a gradient cost proportional to the number of classes C, which often is prohibitively expensive. A popular scalable softmax approximation relies on uniform negative sampling, which suffers from slow convergence due to a poor signal-to-noise ratio. In this paper, we propose a simple training method for drastically enhancing the gradient signal by drawing negative samples from an adversarial model that mimics the data distribution. Our contributions are three-fold: (i) an adversarial sampling mechanism that produces negative samples at a cost only logarithmic in C, thus still resulting in cheap gradient updates; (ii) a mathematical proof that this adversarial sampling minimizes the gradient variance while any bias due to non-uniform sampling can be removed; (iii) experimental results on large scale data sets that show a reduction of the training time by an order of magnitude relative to several competitive baselines.
| [
"Extreme classification",
"negative sampling"
] | Accept (Poster) | https://openreview.net/pdf?id=rJxe3xSYDS | https://openreview.net/forum?id=rJxe3xSYDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3SrwpJ5YXfk",
"U8NA3EqwE6",
"rye6PDeniH",
"SyxHbQehjr",
"Syxtigl2sr",
"ByeL_F-q9B",
"H1gf7y-RtB",
"H1lmZ7TpKr"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798751168,
1573812069440,
1573810940868,
1573810337275,
1572637038479,
1571847961599,
1571832571184
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2523/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2523/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2523/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2523/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"re\", \"comment\": \"With a dedicated team or agency, website development projects can progress more rapidly https://mlsdev.com/blog/how-to-outsource-web-development . This is especially beneficial for businesses with tight deadlines or those looking to launch their online presence quickly. Scalability: Outsourcing offers scalability, allowing businesses to scale up or down based on project requirements. This flexibility is particularly valuable for companies with fluctuating development needs. Global Perspective: Outsourcing provides access to a global perspective, bringing in diverse ideas and insights. This can contribute to a more innovative and well-rounded website development process.\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a fast training method for extreme classification problems where number of classes is very large. The method improves the negative sampling (method which uses uniform distribution to sample the negatives) by using an adversarial auxiliary model to sample negatives in a non-uniform manner. This has logarithmic computational cost and minimizes the variance in the gradients. There were some concerns about missing empirical comparisons with methods that use sampled-softmax approach for extreme classification. While these comparisons will certainly add further value to the paper, the improvement over widely used method of negative sampling and a formal analysis of improvement from hard negatives is a valuable contribution in itself that will be of interest to the community. Authors should include the experiments on small datasets to quantify the approximation gap due to negative sampling compared to full softmax, as promised.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Clarification of Our Paper's Focus and Our Contributions\", \"comment\": \"We thank the reviewer for pointing us to an extensive list of literature. We admit that related work in the wider field of extreme classification is not sufficiently acknowledged in the Related Work section of our paper and we will add a discussion that includes the suggested references in the final version of our paper. We would like to clarify that our paper is about negative sampling and we would like to stress the contributions in this context.\\n\\nFirst, we believe that our paper significantly advances the theoretical understanding of negative sampling, with practical consequences. We would like to point out in particular Theorem 2. To the best of our knowledge, this is the first formalization and rigorous proof of the intuition that \\u201chard\\u201d negative samples are \\u201cbetter\\u201d. While this intuition has been invoked in the literature before, the notions of \\u201chard\\u201d and \\u201cbetter\\u201d are usually somewhat fuzzy. Our paper formalizes this intuition by defining a well-motivated scalar measure of the signal-to-noise ratio and proving rigorously that this ratio is optimal for negative samples that are \\u201chard\\u201d in a well-defined way. This theoretical insight has practical consequences as it allowed us to design a very simple yet effective way to generate near-optimal negative samples.\\n\\nSecond, we provide experimental results on two established benchmarks and compare against five baselines. We agree that three of our baselines are variants of negative sampling. This is because our paper proposes a simple improvement of negative sampling.\\nNegative sampling is a very popular method in practice due to its simplicity (for example, it is very common in the knowledge graph embedding literature).\\n\\nFinally, the auxiliary model proposed in our paper has merit on its own. It is a simple model that can be fitted deterministically and highly efficiently, requiring tuning of only a single model hyperparameter and no hyperparameters for the training schedule.\\n\\nGiven these contributions, the paper\\u2019s focus on negative sampling, and the already extensive baseline comparisons, we respectfully disagree with the reviewer\\u2019s request for more comparisons to non-negative-sampling methods. We acknowledge that the extreme classification community has developed many algorithms with high predictive accuracy. We hope that the community will appreciate that our paper proposes a very simple, theoretically well-founded, and (as the reviewer acknowledges) faster alternative that significantly outperforms very popular approaches similar to it. We also believe that single-label classification is of enormous practical relevance, and that our proposal should not be dismissed on the grounds that it cannot also perform multi-label classification.\\n\\nHowever, we would like to thank the reviewer for their idea to run experiments on the smaller EURLex data set. Although negative sampling in this regime is somewhat artificial, a smaller data set allows us to compare against full softmax classification, providing insight into the approximation gap due to negative sampling in general. We are running these experiments and will report results in the final version of the paper. We apologize that we could not finish these experiments in time for the rebuttal deadline.\"}",
"{\"title\": \"\\\"Negative Sampling\\\" vs. \\\"Sampled Softmax\\\"\", \"comment\": \"We thank the reviewer for pointing us to relevant additional literature. While we believe that there is a confusion between \\u201cnegative sampling\\u201d (used in our paper) and \\u201csampled softmax\\u201d (used in [3] and [4]), see below, we still find the references relevant and we uploaded a new version of our paper that discusses them in the Related Works section. The updated version also fixes the grammar errors and issues with Table 1 kindly pointed out by the reviewer.\\n\\nWe like the reviewer\\u2019s idea of evaluating our method on smaller data sets where the full softmax loss can be optimized. Although negative sampling in this regime feels somewhat artificial and we don\\u2019t expect it to perform as well as real softmax classification, such experiments provide insight into the approximation gap due to negative sampling in general. We are running experiments on the smaller EURLex data set and will report results in the final version of the paper. We apologize that we could not finish these experiments in time for the rebuttal deadline.\\n\\nBefore addressing the issue of \\u201cnegative sampling\\u201d vs. \\u201csampled softmax\\u201d (Refs. [3] and [4] in the review), we would like to stress the theoretical contributions of our paper, in particular Theorem 2. To the best of our knowledge, our paper provides the first formalization and rigorous proof of the intuition that \\u201chard\\u201d negative samples are \\u201cbetter\\u201d. While this intuition has been invoked in the literature before, the notions of \\u201chard\\u201d and \\u201cbetter\\u201d are usually somewhat fuzzy. Our paper formalizes this intuition by defining a well-motivated scalar measure of the signal-to-noise ratio and proving rigorously that this ratio is optimal for negative samples that are \\u201chard\\u201d in a well-defined way. This theoretical insight has practical consequences as it allowed us to design a very simple yet effective way to generate near-optimal negative samples.\\n\\n\\nNEGATIVE SAMPLING VS. SAMPLED SOFTMAX\\n\\nReferences [3] and [4] in the review both discuss nonuniform sampling for a method called \\u201csampled softmax\\u201d, which is related but different from negative sampling. The main difference is that \\u201csampled softmax\\u201d is biased even under a uniform distribution, whereas \\u201cnegative sampling\\u201d with a uniform noise distribution is unbiased. References [3] and [4] thus use a nonuniform sampling distribution for bias reduction whereas our paper uses it for variance reduction.\\n\\nIn detail, sampled softmax directly approximates the sum over all classes in the softmax loss function (Eq. 1 in our paper) by sampling. This introduces a bias since the sum appears inside the logarithm, which is nonlinear. By contrast, negative sampling does not directly approximate the softmax loss function. Instead, it estimates a different loss function, namely for binary classification (Eq. 2 in our paper). Although the loss function is very different, minimizing it yields the same trained model parameters as minimizing the softmax loss function, as we show in Theorem 1 (in the nonparametric limit). In this sense, negative sampling approximates softmax classification.\\n\\n\\n> In fact, [3] and [4] propose methods to sample negatives from a distribution that closely\\n> approximates the softmax distribution [...] 
essentially providing the hard negative [...]\\n\\nTo our understanding, [3] and [4] do not generate \\u201chard\\u201d negative samples, i.e., negative samples that resemble positive samples from the data distribution. Their sampling distributions are designed to approximate the model distribution, not the data distribution, which is very different at the beginning of training. Also, their use of a nonuniform sampling distribution is not a means to speed up convergence by reducing gradient noise. It is simply a necessity to make the sampled softmax approximation unbiased (see Theorem 2.1 in [3] and end of Section 2 in [4]).\\n\\n> [...] without having to keep an auxiliary model.\\n\\nWhile the authors do not refer to it as an \\u201cauxiliary model\\u201d, the \\u201csummary vector\\u201d z in Eq. 8 of [3] serves an equivalent purpose. The vector even has to be updated in an expensive operation during training of the main model because the sampling distribution in sampled softmax has to follow the (changing) model distribution. By contrast, our auxiliary model can be kept static, thus simplifying training and leading to a well-defined (static) loss function.\\n\\n\\nCONCERNING REFERENCES [1] and [2]\\n\\nRefs. [1] and [2] are orthogonal to our approach: [1] generalizes negative sampling to a \\u201ctop-k\\u201d ranking task but uses only uniform sampling; [2] does not use sampling as far as we can tell. It instead focuses on a deterministic approximation of the softmax loss that is engineered for a computational model of a GPU.\"}",
"{\"title\": \"Thank you for the positive review\", \"comment\": \"We thank the reviewer for their favorable review. We are happy to hear that our derivations are comprehensible to a (self-proclaimed) non-expert. We believe that the approachability is also a strength of the proposed method: due to its simplicity, the method can be used as a drop-in replacement for the widely used negative sampling approach.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work addresses the problem of training softmax classifiers when the number of classes is extreme. The authors improve the negative sampling method which is based on reducing the multi-class problem to a binary class problem by introducing randomly chosen labels in the training. Their idea is generating the fake labels nonuniformly from an adversarial model (a decision tree). They show convincing results of improved learning rate.\\nThe work is very technical in nature, but the proposal is presented in detail and in a didactic way with appropriate connections to alternative methods, so that it may be useful for the non-expert (as me).\", \"that_is_the_reason_why_i_recommend_to_accept_this_work\": \"even not being an expert I found the paper educative in introducing the problem and interesting in explaining the proposal.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper focuses on efficient and fast training in the extreme classification setting where the number of classes C is very large. In this setting, naively using softmax based loss function incurs a prohibitively large cost as the cost of computing the loss value for each example scales linearly with C. One way to circumvent this issue is to only utilize a small subset of negative classes during the loss computation. However, uniformly sampling this subset from all the negative classes suffers from the slow convergence as such sampled negatives are not very informative for the underlying classification task.\\n\\nThe paper proposes a method to sample the negatives in a non-uniform manner. In particular, given an example, an adversarial auxiliary model that is tasked with tracking the data distribution samples the hardest (adversarial) negatives for the example. The proposed method to sample negatives has a computational cost log(C) and reduces the noise in the gradient. The authors then demonstrate the utility of their proposed approach on two well-established extreme classification datasets, i.e., Wikipedia-500K and Amazon-670K. The proposed method shows improvement over some natural baselines in terms of the wall-time for the convergence of the training process.\\n\\nComments\\n\\n1. The paper has some nice contributions and discusses the key ideas in reasonable detail. However, the reviewer feels that the authors gloss over many relevant prior works and fail to put their results in the right context. There has been quite a bit of work on non-uniformly sampling \\\"hard\\\" negative classes. For example, see [1], [2], [3], [4]. In fact, [3] and [4] propose methods to sample negatives from a distribution that closely approximates the softmax distribution at the cost that scales logarithmically in C, essentially providing the hard negative without having to keep an auxiliary model. Can the authors discuss their work in the context of these works?\\n\\n[1] Reddi et al., Stochastic Negative Mining for Learning with Large Output Spaces.\\n[2] Grave et al., Efficient softmax approximation for GPUs.\\n[3] Blanc and Rendle, Adaptive Sampled Softmax with Kernel-based Sampling.\\n[4] Rawat et al., Sampled Softmax with Random Fourier Features.\\n \\n2. In experiments, the authors do not include the performance of softmax loss (eq. (1)) due to its large computational cost. However, it would be nice to compare the proposed method with eq. (1) at least for slightly smaller datasets from the extreme classification repository. \\n\\n3. In Sec. 4, \\\"We formalize and proof...\\\" --> \\\"We formalize and prove...\\\"\\n\\n4. In Sec. 1, \\\"We present experiments on several two classifications...\\\" ---> \\\"We present experiments on two classifications...\\\"\\n\\n5. Table 1 seems to have some typos. E.g., N is the same for both the data sets. Please fix these issues.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents a method for negative sampling for softmax when dealing with classification of data to one from a large number of classes. Its main idea is to negative sample those classes which lead to higher signal to noise ratio than for uniform negative sampling. This is based on building an auxilary model using decision tree from which the adversarial negative classes are sampled, so that the distribution of the negative samples can be close to the positive ones leading to higher SNR while training. The proposed method is compared to other methods for negative samping on two publicly available large-scale datasets from the extreme classification with XML-XNN features.\", \"positives\": \"1. The proposed approach with adversarial negative sampling using an auxilary model seems interesting \\n2. It scales well to datasets with large number of classes.\", \"negatives\": \"\", \"the_experimental_evaluation_of_the_proposed_approach_lacks_completeness_and_does_not_look_convincing_for_the_following_reasons\": \"1. It misses out a recent state-of-the-art method (Slice) for negative sampling on same datasets [1], which also addresses the same problem of sampling most promising negative classes but in a different way. Furthermore, [1] also compares against many other sota methods missed out in this paper on the many other datasets datasets including those in this paper but in a more general multi-label setting.\\n2. The paper only compares against other negative sampling approaches such as AandR, NCE, and does not show what happens when no negative sampling is done such as done in (DiSMEC) [2]. This is important to understand what (if at all) is lost by doing approximation as proposed. For instance, a quick experiment reveals that DiSMEC can give about 19% accuracy on Wiki500 dataset, which is better than that achieved by the proposed method. Though it is computationally expensive but due to its simplicity, it must be discussed nevertheless to give a complete picture.\\nInstead the OVE baseline used in the paper seems quite sub-optimal in the first place, and hence stronger baselines [1,2] for which the code and results are readily available and have been duly tested in the community must be used and discussed.\\n\\nAnother aspect that the paper misses out is the role of fat-tailed distribution [3,4] of the instances among labels, which is a property of typical datasets in this regime. It is possible that one can get good accuracy but poor performance on tail-labels due to approaximations. The performance on tail-labels on appropriate metrics other than accuracy, such as MacroF1, should be evaluated.\\n\\nAlso, the proposed approach must be tested on more datasets including the smaller ones such as EURLex (also used in works referenced in the paper) on which it is easier to compare with other methods (such as DiSMEC, Slice and AttentionXML [5]) without encountering computational constraints and also bigger ones such as Amazon3M, also avilable from the repository. \\n\\nFinally, it must be investigated if the proposed method can be extended to the multi-label setting or are there inherent limitations of the model in this setting. 
The possibility to extend it to the general multi-label setting would make this approach more promising and directly comparable to wide range of algorithms.\\n\\n[1] H. Jain, V. Balasubramanian, B. Chunduri and M. Varma, Slice: Scalable linear extreme classifiers trained on 100 million labels for related searches, in WSDM 2019.\\n[2] R. Babbar, and B. Sch\\u00f6lkopf, DiSMEC - Distributed Sparse Machines for Extreme Multi-label Classification in WSDM, 2017.\\n[3] H. Jain, Y. Prabhu, and M. Varma, Extreme Multi-label Loss Functions for Recommendation, Tagging, Ranking & Other Missing Label Applications in KDD, 2016.\\n[4] R. Babbar, and B. Sch\\u00f6lkopf, Data Scarcity, Robustness and Extreme Multi-label Classification in Machine Learning Journal and European Conference on Machine Learning, 2019.\\n[5] AttentionXML: Extreme Multi-Label Text Classification with Multi-Label Attention Based Recurrent Neural Networks, NIPS 2019\"}"
]
} |
S1ly2grtvB | IS THE LABEL TRUSTFUL: TRAINING BETTER DEEP LEARNING MODEL VIA UNCERTAINTY MINING NET | [
"Yang Sun",
"Abhishek Kolagunda",
"Steven Eliuk",
"Xiaolong Wang"
] | In this work, we consider a new problem of training a deep neural network on partially labeled data with label noise. As far as we know,
there have been very few efforts to tackle such problems.
We present a novel end-to-end deep generative pipeline for improving classifier performance when dealing with such data problems. We call it
Uncertainty Mining Net (UMN).
During the training stage, we utilize all the available data (labeled and unlabeled) to train the classifier via a semi-supervised generative framework.
During training, UMN estimates the uncertainty of the labels to focus on clean data for learning. More precisely, UMN applies a sample-wise label uncertainty estimation scheme.
Extensive experiments and comparisons against state-of-the-art methods on several popular benchmark datasets demonstrate that UMN can reduce the effects of label noise and significantly improve classifier performance. | [
"Semi-supervised Learning",
"Robust Learning",
"Deep Generative Model"
] | Reject | https://openreview.net/pdf?id=S1ly2grtvB | https://openreview.net/forum?id=S1ly2grtvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_0EKo7_1Cu",
"rkgCOgohjB",
"BygADgi3jS",
"BJgqHgs3oS",
"rJgqWIYnjS",
"SJlrcMYhiH",
"Hye15ApRYr",
"SyeQxtH3tr",
"Hkx78mZ3tS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798751138,
1573855349730,
1573855334420,
1573855298107,
1573848577977,
1573847693226,
1571901062547,
1571735787160,
1571717963453
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2522/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2522/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2522/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2522/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents an interesting idea but all reviewers pointed out problems with the writing (eg clarity of the motivation) and with the motivation of the experiments and link to the contest. The rebuttal helped, but it is clear that the paper requires more work before being acceptable to ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Generative Process & our contribution\", \"comment\": \"Q9. About the \\u202fgenerative process compared to (Langevin et al., 2018)\\u2019s paper.\\nIt seems that this is a major concern of the reviewer. We thank the reviewer for drawing this to our attention and allowing us to clarify our contributions. \\nIt is correct that the generative process in Equation (2) resembles (Langevin et al., 2018). However, the key difference is the last component of Equation (3), which describes our generative process of the observed label jointly from the sample and its true label. \\nAfter arriving at Equation (3), we did not follow the common way to factorize the distribution p_\\u03b8(\\\\hat y| y, z_a) as ref [1]. But instead, we computed it with an estimated uncertainty term which ultimately enables evaluation of the probability of individual sample\\u2019s label noise. We provide proof in the Appendix for why our way of estimating label uncertainty (\\\\eps) is a sound approach. \\nBy contrast, Langevin et al., 2018 use pre-defined constant values as the class-wise corruption rates in their generative process which leads to gradient backpropagation being modulated class-wise. They use the same corruption rate for all data samples within a category. However, it is not practical to obtain the corruption rate of mis-labeled data as pre-knowledge. And using a constant value as the corruption rate for all samples of a given class is not precise during the training, e.g., the ratio of mis-labeled and correctly-labeled data in different training batches may be quite different. Which is why we were motivated to model the label uncertainty per sample instead of per class in the generative process.\", \"reference\": \"[1] Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, and Masashi Sugiyama. Masking: A new perspective of noisy supervision. arXiv preprint arXiv:1805.08193, 2018.\"}",
"{\"title\": \"labeled loss term & some derivations & low noise regime & Parametrization of NN\", \"comment\": \"Q5. About the term \\u201clabeled loss term\\u201d.\\nSorry about the confusion. To further clarify, we will include additional introduction right before equation (6).\\u202f The sentence after equation (4) will be rephrased as:\\u202f \\n\\u201cwhere\\u202fq\\u03c6(y|za),\\u202fq\\u03c6(za|x),\\u202fq\\u03c6(zb|za, y),\\u202fp\\u03b8(za|zb, y),\\u202fp\\u03b8(x|za) indicate the classifier, encoder, conditional encoder, conditional decoder and decoder functions respectively. L indicates labeled data. U is unlabeled data and C denotes the set of used classes. The last term in the equation can be treated as the\\u202flabeled\\u202floss term.\\u202fWe will also include the modification in the revised version of the paper.\\u201d \\n \\n \\nQ6. About the \\u202fequation(6), \\u201cI do not understand why f(eps) = log[(C-1)(1-eps)/eps]. Why \\u202fisn\\u2019t\\u202f it equal to log[p(hat(y)|y)] ?\\u201d\\u202f \\nBecause we have to sum over y_k \\u2208 C in the last term of equation(4). \\nFor a given observed class - j, using equation(5) to iterate over all y_k, the last term of the equation (4) reduces to equation(6): \\n q(y_j|z_a)* log[(C-1)(1-eps)/eps] + log(eps/C-1). \\nFor simplicity, subscript \\u201cj\\u201d was not shown in equation (6) and the constant term (log(eps/C-1)) can be ignored as it does not contribute to the gradient backpropagation. \\n \\nQ7. About the comments \\u201clow data-corruption regime, the method appears to be competitive with mean teacher model\\u201d.\\u202f \\nAs listed in Table 1, mean teacher model performs better than UMN with 10% corruption rate in SVHN dataset. But in most cases in the low data-corruption, UMN behaves better than other semi-supervised learning methods, such as Mean Teacher. Although most people think that small noise ratio may not hurt the performance of deep learning models, based on our observations, this assumption is not always true when training with limited amount of data. When the labeled data size is small, neural network tends to easily overfit on these mis-labeled data, which could hurt the model performance. That is one of the major motivations of our work. \\n \\nQ8. 
\u201cnaive question: why directly modeling \\eps as a function of z_a or (z_a, y), possibly parametrized by a neural network, a bad idea?\u201d \nTo check the feasibility of using a neural network to estimate the uncertainty (\\eps) as a function of the input, using our current objective, we conducted an experiment and results are shown below:\", \"approach_a\": \"The \\eps estimator was parameterized by a 2-layer MLP network with its input as \u201cz_a\u201d concatenated with all possible labels and output as a sigmoid function for \\eps for each label.\", \"approach_b\": \"The \\eps estimator was parameterized by a 2-layer MLP network with its input as \u201cz_a\u201d concatenated with the output of the \u201cclassifier\u201d (y-hidden) and output as a sigmoid for \\eps.\", \"comparisions_results\": \"\", \"dataset\": \"MNIST-digit (10 classes)\", \"corruption_rate\": \"50% (labels for 50% of each class were randomly corrupted)\nThe same hyper parameter settings (for UMN) were used for all experiments.\", \"error_rate\": \"UMN UMN-Approach-A UMN-Approach-B \n 5% 24.2% 27% \nWhile the idea of parameterizing \\eps using a neural network is interesting, we think it may not be applicable to UMN using the current objective function. It can be derived and shown with the current objective that the parameterized \\eps estimator would be influenced by the parameterized \u201cclassifier\u201d - reducing uncertainty for samples that the classifier is confident on and increasing uncertainty for less confident samples. To make it work, it would require an additional supervised term in addition to our current objective function to properly train the network for estimating \\eps, which could be explored in future works.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"\", \"question1\": \"Unfortunately, I do not seem to really understand the rationale behind the main novelty (even at\\u202fan\\u202fheuristic level) of the paper which is contained in Section 3.2.1, namely Equation (10) which describes the intensity of the label corruption. Why is it a sensible idea?\", \"answer1\": \"To clarify, our intent was to show that, under the condition of Equation (12), the Guider model (moving average) leads with its prediction and provides a more reliable indicators for the labels of the individual data sample. When the learner model encounters incorrect labels during training, it deviates from the true optimal and this uncertainty is reflected in eq (10), and eq (8) reduces the \\\"misleading\\\" gradient update due to the incorrect labels. Taken the other way around (i.e. when this misleading gradient is diminished), the Guider model accumulates a \\u201cbetter\\u201d gradient and subsequently provides a better label indicator for the learner. \\n \\nQ2. About\\u202f the notations of Equation (10)\\u202f \\nA2 f indicates the model prediction output. Thanks for pointing this out, we will further clarify this part in the revision. \\n \\nQ3. About\\u202f page 4 is difficult to digest.\\u202f\\u202f \\nA3 The details of how each term is derived from the ELBO are described in the Appendix. q is the variational distribution of the hidden variables which we have described in Appendix A.1. q is not fully described in the main text since we followed the convention in the literature of using q to refer to the posterior in the Variational Bayesian. We will provide more details in the main text for better readability. \\n In our work, this q is the distribution of the hidden variable, given the observables. We also describe the p and q distributions as - \\u201cwhere q\\u03c6(y|za), q\\u03c6(za|x), q\\u03c6(zb|za, y), p\\u03b8(za|zb, y), p\\u03b8(x|za) indicate the classifier, encoder, conditional encoder, conditional decoder and decoder functions respectively\\u201d. \\n \\nQ4. About\\u202fsecond line of Equation 14:\\u202f \\nA4 Thanks for pointing this out, the typo has been addressed in the revised version.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Review#1\\nQuestion1. About the comment from the reviewer \\u201cThe proposed method, Uncertainty Mining Net, combines the Mean Teacher method of\\u202f Tarvainen\\u202f and\\u202f Valpola \\u202fwith the M-VAE method of Langevin et al. to estimate the trust worthiness of each input-label pair and weigh its contribution to the loss.\\u201d\\u202f \\nAnswer1\\u202f \\nThanks the reviewer for the insightful comment. To clarify our contribution, we formulate a generative model similar to M-VAE to enable sample-wise label noise estimation. Compared to other approaches of approximating the class-wise noise distribution, UMN is able to estimate the sample-wise label. The sample-wise noise is estimated by labels\\u2019 uncertainty of comparing the learning model with a moving average soft label target (guider). This moving average approach is used to provide a guider of the ground truth label and by contrast, the mean teacher is used to enforce temporal consistency. This pipeline enables robust learning without any prior-knowledge of the corruption ration. At the same time, our model also provides the feedback of label uncertainty which can be used as one way of data cleaning. \\n\\nQuestion2\\u202f \\n\\u201cThe generative process defined in Equations 2 and 3 presents a model for input-dependent label noise, but the corruptions in the experiments are conditionally independent of the input given the true label. What is p(y_tilde\\u202f| y,\\u202fz_a) supposed to capture when the true noise model in the experiments follows p(y_tilde\\u202f| y)?\\u202f\\n\\u201d\\u202f \\nAnswer2\\u202f \\nWe thank the reviewer for raising this question. We hope the following answers can our answer the reviewer\\u2019s concern. To compare with previous approaches and performance evaluation, we set the class-wise corruption ratio. However, as discussed in the paper, UMN can provide the sample-wise corruption rate estimation which can be used to indicate the mislabeled probability for the given data sample. Therefore, our model is not limited to study the mis-labeled data with uniform noise distribution. Suppose that we have a domain expert and we let him/her examine the label. The domain expert will estimate the label correctness based on the sample --this is exactly the role of the moving average model. Finally, we would like to point out that, if we would like to estimate the class-wise noise distribution, we could also estimate the corruption ratio by summing the noise from each individual. \\nQuestion 3\\u202f \\n\\u201cIt seems like the proposed approach would have difficulty with non-uniform label noise, but there is no discussion on this. Adding discussion of this would be good.\\u201d \\nAnswer 3.\\u202f \\nSince UMN is able to deal with the sample-wise label noise, the method could handle the situation where the label noise is non-uniform distributions. For further understanding of UMN\\u2019s performance when dealing with the non-uniform label noise, we have conducted several experiments. The experimental setting and the results are listed as follows: \\nQuestion\\u202f4\\u202f \\n\\u201cWhat is the purpose of\\u202f z_b?\\u202f What is the purpose of z_b? It seems like a redundant variable in the generative process, since it is only used with y to sample z_a, and it seems like z_a could just be sampled from y.\\u201d\\u202f \\nAnswer\\u202f4\\u202f \\nz_b is the output of the encoder module for a given conditional variation auto-encoder. 
We adopt the idea from Kingma et al., 2014. As indicated there, incorporating z_b into the generative process can help improve the performance of semi-supervised learning. \nIn our framework, z_a is used for encoding all the data regardless of their labels, while z_b is used for the generative process conditioned on the correctly estimated label. The reviewer proposed to use z_b directly -- this is an interesting direction that warrants exploration; based on our current observations, however, integrating z_a with z_b works better in UMN. \n\n\nQuestion 5 \nThe reviewer is wondering: \u201cDid you run experiments with the consistency loss?\u201d \n\nAnswer 5 \nBased on your feedback, we conducted an experiment to evaluate the performance with the consistency loss term included.\", \"comparison_results\": \"\", \"dataset\": \"MNIST-digit (10 classes)\", \"corruption_rate\": \"50% (labels for 50% of each class were randomly corrupted)\nThe same hyperparameter settings (for UMN) were used for all experiments.\", \"error_rate\": \"UMN UMN-Consistency-loss \n 5% 35% \nThe optional consistency loss was not used in our UMN framework. During the development stage we experimented with it, but found that it did not help improve the model performance. The consistency loss would pull the Learner towards the Guider, thus diminishing their differences (and thereby making the label-uncertainty estimate unreliable). We did not anticipate that our \u201cGuider\u201d would act as a \u201cMean Teacher\u201d (pun intended). \n\n\nQuestion 6 \nThe reviewer is commenting on the grammar errors and typos. \nAnswer 6 \nThanks for pointing this out. All the typos have been addressed in the latest version.\"}",
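For reference, a minimal sketch of the moving-average Guider discussed in the response above: an exponential moving average of the Learner's parameters, in the spirit of Mean Teacher but used here as a label indicator rather than a consistency target. PyTorch is assumed and the decay value is an illustrative choice:

    import torch

    @torch.no_grad()
    def update_guider(guider, learner, decay=0.99):
        # The Guider tracks an exponential moving average of the Learner's
        # parameters and is never updated by gradients itself.
        for g, l in zip(guider.parameters(), learner.parameters()):
            g.mul_(decay).add_(l, alpha=1.0 - decay)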
"{\"title\": \"Response to Review #3\", \"comment\": \"For Review #3.\", \"question_1\": \"It would be better if the authors can further provide the error rate changes under different number of labeled training data. Such analysis would provide suggestions regarding when it is necessary to implement UMN.\", \"answer_1\": \"Thanks for the insightful comment. Following your suggestions, we conduct several experiments accordingly to evaluate the sensitivity of UMN to different number of labeled training data. \\nWe setup three different experiments where the training set includes 200, 300 and 500 labeled data samples respectively, the rest of the data is used as the unlabeled data for the semi-supervised learning. For comparison, we apply the same experimental setting to one of the latest supervised robust learning works \\u2013MentorNet.\\u202f\\u202f \\nThe followings are the error rates for different experimental settings. There is no pre-training in the experiments. \\n\\nSVHN dataset,\\u202f \\nFor 20% of corruption ratio:\\u202f \\n\\n\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f MentorNet\\u202f.\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f UMN.\\u202f \\n100 samples/class:\\u202f\\u202f 30.5% 23.9% \\n200 samples/class:\\u202f 27.9% 18.1% \\n300 samples/class:\\u202f 25.1% 16.3% \\n1000 samples/class: 18.3% 14.8% \\n2000 samples/class: 11.2% 13.2% \\nFrom the results, we can see UMN still works with larger label training data and even achieve better performance compared to MentorNet under most situations. This suggests that the unlabeled data is important for assisting training with noisy labeled data which justify the purpose of our work. We thank again the reviewer for this import piece of advice. \\n\\n \\n\\nQuestion 2.\\u202fIn the real world, the data within the same class might be labeled incorrectly more than correctly. It would be better to investigate if UMN is able to identify such situation and correct the labels accordingly.\", \"answer2\": \"To investigate the performance with larger corruption rates, we have evaluated the model performance on MNIST/SVHN with corruption rate from 60% to 90%. The performance is listed as follows. From the results, we can tell UMN still can achieve reasonable performance under the situations when there are more mis-labeled data than the correctly labeled data. \\n\\n\\u202f \\n\\n\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202fMentorNet,\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f UMN,\\u202f\\u202f\\u202f\\u202f\\u202f \\n60%\\u202f\\u202f\\u202f\\u202f\\u202f\\u202f 67.1% 49.1% (another 50 epochs to run) \\n70%:\\u202f 70.0% 53.1% \\n80%:\\u202f 71.2% 54.5% (another 10 epochs to run) \\n90%:\\u202f --- ---- \\nThanks for these valuable comments, we have included this ablation study in the paper and we will try to add these additional ablation study results to the paper.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors proposed a novel framework, Uncertainty Mining Net (UMN), to address the problem of learning on limited labeled data with label noise. First, UMN applied the unsupervised variational autoencoder to learn a more representative latent feature representation by involving large amounts of unlabeled data. Second, in the training process, ELBO integrates information from all the training data and sample-wise uncertainty estimated from the predictions of the updating model and its exponential moving average is incorporated to enable the learning process to focus on the data with reliable labels. Experimental results show that UMN outpuroms several state-of-the-art methods on multiple benchmark datasets.\\n\\nUMN will be helpful for a lot of real-world products since annotation is expensive and annotation quality is a regular concern. However, it is subjectively uncertain to define whether it is a large or small dataset without considering application context and model complexity. It would be better if the authors can further provide the error rate changes under different number of labeled training data. Such analysis would provide suggestions regarding when it is necessary to implement UMN. On the other hand, it would be nice to explore how tolerant/sensitive the UMN is to the corruption rate to every single class, especially for a multi-classification problem. In the real world, the data within the same class might be labeled incorrectly more than correctly. It would be better to investigate if UMN is able to identify such situation and correct the labels accordingly.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper describes a method for learning in semi-supervised settings with label noise. This is an interesting topic with a relatively scarce literature. The proposed method works by first postulating a generative model for the labelled/unlabeled/label-corrupted data. The model is then fitted using a standard variational lower bound maximization.\\n\\nThe approach is elegant, and appears to work empirically well. Unfortunately, I do not seem to really understand the rationale behind the main novelty (even at an heuristic level) of the paper which is contained in Section 3.2.1, namely Equation (10) which describes the intensity of the label corruption. Why is it a sensible idea? There is only 3 lines of comments after equation 10, which seems a bit short since this is the crux of the paper. Also, the notations of equation (10) are very confusing (to me) since the function f(.) was used before to denote another quantity.\", \"minor_remarks\": \"=============\\n\\n1. page4 was a bit difficult to digest at first reading since the author did not describe in the main text he form (i.e. factorization structure) of the variational distribution q.\\n\\n2. second line of Equation 14 seems to be wrong, while the 1st and last line seems correct -- the authors may want to double check\\n\\n3. the term \\\"labeled loss term\\\" was not properly defined, which makes the reading a bit difficult even if one eventually gets what the authors mean\\n\\n4. In equation (6) I do not understand why f(eps) = log[(C-1)(1-eps)/eps]. Why isnt it equal to log[p(hat(y)|y)] ?\\n\\n5. notations z_a, z_1, etc.. do not seem to be consistent throughout the text\\n\\n6. it is quite surprising to me that, in the low data-corruption regime, the method appears to be competitive/better with consistency-based method such as the MT approach.\\n\\n7. naive question: why directly modeling \\\\eps as a function of z_a or (z_a, y), possibly parametrized by a neural network, a bad idea?\\n\\nIn conclusion, it feels like the method has a lot of potential (in views of the numerical simulations) -- if the authors could clarify the exposition, this could be a very good contribution to the field. The method appears to be conceptually simple (i.e. postulate a generative model + fit it by maximizing the ELBO + one trick to estimate \\\\eps), which is a good thing -- what is missing, I think, is a real discussion of why the proposed manner to estimate \\\\epsilon is sensible.\\n\\nEdit after reading [1] \\n======================\\nThe proposed generative model is same -- the authors should make this very clear in the paper. Although is is acknowledged: \\\"In this work, we implement the idea in (Langevin et al., 2018)\\\" -- this is only in section \\\"4.1 IMPLEMENTATION DETAILS\\\". After reading (1), it is clear that the novelty of the paper us much less than what I had initially thought. The only difference is in Section 3.2.1, and this Section is far from being satisfying.\\n\\n\\n[1] \\\"A Deep Generative Model for Semi-Supervised Classification with Noisy Labels\\\", Langevin & al\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Update after rebuttal:\\nThe rebuttal addressed a few of my concerns, but there is still a major issue. Namely, UMN is claimed to work on sample-wise label noise, but there are no experiments to support this (note that this is different from non-uniform class-dependent label noise). As fixing this would require large modifications to the paper, I am keeping my score at weak reject.\\n\\n----------------------------------------------------------------------------------\", \"summary\": \"This paper presents a method for training classifiers in the setting of semi-supervised learning with noisy labels. The proposed method, Uncertainty Mining Net, combines the Mean Teacher method of Tarvainen and Valpola with the M-VAE method of Langevin et al. to estimate the trustworthiness of each input-label pair and weigh its contribution to the loss. This is not an obvious use of the Mean Teacher method, and it seems like a nice idea.\\n\\nThe results are good when controlling for architecture, and the setting is important and underexplored, but there are several concerns I have with the paper in its current form and some parts I would like clarified. At present, the paper is borderline, and I will raise my score if these are addressed.\", \"major_points\": \"The generative process defined in equations 2 and 3 presents a model for input-dependent label noise, but the corruptions in the experiments are conditionally independent of the input given the true label. What is p(y_tilde | y, z_a) supposed to capture when the true noise model in the experiments follows p(y_tilde | y)?\\n\\nIt seems like the proposed approach would have difficulty with non-uniform label noise, but there is no discussion on this. Adding discussion of this would be good.\\n\\nWhat is the purpose of z_b? It seems like a redundant variable in the generative process, since it is only used with y to sample z_a, and it seems like z_a could just be sampled from y.\\n\\nThe caption in Figure 1 says, \\u201cFor simplicity, we omit the optional consistency loss term between the classifiers of \\u03b8 and \\u03b8 0 for unlabeled data in the figure\\u201d, but this is never mentioned again. I would find it interesting to see the combined effect of the Mean Teacher consistency loss with the VAE reconstruction loss, since they are distinct approaches to semi-supervised learning. Did you run experiments with the consistency loss?\", \"minor_points\": \"The writing is full of grammatical errors and typos, including\\n\\n\\u201cwe experiment a CNN architecture with 13 convolutional neural network as well as a ResNet-101 architecture\\u201d\\n\\n\\u201cthe training of UMN also converge faster\\u201d\\n\\n\\u201cFor the details of the proof, please refer to the appendix section.\\u201d\\n(This quote is from the appendix!)\\n\\n\\u201ceign decomposition\\u201d\\n\\n\\u201cfor solving learning model\\u201d\"}"
]
} |
rkg1ngrFPr | Information Geometry of Orthogonal Initializations and Training | [
"Piotr Aleksander Sokół",
"Il Memming Park"
] | Recently, mean field theory has been successfully used to analyze properties
of wide, random neural networks. It gave rise to a prescriptive theory for
initializing feed-forward neural networks with orthogonal weights, which
ensures that both the forward propagated activations and the backpropagated
gradients are near \(\ell_2\) isometries and as a consequence training is
orders of magnitude faster. Despite strong empirical performance, the
mechanisms by which critical initializations confer an advantage in the
optimization of deep neural networks are poorly understood. Here we show a
novel connection between the maximum curvature of the optimization landscape
(gradient smoothness) as measured by the Fisher information matrix (FIM) and
the spectral radius of the input-output Jacobian, which partially explains
why more isometric networks can train much faster. Furthermore, given that
orthogonal weights are necessary to ensure that gradient norms are
approximately preserved at initialization, we experimentally investigate the
benefits of maintaining orthogonality throughout training, and we conclude
that manifold optimization of weights performs well regardless of the
smoothness of the gradients. Moreover, we observe a surprising yet robust
behavior of highly isometric initializations --- even though such networks
have a lower FIM condition number \emph{at initialization}, and therefore by
analogy to convex functions should be easier to optimize, experimentally
they prove to be much harder to train with stochastic gradient descent. We
conjecture the FIM condition number plays a non-trivial role in the optimization. | [
"Fisher",
"mean-field",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=rkg1ngrFPr | https://openreview.net/forum?id=rkg1ngrFPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"0mdh_C8Rn",
"rylzvaO2oB",
"Hkgcx64hoH",
"HkgLkpNnsH",
"B1e_934noS",
"rkl0v34nor",
"B1ePSnNhjr",
"rygBQhV2sS",
"rJeBe342sr",
"SygON0qAYB",
"rylq7Ph2FS",
"ryxt7P3nYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798751106,
1573846361704,
1573829874302,
1573829854450,
1573829776077,
1573829733624,
1573829694715,
1573829660915,
1573829612692,
1571888688065,
1571764001775,
1571764001145
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2521/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2521/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2521/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2521/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"I've gone over this paper carefully and think it's above the bar for ICLR.\\n\\nThe paper proves a relationship between the eigenvalues of the Fisher information matrix and the singular values of the network Jacobian. The main step is bounding the eigenvalues of the full Fisher matrix in terms of the eigenvalues and singular values of individual blocks using Gersgorin disks. The analysis seems correct and (to the best of my knowledge) novel, and relationships between the Jacobian and FIM are interesting insofar as they give different ways of looking at linearized approximations. The Gersgorin disk analysis seems like it may give loose bounds, but the analysis still matches up well with the experiments.\\n\\nThe paper is not quite as strong when it comes to relating the anslysis to optimization. The maximum eigenvalue of the FIM by itself doesn't tell us much about the difficulty of optimization. E.g., if the top FIM eigenvalue is increased, but the distance the weights need to travel is proportionately decreased (as seems plausible when the Jacobian scale is changed), then one could make just as fast progress with a smaller learning rate. So in this light, it's not too surprising that the analysis fails to capture the optimization dynamics once the learning rates are tuned. But despite this limitation, the contribution still seems worthwhile.\\n\\nThe writing can still be improved.\\n\\nThe claim about stability of the linearization explaining the training dynamics appears fairly speculative, and not closely related to the analysis and experiments. I recommend removing it, or at least removing it from the abstract.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to authors\", \"comment\": \"The authors address my questions. I recommend the publication of this paper. I revise the rating from 6 to 8.\"}",
"{\"title\": \"references\", \"comment\": \"[1]R. Karakida, S. Akaho, and S. Amari, \\u201cUniversal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach,\\u201d in The 22nd International Conference on Artificial Intelligence and Statistics, 2019, pp. 1032\\u20131041.\\n[2]A. Jacot, F. Gabriel, and C. Hongler, \\u201cNeural Tangent Kernel: Convergence and Generalization in Neural Networks,\\u201d in Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 8571\\u20138580.\\n[3]J. Lee, L. Xiao, S. S. Schoenholz, Y. Bahri, J. Sohl-Dickstein, and J. Pennington, \\u201cWide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent,\\u201d p. 18.\\n[4]S. Hayou, A. Doucet, and J. Rousseau, \\u201cOn the Impact of the Activation function on Deep Neural Networks Training,\\u201d in International Conference on Machine Learning, 2019, pp. 2672\\u20132680.\"}",
"{\"title\": \"RE: Reviewer 3\", \"comment\": \"We thank the reviewer for their thoughtful comments.\\n\\nWe agree that understanding the smallest eigenvalue of the Fisher information matrix (FIM) would be invaluable. We are however unfamiliar with any ways of directly lower-bounding that quantity. Under somewhat specific conditions Karakida et al [1] that as $\\\\lambda_{\\\\max}(G) \\\\to \\\\infty$ then $\\\\lambda_{\\\\min} \\\\to 0$. This result hinges on the assumptions that weight matrices are distributed as $ W_{i,j} \\\\sim \\\\mathcal{N} (0, \\\\sigma^2_{l}/ M_{l-1})$ where M_{l-1} is the number of neurons in layer $l-1$ then the mean eigevalue is $|G|_{Fro}$ and the mean eigenvalue is $ \\\\frac{1}{P} |G|_{Fro}$ where is the total number of parameters. Importantly for Gaussian weights the maximum eigenvalue grows as $\\\\mathcal{O}(M)$ but the mean eigenvalue is proportional to $\\\\mathcal{O}(1/M)$ and a fortiori the minimal eigenvalue must decrease at least as fast.\\nWe agree that an upfront discussion of this special case would give the reader a better understanding of the context and potential implications. In a similar vein, we believe that adding a disclaimer that estimating the condition number is generally numerically unstable.\\n\\nWe whole-heartedly agree that the some of the conditions necessary for the NTK may not hold in our experiments, and therefore ammended the manuscript appropriately. The authors in [2,3] consider widths of up to 10,000, compared to ours 400 neuron networks. However, [4] consider networks of similar width to ours.\\nMoreover, it has been pointed out that gradient flow, like any differential equation admits a convergent forward Euler discretization, provided the norm of the eigenvalues of the one-step update obey $ |1 - lambda| \\\\le 1 $. We did not check that these conditions are met; we automatically chose learning rates that produced the highest validation accuracy after 50 epochs. We show the learning rates at the bottom of this response, and will release the entire entire set of hyperparameters as JSON files in the github repo. 
Moreover, in the amended version of the document we will show that our conjectured explanation holds with the above-mentioned qualifications.\n\nExample learning rates for CIFAR-10 and h0=0.0009236716627770724 using ADAM\n\n \"config\" : {\n \"h0\" : 0.0009236716627770724, \n \"manifold\" : \"stiefel\", \n \"learning_rate_euclidean\" : 0.00009405111292996846, \n \"learning_rate_manifold\" : 0.000016915446998604953, \n \"learning_rate_scale\" : 0.00005043477108079471,\n \"omega\" : NumberInt(0), \n \"weight_decay\" : NumberInt(0)\n }\n \n \"config\" : {\n \"h0\" : 0.0009236716627770724,\n \"manifold\" : \"euclidean\", \n \"learning_rate_euclidean\" : 0.00001434368393061965, \n \"learning_rate_manifold\" : 0.0, \n \"learning_rate_scale\" : 0.0, \n \"omega\" : NumberInt(0), \n \"weight_decay\" : NumberInt(0)\n }\n\n \"config\" : {\n \"h0\" : 0.0009236716627770724, \n \"manifold\" : \"oblique\", \n \"learning_rate_euclidean\" : 0.000021724540834816572, \n \"learning_rate_manifold\" : 0.00000997854628342336, \n \"learning_rate_scale\" : 0.00007832919428972671, \n \"omega\" : 0.0007680321077268013, \n \"weight_decay\" : NumberInt(0)\n }\nLearning rates for CIFAR-10 and h0=1/64 using ADAM\n \"config\" : {\n \"h0\" : 0.015625, \n \"learning_rate_euclidean\" : 0.000008289977483912705, \n \"learning_rate_manifold\" : 0.000026997346360225043, \n \"learning_rate_scale\" : 0.00007146727754603447, \n \"manifold\" : \"stiefel\", \n \"omega\" : NumberInt(0)\n }\n\n \"config\" : {\n \"h0\" : 0.015625, \n \"learning_rate_euclidean\" : 0.000023036386339653812, \n \"learning_rate_manifold\" : 0.0, \n \"learning_rate_scale\" : 0.0, \n \"manifold\" : \"euclidean\", \n \"omega\" : NumberInt(0)\n }\n\n \"config\" : {\n \"h0\" : 0.015625, \n \"learning_rate_euclidean\" : 0.00006560909241433484, \n \"learning_rate_manifold\" : 0.000020133287475822923, \n \"learning_rate_scale\" : 0.00008723072785306676, \n \"manifold\" : \"oblique\", \n \"omega\" : 0.00026312093948723203, \n \"weight_decay\" : NumberInt(0)\n }\"}",
"{\"title\": \"RE reviewer 1\", \"comment\": \"We thank the reviewer for their careful reading and insightful remarks.\\n\\n1. We thank the reviewer for pointing out the typos. Upon reflection, we believe that the theorem would be best presented shortly in the main body, with a longer, more rigorous and easier to follow derivation in the appendix. This will both improve 'flow' of the paper and allow for a more rigorous exposition of our results.\\n\\n2. The major technical difference between our submission and Karakida's work [1] is that we do not make any assumptions on the distribution from which the weight matrices are sampled. \\n\\nMoreover, the intent behind our bound was different from that of Karakida et al. We set out to relate the conditioning of the input-output Jacobian to the Fisher information matrix curvature, while the authors of [1] strove to find an analytical expression for the maximum and mean eigenvalues of the Fisher information matrix under Gaussian weight assumptions. Our results allows us to recover Karakida's Frobenius norm bound (Eq. 16 in [1]) by using traces of random Gaussian matrices to bound the maximal singular of each layer.\\n\\nApart from this, we believe that the use of the block Gershgorin theorem might facilitate the analysis of optimization techniques such as K-FAC[2], where a block tri-diagonal approximation of the FIM is used.\\n\\n3. Indeed, the Riemannian geometry induced by the $W^2$ distance is highly relevant. We plan to explore it future work, and for the moment we will add reference to Wasserstein information geometry in the background section as alternatives to the Fisher-Rao metric.\\n\\n[1]R. Karakida, S. Akaho, and S. Amari, \\u201cUniversal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach,\\u201d in The 22nd International Conference on Artificial Intelligence and Statistics, 2019, pp. 1032\\u20131041.\\n[2]J. Martens and R. Grosse, \\u201cOptimizing Neural Networks with Kronecker-factored Approximate Curvature,\\u201d in International Conference on Machine Learning, 2015, pp. 2408\\u20132417.\"}",
"{\"title\": \"re4\", \"comment\": \"[1]J. Pennington, S. Schoenholz, and S. Ganguli, \\u201cResurrecting the sigmoid in deep learning through dynamical isometry: theory and practice,\\u201d in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 4785\\u20134795.\\n[2]S. Hayou, A. Doucet, and J. Rousseau, \\u201cOn the Impact of the Activation function on Deep Neural Networks Training,\\u201d in International Conference on Machine Learning, 2019, pp. 2672\\u20132680.\"}",
"{\"title\": \"re 3\", \"comment\": \"Average training cross-entropy over time\\n 0.0024 +--------+---------+--------+---------+--------+---------+--------+\\n * Stiefel h0 = 9e-4 ****** |\\n 0.0022 * +\\n * |\\n 0.002 *** +\\n **** |\\n 0.0018 +**** +\\n | ***** |\\n 0.0016 + ***** +\\n 0.0014 + ***** +\\n | ****** |\\n 0.0012 + ******** +\\n | ********** |\\n 0.001 + ************** * +\\n | * ***************** |\\n 0.0008 + **************************** * * * +\\n | ******************************** |\\n 0.0006 + ********************* +\\n | |\\n 0.0004 +--------+---------+--------+---------+--------+---------+--------+\\n 0 50 100 150 200 250 300 350\\n Time [minutes]\\n\\n 0.0024 +------------+------------+-------------+------------+------------+\\n * Oblique h0 = 9e-4 ****** |\\n 0.0022 * +\\n * |\\n 0.002 * +\\n ** |\\n 0.0018 *** +\\n |**** |\\n 0.0016 + **** +\\n 0.0014 + **** +\\n | ***** |\\n 0.0012 + ******* +\\n | ******* |\\n 0.001 + ****** +\\n | ******** |\\n 0.0008 + *********** +\\n | ******************* * |\\n 0.0006 + ************************************** +\\n | * ***************************** |\\n 0.0004 +------------+------------+-------------+--****-***********-------+\\n 0 50 100 150 200 250\\n Time [minutes]\"}",
"{\"title\": \"re 2\", \"comment\": \"We respectfully disagree in that the proof of the main theorem is missing - it is gradually built up throughout Lemmas 1-3 and the Proposition 1 and Assumption 2.\\nHowever, we do accede that the proof ought to be more accessible, and to that end we propose to give a short statement in the main body, and relegate most of the details\\nto the appendix, where it will be presented in more detail.\\n\\nBelow we present figures showing representative the wall-clock times for examples of each network type. From these example figures, one can see that orthogonally constrained (Stiefel) optimization takes about twice as long as the stochastic optimizer without any manifold constraints on the weight matrices. Nota bene, our implementation heavily relies on a multicore CPU architecture to offload the matrix factorization. From our experiments, the GPU-only implementation is approximately 4 times as slow as the unconstrained descent method. All the numerical experiments were performed on a Tesla P-100 with a Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz and 128GB RAM.\\n\\nRegarding comparisons to highly performant models, we do not consider the networks analyzed in our submission to be practical, but rather consider them as tools to aide the understanding of training very deep and wide networks. The time complexity of a gradient update for each Stiefel layer is O(m^3) where m is the number of neurons, for the Oblique layers that complexity is O(m^2); which hampers the application of manifold constrained optimization in very wide architectures that have recently been proposed.In our theoretical experiments, we focus on multi-layer perceptrons; and while they do not attain competitive performance with convolutional neural networks, they are far more amenable to theoretical analysis. This has been a dominant approach in works like the Neural Tangent Kernel[3], and the random matrix analyses [1,4,5]. We diligently recreated previously reported performance with similar architecture, viz [1] and [2], with a Bayesian hyperparameter search algorithm. \\n\\nThe claims relating the Neural Tangent kernel were intentionally put in the appendix. We do not claim to make novel contributions, but rather report empirical results that can be explained by using recently obtained bounds on the stability of linearizing wide neural networks around their initial parameters. We do however argue that the interpretation of those results is novel, since the focus in literature thus far has been on the stability of the NTK. Here we observe experimentally that networks with a \\\"nearly isotropic\\\" parameter space often train poorly, which goes against intuition from convex optimization. We connect this to the previously known results.\\n\\nThe training curves for both CIFAR 10 and SVHN are the left hand panels in Figure 1, and they illustrate that the manifold constrained networks \\\"reduce training loss faster\\\" but do not benefit from an equivalent increase in test accuracy. We apologize for the confusing labelling of the figures, we will clearly state that the left-hand panels are the per sample cross-entropies during training.\\n\\nLastly, we apologize for the imprecise phrasing -- we agree that the Neural Tangent Kernel only coincides with the FIM for Gaussian likelihoods and l^2 losses. We will make it clearly that this is a qualified equivalence between the two matrices.\"}",
"{\"title\": \"Re:Reviewer 2\", \"comment\": \"We would like to thank the reviewer for their careful reading of our submission, and their insightful comments.\\n\\nWe would like to stress our main theoretical contribution, which provides a bound on the largest eigenvalues of the Fisher information matrix (FIM).\\nThis result, generalizes known bound in two ways, it doesn't assume normally distributed weight matrices; it also shows that ensuring that the input-output Jacobian is well conditioned implies that the FIM has a relatively small maximal eigenvalue.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper analyses the training behavior of wide networks and argues orthogonal initialization helps the training. They suggest projections to the manifold of orthogonal weights during training and provide analysis. Their main result seems to be a bound on the eigen-values of the Fisher information matrix for wide networks (Theorem on pg 6). In their experiments they train Stiefel and Oblique networks as examples of manifold constrained networks and claim they converge faster than unconstrained networks.\", \"cons\": [\"Page 6, the main theorem of the paper, Theorem (bound on the fisher) doesn\\u2019t have a proof.\", \"Fig 1. What\\u2019s the overhead wall-clock time of manifold constraint?, on cifar10 the two manifold don\\u2019t have the same rate. Euclidean on cifar10 has higher test accuracy. Test accuracy after 200 epochs is below 90 and below 60.\", \"There are claims in the paper for providing explanations by making connections to Neural Tangent Kernel but it is mentioned only in the discussion section and they reiterate previously known results.\", \"Fig 3: is the training plot for cifar10 in this figure the one in figure1? Where is the training curves for svhn? Where should we see the rate of reduction in training loss for these methods?\", \"Section B.4: To show that FIM and NTK have the same spectrum you need \\\\nabla^2 L to be identity which is only true for L2 loss function. This does not apply to other loss functions such as cross-entropy.\"], \"after_rebuttal\": \"I raise my rating to weak accept. The writing has improved a lot and most of my concerns are addressed. It would be nice if authors could incorporate the timing plots in the appendix.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper formulates a connection between the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian in neural networks. This results derive the eigenvalues' bound to theoretically study the convergence of several networks. Here the upper bound further improves the upper bound of FIM derived in (Karakida et al., 2018).\\nThis is a very interesting and useful direction of applying information matrices to study the initialization of deep networks. \\n\\nI suggest the weak acceptance of the paper. After addressing the following remarks, I can adjust my reviews. \\n\\n1. There are some typos, such as see[?] in page 7, the main theorem on page 6 should be written mathematically with a remark. \\n\\n2. What is the major technical difference between this paper and Karakida et al., 2018? \\n\\n3. Here the model is given by conditional probability is defined by a neural network. \\nThe author may also be interested in implicit models, such as normalization flows and generative networks. \\nIn this case, the Wasserstein information matrix, (Hessian of Wasserstein-2 loss), may be very suitable to be studied.\", \"see\": \"\\\"A. Lin, W.Li, S.Osher, G. Montufar, Wasserstein proximal of GANs, 2018.\\\"\\n\\n\\\"W.Li, G. Montufar, Natural gradient via optimal transport, 2018.\\\"\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper analyzes the connection between the spectrum of the layer-to-layer or input-output Jacobian matrices and the spectrum of the Fisher information matrix / Neural Tangent Kernel. By bounding the maximum eigenvalue of the Fisher in terms of the maximum squared singular value of the input-output Jacobian, this paper provides a partial explanation for the successful initialization procedures obeying \\\"dynamical isometry\\\". By additionally investigating optimization on the orthogonal weight manifold, this paper sheds light on the important of maintaining spectral uniformity throughout training. These two analyses help fill in important gaps in the understanding of initialization, dynamical isometry, and the training of deep neural networks. For these reasons I recommend this paper for acceptance.\\n\\nThere are two aspects of the paper that could nevertheless use some clarification and improvement. First, unless I missed something, this paper does not provide any bounds on the condition number or the minimum eigenvalue of the Fisher or the NTK. It seems like the main arguments only depend on the maximum eigenvalues. Generally, I think the insights into the maximum eigenvalues are useful and important on their own, but perhaps some additional discussion up front clarifying which results were derived theoretically and which were observed empirically could be useful.\\n\\nSecond, it should be noted that the the networks trained in the experiments are likely in a regime that is well outside the NTK regime, in two important ways: the dataset is large compared to the width and the optimal learning rates may be large as well.\\n\\nOverall, I think this is a good paper that adds important insights into the study of initialization, local geometry, and their effects on training speed.\"}"
]
} |
Hkg0olStDr | Multi-Step Decentralized Domain Adaptation | [
"Akhil Mathur",
"Shaoduo Gan",
"Anton Isopoussu",
"Fahim Kawsar",
"Nadia Berthouze",
"Nicholas D. Lane"
] | Despite the recent breakthroughs in unsupervised domain adaptation (uDA), no prior work has studied the challenges of applying these methods in practical machine learning scenarios. In this paper, we highlight two significant bottlenecks for uDA, namely excessive centralization and poor support for distributed domain datasets. Our proposed framework, MDDA, is powered by a novel collaborator selection algorithm and an effective distributed adversarial training method, and allows uDA methods to work in a decentralized and privacy-preserving way.
| [
"domain adaptation",
"decentralization"
] | Reject | https://openreview.net/pdf?id=Hkg0olStDr | https://openreview.net/forum?id=Hkg0olStDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5ic6a2vjtv",
"HJx6LU_nor",
"Hkg7iGi9iS",
"ByefrXPFiS",
"H1gdhxwFjr",
"BylUbevFiB",
"S1lSfA8Fsr",
"Bkl3MaIFsr",
"rylZ6nz59B",
"rkgK7eQYcH",
"SyehsyHxKr",
"ByxBx9sFOS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798751074,
1573844564856,
1573724827342,
1573643065804,
1573642416237,
1573642237994,
1573641740897,
1573641491952,
1572641977200,
1572577312751,
1570946980164,
1570515436835
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2520/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2520/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2520/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2520/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a solution to the decentralized privacy preserving domain adaptation problem. In other words, how to adapt to a target domain without explicit data access to other existing domains. In this scenario the authors propose MDDA which consists of both a collaborator selection algorithm based on minimal Wasserstein distance as well as a technique for adapting through sharing discriminator gradients across domains.\\n\\nThe reviewers has split scores for this work with two recommending weak accept and two recommending weak reject. However, both reviewers who recommended weak accept explicitly mentioned that their recommendation was borderline (an option not available for ICLR 2020). The main issues raised by the reviewers was lack of algorithmic novelty and lack of comparison to prior privacy preserving work. The authors agreed that their goal was not to introduce a new domain adaptation algorithm, but rather to propose a generic solution to extend existing algorithms to the case of privacy preserving and decentralized DA. The authors also provided extensive revisions in response to the reviewers comments. Though the reviewers were convinced on some points (like privacy preserving arguments), there still remained key outstanding issues that were significant enough to cause the reviewers not to update their recommendations. \\n\\nTherefore, this paper is not recommended for acceptance in its current form. We encourage the authors to build off the revisions completed during the rebuttal phase and any outstanding comments from the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes\", \"comment\": \"We thank the reviewers for finding our problem setting as sensible, well-motivated, and practical [R2, R3, R4]; our proposed solution as interesting [R1], novel [R2], technically solid [R3], theoretically sound and a valuable contribution to the DA literature [R4], and for considering our experimental analysis as extensive and detailed [R1]. We also appreciate the feedback and constructive critique from all the reviewers, which has helped us in significantly improving our paper.\", \"below_is_a_summary_of_the_major_changes_we_have_made_in_the_paper\": [\"***Based on the feedback from R1***\", \"We have substantially improved the description of our distributed training mechanism in Section 3.2.\", \"We have described the adversarial learning formulations used in our experiments in the Appendix.\", \"We conducted a larger literature survey and extended the related work section.\", \"We have added a new domain adaptation technique in Table 2 to demonstrate that our approach can work with the latest DA techniques. This specific baseline called CADA is from a work published at AAAI 2019 and the authors show that it outperforms many recent uDA techniques.\", \"***Based on the feedback from R2***\", \"In Section 4, we have improved the writing to ensure that the objective of various evaluations done in the paper is clear.\", \"We have added the limitations of our techniques while describing them, as well as summarized them in a separate section (Limitations and Conclusion).\", \"***Based on the feedback from R3***\", \"We have improved the description of our distributed training algorithm in Section 3.2 and also added a figure of our training architecture in the Appendix.\", \"We have added prior works on decentralized and distributed training in the Related Work and highlighted that our contribution lies in enabling adversarial domain adaptation to work in a distributed manner.\", \"***Based on the feedback from R4***\", \"We added a detailed discussion about the scope of privacy guarantees of our approach in Section 3.\", \"We have substantially enhanced the related work section by adding literature on decentralized training and privacy in ML.\", \"We have clarified the goals of our evaluation in Section 4.2 and clearly explained the metrics used to evaluate our approach. - We have also updated the training time in Table 3 and explained it in the text.\", \"We have acknowledged the limitations of our approach while introducing them, as well as briefly in Section 5.\"]}",
"{\"title\": \"On Decentralization and Distributed Training\", \"comment\": \"With regards to the term \\u2018decentralization\\u2019 used in the paper, we would like to clarify that \\u2018decentralized domain adaptation\\u2019 means that target domains can adapt from not only the source domain, but also other target domains, which eliminates the strict dependency on the source domain (which can be seen as a \\u2018central node\\u2019). This is different from the well-known \\u2018decentralized training\\u2019 of machine learning models where the idea is that the information from all distributed workers won\\u2019t be aggregated globally by a central node (like parameter server) or message passing process (like All-Reduce). We understand that the different definition of \\u2018decentralization\\u2019 used in the paper might confuse a reader, as such, we have added more clarity about it in the text.\\n\\nTo your second question about the metrics used to evaluate our distributed training strategy, we note that our evaluation is done through the lens of domain adaptation as opposed to a general-purpose evaluation of a decentralized training algorithm. In that aspect, our evaluation has two objectives:\\n\\na) After splitting the discriminator into two replicas (source and target discriminator) and applying lazy-synchronization, can we still achieve similar convergence rate and target domain accuracy as the case where the source and target discriminators reside on the same node (the convention, non-distributed DA case). Figure 2 and Table 3, in fact, confirm that even with a distributed adversarial training mechanism, we can reach similar levels of target domain accuracy as the non-distributed case.\\n\\nb) what is the total time taken for domain adaptation training by our lazy-synchronization approach which exchanges the gradients across nodes for every p batches, when compared to fully-synchronized distributed training (which exchange gradients for each batch) and the non-distributed case (which is conventional in DA literature)? The total training time is the sum of local computation time and communication time across nodes. In the non-distributed case, there is no communication at all during the training, so the training time is just the computation time. But it requires the data from all domains on one node. In the fully synchronized case, the amount of communication equals to the size of discriminator gradients multiplied by the number of training steps, whereas in the lazy-synchronized case, the communication amount is one *p*th of the amount of fully synchronized case, where p is the sync-up step. Based on the amount of data communicated across nodes, we estimate the communication time by assuming a bandwidth of 40Mbps [1] which is roughly the global average upload speed. The computation time is measured for each training step and aggregated to get the total computation time. By adding the total computation time with the total communication time, we obtain the total training time, which we report in Table 3 as one of metrics to evaluate our distributed method. \\n\\nWe acknowledge that this description was not adequately provided in the first draft, and based on your feedback, we have now incorporated it in the paper. \\n\\nWe would also like to highlight that our approach of estimating training time in a distributed setting is not ideal as it does not take into account the effect of real-world factors such as network setup latency, network congestion etc. 
However, it does provide a reasonable estimate for comparing different domain adaptation training approaches, which can be further validated through a real-world deployment of domain adaptation algorithms on user devices in future work.\n\n[1] https://www.speedtest.net/global-index\"}",
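A small sketch of the training-time estimate described in the response above; the step count, per-step compute time, and gradient size below are hypothetical placeholders, while the 40 Mbps bandwidth is the figure assumed in the response:

    def total_training_time(steps, step_compute_s, grad_bytes, p, bandwidth_bps=40e6):
        # Lazy synchronization exchanges discriminator gradients once every
        # p batches, so communication is 1/p of the fully synchronized case;
        # p = 1 recovers full synchronization.
        comm_s = (steps / p) * (8 * grad_bytes) / bandwidth_bps
        return steps * step_compute_s + comm_s

    # e.g. 10k steps, 50 ms/step of compute, 2 MB of discriminator gradients, p = 4
    print(total_training_time(10_000, 0.05, 2e6, p=4))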
"{\"title\": \"Related Work and Privacy\", \"comment\": \"Thanks for your comments and for finding our problem setting sensible, our approach as theoretically sound and a valuable contribution to the DA literature. Based on your feedback, we have revamped the related work section to include more literature on privacy and decentralized learning.\\n \\nWith regard to privacy in the distributed setting, we highlight that our approach (algorithm 1) only shares the gradients of the **domain discriminators** between nodes. The raw data from both domains as well as its corresponding gradients from the feature extractor are completely private and are NEVER shared between nodes. \\n\\nThis, in turn, reduces the possibility of reconstructing the training data as was demonstrated in earlier private-ML works such as [1], which used the gradients of the classifier to reconstruct the raw data. We will clarify in the paper that this is the level of privacy provided by MDDA in its current form.\\n\\nTo the best of our knowledge, no prior work has shown that discriminator gradients can leak the raw data training data. However, we accept and acknowledge your point and do not discount the possibility that privacy attacks could be developed even on discriminator gradients in the future. While a detailed analysis of privacy and security attacks and their mitigation is out of scope for this paper, we have added a discussion on the possibility of information leakage with our method and pointed to relevant work in the ML privacy literature on the attack defenses. \\n\\n[1] Aono, Yoshinori, et al. \\\"Privacy-preserving deep learning via additively homomorphic encryption.\\\" IEEE Transactions on Information Forensics and Security 13.5 (2017): 1333-1345.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for finding the problem novel and practical, and for considering our contribution as technically sound.\\n\\n### Cons (1) ### \\n\\nIndeed, as you mentioned, asynchronized accumulating and synchronized updating are variants of existing techniques in distributed training. However, to the best of our knowledge, these techniques have not been applied to adversarial domain adaptation, as such we adopted them to reduce the communication costs associated with sharing discriminator gradients between nodes. We have added this explanation in the paper in the related work section. \\n\\nMoreover, different from other works and specific to adversarial training, we only accumulate the gradients of parts of the model (source and target discriminators), while the remaining model (target encoder) is updated after every batch without any gradient accumulation (please refer to 3.2 for details). Our experimental evaluation shows a novel finding that this partial accumulation of gradients does not hurt the convergence rate as compared to conventional training of DA algorithms. \\n\\nFinally, we would like to highlight that the core contribution and novelty of our work lies in the problem formulation and our proposed collaborator selection algorithm. To the best of our knowledge, both these perspectives are missing from current DA literature and therefore, we believe that our work provides a significant contribution to the ML community. These contributions, in combination with our lazy-synchronized distributed training strategy, make our proposed framework useful for real-world deployments of domain adaptation algorithms.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thanks for the review, and for finding our paper well-motivated, novel for unsupervised domain adaptation, and well-supported by theoretical analysis. Here are our responses with respect to your questions:\\n\\n### Question 1 ###\\n\\nWe would like to humbly submit that our results indeed demonstrate the superiority of our technique. As you have correctly identified in your review, our paper proposes two techniques: a) *collaborator selection*, i.e., which existing domain should be picked as a collaborator for a new target domain and b) *distributed domain adaptation*, i.e., how to conduct adaptation between two distributed computing nodes without sharing domain data. \\n\\nFor the evaluation of the first technique, we show in Table 1 that in majority of the cases, our collaborator-selection algorithm (MDDA) outperforms conventional strategies used in existing DA works such as always selecting the labeled-source domain as collaborator, selecting multiple collaborators etc. \\n\\nFor example, in the case of RMNIST (O1), MDDA outperforms the top baseline by as much as *33%* in target domain accuracy. There are indeed some cases where our performance is lower than the best performing baseline, but not by much (e.g., 0.84% in Mic2Mic (O2)). On average, it can be easily seen that MDDA significantly outperforms other strategies. Further, in Table 2, we extend our analysis to demonstrate that our collaborator selection algorithm can work in conjunction with three different DA techniques (ADDA, GRL, Wasserstein DA), and also outperform the existing baselines in all cases. \\n\\nOnce we demonstrated the efficacy of our collaborator selection (our first algorithmic contribution), we proceeded to evaluate the performance of distributed domain adaptation (our second contribution). \\n\\nIt is important to clarify the goal of this evaluation. It is obvious that if we can perform domain adaptation in a distributed manner without requiring the domain datasets to be shared across nodes, it provides certain privacy benefits over non-distributed training. At the same time, there is a tradeoff between the adaptation accuracy and communication costs in distributed DA \\u2014 if we are willing to incur high communication costs, we can achieve the same accuracy as non-distributed training (by exchanging gradients of each batch). In order to save communication costs, we proposed the Lazy Synchronized DA algorithm which instead exchanges gradients after every p batches. \\n\\nAs such, the primary evaluation goal for our distributed DA algorithm is to reach as close as possible to the accuracy of a conventional non-distributed DA algorithm, with significantly lower communication costs. We have also added this explanation in a concise form in Section 4. \\n\\nAs can be seen in Fig 2 and Table 3, we are indeed able to reach significantly close to the target accuracy obtained with non-distributed DA approaches, even when we synchronize the gradients after every 4 batches. In other words, we incur only 25% of the communication costs of a fully-synchronized algorithm and yet are able to achieve similar levels of target domain accuracy, on top of the privacy benefits that are inherent in a distributed training technique. \\n\\nTherefore, in summary - we show that our collaborator selection algorithm provides significant accuracy gains over existing baselines. 
On the other hand, our Lazy-Synchronized training approach -- in addition to enhancing user privacy and saving significant communication costs -- can achieve similar levels of accuracy as conventional non-distributed DA algorithms. We believe that both these results are significant and novel contributions to the DA literature. \\n\\n### Question 2 ###\\n\\nOne limitation of our approach is the overhead associated with the collaborator selection algorithm. While we showed that selecting an optimal collaborator can significantly boost the target domain accuracy, the selection of such a collaborator requires computing the Wasserstein distance for different domain pairs, which adds a one-time overhead before the adversarial training begins. Given the benefits associated with collaborator selection (as shown in Tables 1 and 2), we believe it is worth paying the small price associated with running the selection algorithm. \\n\\nAnother weakness is that of indirect information leakage in distributed training. In conventional non-distributed DA, users are clearly aware that they are releasing their data over the network, hence giving up their privacy. However, in distributed training, although there is an impression that the user data is private, prior works have shown the possibility of indirect information leakage [1] and privacy attacks. While a detailed analysis of privacy attacks on our method and their mitigation is beyond the scope of this paper, we have added it as an avenue for future research in the paper. \\n\\nWe hope our responses have answered your questions and convinced you of the significance of our experimental findings.\"}",
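The lazy-synchronized update described in this exchange can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `local_disc_grad`, `encoder_step`, and `exchange` are stand-ins for the local discriminator gradient computation, the per-batch target-encoder update, and the inter-node communication primitive, and `p` is the synchronization period from the thread.

```python
import numpy as np

def lazy_sync_training(disc_params, batches, p, local_disc_grad,
                       encoder_step, exchange, lr=1e-3):
    """Accumulate local discriminator gradients for p batches, then perform one
    synchronized update using the gradients averaged across the two nodes."""
    acc = [np.zeros_like(w) for w in disc_params]
    for t, batch in enumerate(batches, start=1):
        grads = local_disc_grad(disc_params, batch)   # local gradients only
        acc = [a + g for a, g in zip(acc, grads)]
        encoder_step(batch)                           # encoder updates every batch
        if t % p == 0:                                # sync-up step
            local_avg = [a / p for a in acc]
            remote_avg = exchange(local_avg)          # one exchange per p batches
            g_sync = [(l + r) / 2 for l, r in zip(local_avg, remote_avg)]
            disc_params = [w - lr * g for w, g in zip(disc_params, g_sync)]
            acc = [np.zeros_like(w) for w in disc_params]
    return disc_params
```

With p = 4, only every fourth batch triggers an exchange, matching the roughly 25% communication cost discussed in the response to Reviewer 2 above.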
"{\"title\": \"Response to Reviewer 1 (part 2)\", \"comment\": \"### Clarity (3) ###\\nAlthough the adversarial training process is provided in Equation 1 and 2 in Section 3.1, we acknowledge that we did not explain it in detail in the text. \\n\\nBroadly, to train the domain discriminator, we follow the formulation given in equation 1. The key difference is that in our setting, since the source (X_s) and target (X_t) datasets reside in different nodes, we compute the gradients of the discriminators (D_s and D_t) separately and then update them based on their average gradients as discussed in Algorithm 1. \\n \\nThereafter, discriminators and the target extractor E_t play the adversarial game depending on the underlying domain adaptation algorithm. As shown in Table 2, we evaluated our approach in conjunction with three domain adaptation algorithms: \\n\\ni) ADDA [1] which implements adversarial training by reversing domain labels, \\nii) GRL [2] which uses a Gradient Reversal Layer between the feature extractor and the discriminator to enable adversarial learning, \\niii) Wasserstein DA [3] which uses the Wasserstein loss for adversarial training. \\n\\nWe have added the details of the adversarial formulation of each algorithm in Section A.3 in the Appendix. \\n\\n\\n### Cons (2) ###\\n\\nWe have polished the paper writing by adding more details and explanations of the design choices and algorithms. We have also revamped the related work section. \\n\\n### Cons (3) ###\\n\\nCertainly, we will be happy to add results for more DA algorithms in the paper. We however note that due to the novelty of our problem setting and solution, there are no baselines which do collaborator selection, against which we can directly compare our results. Therefore, in this setting, we chose baselines as \\u201cRandom selecting collaborator (no strategy)\\u201d, \\u201cAlways selecting the first source domain (majority of the prior DA algorithms)\\u201d and \\u201cSelecting multiple collaborators (a recent work [4])\\u201d. Besides, we have shown the efficacy of our technique with 3 DA [1,2,3] techniques in Table 2. \\n\\nWe can extend our analysis (in Table 2) to include an additional (latest) DA algorithm. Would that be sufficient from your perspective? \\n\\n[1] Tzeng et al. Adversarial discriminative domain adaptation. CVPR 2017.\\n[2] Ganin et al. Domain-adversarial training of neural networks.JMLR, 2016.\\n[3] Shen J. et al. Wasserstein distance guided representation learning for domain adaptation. AAAI 2018\\n[4] Zhao et al. Adversarial Multi-Source Domain Adaptation. NeurIPS 2018.\"}",
"{\"title\": \"Response to Review 1 (part 1)\", \"comment\": \"Thanks for providing a very nice summary of our work and for your insightful comments. We are glad that you found our paper interesting and considered our experimental analysis as extensive and detailed. \\n \\n###Novelty###:\\n \\nWe agree with your point that our paper is not proposing a new domain adaptation algorithm to boost the accuracy of the model in the target domain. Instead, our contribution operates one layer above the adaptation algorithm and can be utilized with many existing domain adaptation techniques as we demonstrate in Table 2. \\n\\nWe would like to argue that our proposed contribution is novel \\u2013 both from a problem formulation and a solution perspective. To the best of our knowledge, no prior domain adaptation paper has looked at the problems of domain selection with distributed domain datasets, which, as we highlight in the paper, are of practical significance but have been overlooked. Our proposed solution consists of the Wasserstein-distance collaborator selection algorithm to find the best possible collaborator for adaptation, and the Lazy-Synchronized DA algorithm to reduce the communication between nodes to merely the gradients of the discriminator (summarized in Algorithm 3). The former algorithm is particularly noteworthy because the question of \\u2018which domain should I select to adapt from\\u2019 is no less important than \\u2018how to adapt between two selected domains (which has been mainly studied before)\\u2019 in multiple domains adaptation. Overall, we believe our submission provides a brand-new perspective and novel contribution to the domain adaptation literature.\\n \\n#### Clarity (1) ####\\n\\nPlease correct us if we misunderstood your question -- our interpretation is that you are asking how to train D_s and D_t in a distributed way, when we are not sharing the features extracted from source and target domains across nodes. Indeed, this is a valid question and touches the core of our contribution on distributed training of the discriminators. \\n\\nTo answer this question, let us first have a brief look of how the Discriminator D gets trained in the non-distributed case (Equation 1), where the data from source domain and target domain are fed into their extractors respectively, then the input of D is the concatenation of the output of source extractor (E_s) and target extractor (E_t). \\n\\nBy contrast, as described in Section 3.2 and Algorithm 1, in the distributed case, D is split across nodes as D_s and D_t, and the outputs of E_s and E_t are fed to the respective discriminators separately. As you rightly questioned, it is not possible to update a discriminator without seeing the features from the other domain -- as such, our idea is to \\n\\ni) compute local gradients for D_s and D_t on their respective nodes (Algo 1, line 4), \\nii) exchange and average the gradients during the sync-up step (Algo 1, line 10), \\niii) update D_s and D_t with the averaged gradients (Algo 1, line 11). \\n\\nIn effect, this guarantees the following: \\n\\na) We are able to exchange knowledge between the two domains through sharing gradients of D_s and D_t, without requiring the raw data or extracted features to be shared across nodes. \\n\\nb) Weights of D_s and D_t always remain identical as they get updated with the same averaged gradients. As such, both the discriminators are always in sync with each other. 
This means that we are able to keep the domain datasets private, and yet perform the adversarial training as given in Equations 1 and 2.\\n\\nWe have added this clarification to the paper in Section 3.2 and also added a figure in the Appendix as a visual aid to explain our distributed training mechanism. \\n\\n#### Clarity (2) ####\\n\\nWe follow the convention used in past DA papers (e.g. [1]) wherein the target classifier C_t is initialized with C_s (C_t <-- C_s) and is not updated in the training process. The intuition is that if the feature space of the target domain can be successfully aligned with the source domain through adversarial training, i.e., the outputs of E_s and E_t become close enough, then C_s can be used directly in the target domain without any adaptation. Recall that the classifier takes the outputs of the feature extractor as inputs; therefore, if the feature extractor outputs become similar across the two domains, the same classifier can be shared by both domains.\\n\\nAs such, during adversarial training, we only adapt the feature extractor E_t, with the goal of aligning the target feature space with the source. In practice, this also means that the classifier C_s or C_t is very simple (e.g., it could be just a softmax layer or one fully-connected layer) and E_t does the bulk of the work for the classification task. \\n\\nWe have added this clarification to the paper in Section 3.2.\"}",
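Invariant b) above is easy to verify with a toy two-node simulation. The snippet below is purely illustrative (random vectors stand in for the discriminator weights and local gradients): because both nodes apply the same averaged gradient, D_s and D_t never diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
d_s = rng.normal(size=4)       # D_s weights on the source node
d_t = d_s.copy()               # D_t initialized identically on the target node

for step in range(5):
    g_s = rng.normal(size=4)   # stand-in for the local gradient of D_s
    g_t = rng.normal(size=4)   # stand-in for the local gradient of D_t
    g_avg = 0.5 * (g_s + g_t)  # exchanged and averaged (Algo 1, lines 10-11)
    d_s = d_s - 0.1 * g_avg    # both discriminators apply the same update
    d_t = d_t - 0.1 * g_avg

assert np.allclose(d_s, d_t)   # the two discriminators stay in sync
```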
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"I read the authors response. I am satisfied with the explanations on the privacy party. However, decentralized training part is still unsatisfactory since the empirical evaluation is not really decentralized. Back of the envelope calculations are at best correlated to the actual times spent by each node. Hence, the numbers in Table 3 are not physical numbers, rather result of an idealized network. Moreover, the x-axis of Figure 2 being training step is still not acceptable. Decentralized and centralized methods should be compared in terms of time which is the only fair metric. I stick to my original decision.\\n\\n-----\\n\\nThe manuscript is proposing a method for domain adaptation in a private and distributed setting where there are multiple target domains and they are added in a sequential manner. The proposed method considers only the domain adaptation methods in which the source model training and the target model training are done separately. In this setting, existing adapted models can be used as a source domain since a trained model suffices for adaptation. One major contribution of the paper is proposing a straightforward but successful method to choose which domain to adapt from. The main algorithmic tool is estimating Wasserstein distance and choosing the closest domain. The second contribution is distributed training setting for privacy and decentralization.\\n\\nChoosing which model to adapt from is an interesting contribution. The proposed setting is definitely sensible and the proposed method is theoretically sound. Hence, I consider this as a valuable contribution to the domain adaptation literature. Moreover, results suggest that it also results in significant performance improvement.\\n\\nPrivacy and decentralized learning part has major issues. First of all, the privacy and learning private models is a sub-field of machine learning with a large literature. Authors do not discuss any of these existing work. Second of all, authors do not specify the definition of privacy they are using. Only guarantee the algorithm provides is not passing data around. However, this is clearly not enough. Passing gradients might result in sharing sensitive data. The actual data can be reconstructed (upto some accuracy) using the gradients passed between nodes. Therefore, either a justification or a privacy guarantee result is needed. Both of these are major issues which need to be fixed.\\n\\nDecentralized learning is also an important problem which have been studied significantly. Related work section is missing majority of recent and existing work on distributed learning and federated learning. Moreover, empirical study is very counter intuitive. Results are given in terms of accuracy vs number of training steps. The important metrics are amount of massages passed and the total time of the distributed training. Many distributed algorithms trade off having less accurate gradients (consequently having higher number of gradient updates) with less message passing. Hence, I am not sure how to understand the distributed domain adaptation experiments. 
I am not even sure what the time in Table 3 actually means, since the method is clearly not actually run in a distributed setting.\\n\\nIn summary, the submission is addressing an important problem. Moreover, the contribution on collaborator selection is interesting and seems to be working well. However, the private and decentralized learning parts are rather incomplete in terms of related work and experiments. Moreover, I am also not sure whether we can call this method private or not.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper focuses on the problem of unsupervised domain adaptation in practical machine learning systems. To address the problems in current unsupervised domain adaptation methods, the authors propose to a novel multi-step framework which works in a decentralized and distributed manner. Experimental results show the effectiveness of the proposed approach.\\n\\nThis paper is well-motivated and the proposed method is novel for unsupervised domain adaptation. The paper is well-supported by theoretical analysis, however, the improvements are not that significant on some experimental results. For the above reasons, I tend to accept this paper but wouldn't mind rejecting it.\", \"questions\": \"1. The experiments do not really show the superiority of the proposed method compared to the common centralized approaches as they have similar performances on both collaborator selection and distributed domain adaptation. Can you convince the readers more with some other experiments?\\n2. What is the weakness of such decentralization models?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"###Summary###\\n\\nThis paper tackles unsupervised domain adaptation in a decentralized setting. The high-level observation is that the conventional unsupervised domain adaptation has two bottlenecks, namely excessive centralization and poor support for distributed domain datasets. The paper proposes Multi-Step Decentralized Domain Adaptation (MDDA) to transfer the knowledge learned from the source domain to the target domain without sharing the data.\", \"the_paper_also_explores_explore_a_proposition\": \"in addition to adapting from the labeled source, can uDA leverage the knowledge from other target domains, which themselves may have undergone domain adaptation in the past.\\n\\nThe proposed MMDA method contains a feature extractor (E), a domain discriminator (D) and task classifier (C) for each domain. The target domain components are initialized with the respective source components. The source domain discriminator D_s target domain discriminator D_t are synchronized by exchanging and averaging the gradients. The paper also proposes Lazy Synchronization to reduce the communication cost of the algorithm.\\n\\nThe paper also proposes Wasserstein distance guided collaborator selection schema to perform the domain adaptation task.\", \"the_paper_performs_experiments_on_five_image_and_audio_datasets\": \"Rotated MNIST, Digits, and Office-Caltech, DomainNet and Mic2Mic.\\n\\nThe baselines used in this paper include \\\"Labeled Source\\\", \\\"Random Collaborator\\\", and \\\"Multi-Collaborator\\\". The experimental results demonstrate that the proposed method can outperform the baselines on some of the experimental settings. The paper also provides a detailed analysis of the model and experimental results. \\n\\n### Novelty ###\\n\\nThis paper does not propose a new domain adaptation algorithm. However, the paper introduces some interesting tricks to solve the MMDA task such as the lazy synchronization between the source domain discriminator and the target domain discriminator. \\n\\n###Clarity###\", \"several_critical_explanations_are_missing_from_the_paper\": \"1) When training the source domain discriminator D_s and target domain discriminator D_t, if the features between the source domain and target domain cannot be shared with each other, how to train the D_s and D_t. For example, the D_s cannot get access to the features from the target domain, how to train D_s? \\n2) How is the target classifier C_t updated when there are no labels for the target domain?\\n3) As far as I understand, the domain discriminator is this paper is trained adversarially. The detailed adversarial training step is unclear. \\n\\n###Pros###\\n\\n1) The paper proposes an interesting transfer learning schema where the data between the source and target domain can not be shared with each other to protect the data-privacy.\\n\\n2) The paper provides extensive experiments on multiple standard domain adaptation benchmarks, especially the most recent dataset such as the DomainNet. \\n\\n3) The paper provides detail empirical analysis to demonstrate the effectiveness of the proposed methods. \\n\\n###Cons###\\n\\n1) The most critical issue of this paper is that some explanations are missing, e.g. how are D_s, D_t, C_t trained? 
Refer to #Clarity.\\n\\n2) The presentation and writing of this paper need polish. The authors should survey more related work to better motivate the paper. One critical relevant reference for this paper is:\\n\\\"Secure Federated Transfer Learning\\\", Yang Liu et al., https://arxiv.org/pdf/1812.03337.pdf\\n\\n3) The baselines used in this paper are also trivial. It is desirable to compare the proposed method with state-of-the-art domain adaptation methods.\\n\\nBased on the summary, cons, and pros, the rating I am giving is \\\"reject\\\". I would like to discuss the final rating with the other reviewers and ACs.\\nTo improve the rating, the authors should address the questions I raised under #Clarity.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper focuses on the problem of domain adaptation among multiple domains when some domains are not available on the same machine. The paper builds a decentralized algorithm based on previous domain adaptation methods.\", \"pros\": \"1. The problem is novel and practical. Previous domain adaptation assumes that source and target domains are available but it can happen when the source and target domains have connection issues.\\n2. The method exploits asynchronizing accumulating and synchronizing update, which reduces the cost of communication between domains.\\n3. The paper proposes to use Wasserstein distance to select the optimal domain as the source domain for the target domain. \\n4. The experimental results show that the proposed method outperforms baselines.\", \"cons\": \"1. The asynchronizing accumulating and synchronizing update is not novel. It has been used in other communities such as reinforcement learning.\\n\\nOverall, the paper is good and it is technically sound. The contribution is not significant to the community but providing a new perspective for domain adaptation. I vote for weak accept.\\n\\nThank Reviewer1 for reminding. I think the paper still has some novelty and the comments address my concerns. I do not change my score. Also, I'm not unhappy if the paper is rejected. It is more like a borderline paper.\"}"
]
} |
Hyx0slrFvH | Mixed Precision DNNs: All you need is a good parametrization | [
"Stefan Uhlich",
"Lukas Mauch",
"Fabien Cardinaux",
"Kazuki Yoshiyama",
"Javier Alonso Garcia",
"Stephen Tiedemann",
"Thomas Kemp",
"Akira Nakamura"
] | Efficient deep neural network (DNN) inference on mobile or embedded devices typically involves quantization of the network parameters and activations. In particular, mixed precision networks achieve better performance than networks with homogeneous bitwidth for the same size constraint. Since choosing the optimal bitwidths is not straightforward, training methods that can learn them are desirable. Differentiable quantization with straight-through gradients allows the quantizer's parameters to be learned using gradient methods. We show that a suitable parametrization of the quantizer is the key to achieving stable training and good final performance. Specifically, we propose to parametrize the quantizer with the step size and dynamic range. The bitwidth can then be inferred from them. Other parametrizations, which explicitly use the bitwidth, consistently perform worse. We confirm our findings with experiments on CIFAR-10 and ImageNet and we obtain mixed precision DNNs with learned quantization parameters, achieving state-of-the-art performance. | [
"Deep Neural Network Compression",
"Quantization",
"Straight through gradients"
] | Accept (Poster) | https://openreview.net/pdf?id=Hyx0slrFvH | https://openreview.net/forum?id=Hyx0slrFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"J7Zupotlc--",
"CrvAxAeJz",
"Hkg9XIQnsB",
"r1ePi95Bor",
"Syee59qBjH",
"HJebt9uEoS",
"ByxD5fONsr",
"BJxSefdmoH",
"BJeWPJ87oB",
"SkltZl2zsH",
"rJxa1xZHcr",
"rJehYBRCFH",
"ryl5_mkRKr",
"Skx62lU2Fr",
"SklDkN4otS"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1589994586909,
1576798751044,
1573824033791,
1573395103336,
1573395079688,
1573321336691,
1573319310612,
1573253612581,
1573244761257,
1573203969242,
1572306917015,
1571902851906,
1571840881997,
1571737780901,
1571664862611
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"~Fabian_Timm1"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2519/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2519/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2519/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2519/Authors"
],
[
"~Alessandro_Capotondi1"
]
],
"structured_content_str": [
"{\"title\": \"Release of NNabla source code\", \"comment\": \"Please find our NNabla source code here: https://github.com/sony/ai-research-code/tree/master/mixed-precision-dnns\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The reviewers uniformly vote to accept this paper. Please take comments into account when revising for the camera ready. I was also very impressed by the authors' responsiveness to reviewer comments, putting in additional work after submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Manuscript update\", \"comment\": \"We would like to thank all reviewers and those who participated in the discussion for their thoughtful and helpful comments which helped us to improve our manuscript. In particular, we added the following revisions to the paper:\\n\\n1) We added a comment why we start from $b=4$bit in Section 2.3.\\n\\n2) We added a more detailed description of the experimental setup for the CIFAR-10 experiments in section 2.3 (including training time).\\n\\n3) We added a paragraph to Section 3, discussing how to choose the penalty weights $\\\\lambda_j$ when training networks with memory constraints.\\n\\n4) We added a more detailed description of the experimental setup for the ImageNet experiments in Section 4.\\n\\n5) We added that ADAM is used as an optimizer for the toy example in Section 2.3.\\n\\n6) In Section 2, we added a discussion of the scaling issue. Especially, we discuss why this scaling issue is a problem even if we use adaptive gradient methods like ADAM.\\n\\n7) We added a table to the appendix which compares the different quantization parametrizations when training a quantized ResNet-20 on CIFAR-10, using ADAM to give more empirical evidence. Please see Section A.4.\\n\\n8) We replaced $d$ with $\\\\frac{q_{max}}{2^{b-1}-1}$ in the discussion of the gradient norms in Section 2.1, for better readability.\"}",
"{\"title\": \"authors response (part 2 of 2)\", \"comment\": [\"Table 1 gives the accuracy of the networks if we start from random initialization, using no hardware constraints. We have chosen this experiment, because only the parametrization of the quantizer has an influence on the results and not the formulation of the hardware constraint. It shows, that quantized DNNs can be trained from scratch, using STE gradients and that the parametrization U3 and P3 are optimal. In comparison, table 3 gives the accuracy when training with memory constraints, initializing the network weights from a pre-trained float32 network. We have chosen this experiment to show, that we can learn quantized DNNs which satisfy a given memory constraint, using the penalty method. Furthermore, it shows, that heterogenously quantized DNNs with a learned bitwidth have a performance superior to quantized DNNs with a homogenous and fixed bitwidth. So, both experiments use completely different setups and, hence, do not show the same results.\", \"Of course, we can include the training time in the final version of our paper. Thank you for pointing it out.\", \"We started from $b=4$ bit in all of our experiments, because in practice, for $b>4$ bit it does not matter that much how a DNN is quantized. Even very simple offline algorithms (min/max quantization) will result in quantized DNNs with good performance.\", \"Of course, we will also try to resolve your minor aspects, addressing the formatting of the paper, when preparing our final version of the paper. However, due to the page limit, it might be hard to increase the size of all of the figures and tables.\"]}",
"{\"title\": \"authors response (part 1 of 2)\", \"comment\": \"We are glad that our paper is of interest to you, and that you enjoyed reading it. Thank you for your very detailed and well structured list of comments.\\n\\nWe noticed, that most of your comments concern the comparison of our approach to the related literature. As you said, many papers in the field of quantization have been published recently. Because of the length constraint, it is impossible to provide a full literature review of the DNN quantization literature in this paper. We therefore limited our discussion of the related work to: 1) Seminal papers such as Deep Compression (Han et al., 2015), 2) Papers which are thematically close to ours, i.e., which discuss how to learn the quantization parameters; eg. Wang et al., 2018 and TQT (Jain et al., 2019), 3) Papers discussing other quantization methods that deliver state-of-the-art results for the same networks and tasks as we do. Thank you for pointing at your publication to ESANN 2019. We did not know about this paper and we will, of course, happily read it and add it as a reference if it is relevant to our paper.\", \"concerning_your_list_of_questions\": [\"The TQT method learns $q_{max}$, while the bitwidth $b$ is set manually. Therefore, we see it as a first step toward our approach which can learn all quantization parameters. The TQT results in Table 3 are obtained from our own implementation. We treated TQT as a special case, where we calculate the gradient with respect to $q_{max}$, but not with respect to $b$. This makes it directly comparable to our results.\", \"We have chosen ResNet18 and MobileNetV2 because they are classical baseline methods in the quantization literature. We think that it makes most sense to compare the the effect of quantization, using networks which already have a small memory footprint (like ResNet18 or MobileNetV2). Compared to those models, the VGG type networks are heavily over parametrized and have a low efficiency, meaning that they obtain worse accuracies while having a larger computational complexity. We strongly believe that it would be very beneficial if the community working on network quantization would use the same set of baseline models as its foundation, performing the comparisons in the same way. Unfortunatly this is not the case as of today each paper uses different tricks (e.g. quantizing only a subset of layers, etc.), what makes the comparison of the methods rather difficult. We would support any action toward creating a common benchmark for network quantization.\", \"You are correct, that in our experiments on the ImageNet dataset, the best quantized networks reached a slightly better accuracy as the full precision baseline model. There are two reasons why this is possible: 1) We used the baseline model as an initialization when fine tuning with memory constraints, meaning that the quantized network actually has been trained considerably longer. 2) The memory constraints might act as a regularizer, which favors simple models with good generalization properties.\", \"All baseline results are obtained, using our own implementation and trained from scratch. Our setup is as follows: SGD with momentum 0.9, learning rate 0.1; Fixed learning rate schedule with 3 drops by factor 0.1; input standardization; random flip with $p=0.5$; random crop.\", \"You pointed out correctly, that in Eq. 1 we have 3 parameters, while in Fig. 1 $\\\\theta$ only takes two values. Actually, this is related to the core idea of this paper. 
The reason why we can choose between three different parametrizations is that 2 out of the 3 parameters in $\\theta$ are always dependent, meaning that we have ${3 \\choose 2} = 3$ parametrizations with 2 independent parameters.\"]}",
"{\"title\": \"answer to reviewer 3\", \"comment\": \"Thank you for this interesting question about the differences between the uniform and the power-of-two quantization experiments.\\n\\nWhen comparing the gradients of the uniform quantizers in Eq. (3a-3c) to gradients of the power-of-two quantizers in Eq. (22-25), we can notice that the differences between the gradient scales is much larger for the power-of-two quantization. This means, that the scaling problem is more severe. You are correct, that this can lead to more ill-conditioned Hessians and, hence, could explain the bigger performance gaps between our three parametrizations in case of the power-of-two quantization. We think we can confirm this with some additional simulations.\\n\\nYour last question about the formulation of the constraint optimization problem is heavily related to the comments of reviewer 2. Therefore, it might be best to read our response to reviewer 2 at this point.\"}",
"{\"title\": \"answer to reviewer 2\", \"comment\": \"Thank you for your interesting comments concerning the formulation of the constraint\\noptimization problem. We are delighted to see that you found our paper interesting and well written. We agree with you, that our approach to include the constraints in the loss function is straight-forward. However, this is not the main contribution of the paper and we would welcome any further work looking at better ways to do this.\\n\\nYou are absolutely right, that optimizing our proposed cost function does not necessarily guarantee to actually achieve the desired constraint. Note that using a larger multiplier can help with this problem in practice, however the constraint term should not dominate the cost function. As described in section 4, paragraph 2, we observed this issue when migrating from the CIFAR10 to the ImageNet experiments. As mentioned, it is important to choose the multipliers such that both the cost term (categorical cross-entropy in our case) and the penalty terms have comparable magnitudes after random initialization of the network. However, in practice, we observed that the performances are not sensitive to the choice of $\\\\lambda_j$ as long as it is roughly scaled with the network size.\\n\\nFor all experiments, we also back-propagate the error through g.\"}",
"{\"title\": \"Results with ADAM optimizer\", \"comment\": \"Reviewer#1 commented that solvers such as ADAM which scale the gradient for different parameters could eleviate the gradient scaling problem for parametrizations U1 and U2. Therefore, we have run additional experiments on CIFAR10 to collect more empirical evidence. We ran the training on CIFAR10 with ADAM optimizer (initial learning rate = 0.001) . Otherwise, we use the same experimental setup as we used for the SGD results presented in the paper. We show the results for two cases: (1) Parameters quantization only and (2) Parameters and activation quantization.\", \"for_parameters_quantization_only\": \"| SGD Momentum| ADAM |\\n==========================\\nU1 | 11.74% | 7.61% |\\nU2 | 7.44% | 7.85% |\\nU3 | 7.32% | 7.36% |\\n\\n\\nFor Parameters and Activations Quantization\\n\\n | SGD Momentum| ADAM |\\n===========================\\nU1 | 15.35% | 7.54% |\\nU2 | 7.74% | 7.79% |\\nU3 | 7.40% | 7.40% |\\n\\nThese results are interesting because, to some extend, ADAM helps to cope with the 'bad' parametrizations (in particular for the parametrization U1). Again, the parametrization U3 consistantly outperforms U1 and U2 for both weight and weight/activation quantization. \\n\\nWe believe that this experiment further strengthens our main claim that parametrization U3 is superior to the other parametrizations. The main message is, that even if you use ADAM or momentum, the parametrization still matters a lot. We would like to thanks again the reviewer for this particularly interesting comment.\"}",
"{\"title\": \"Results require some clarification\", \"comment\": \"Thank you very much for your article. We are also working in the field of network quantisation and we really enjoyed reading.\\nThe article is well structured and easy to follow. However, we kindly ask the authors clarify our major aspects and we really recommend to take care of our minor aspects.\\n\\n-----------------------------------------------------------------------\", \"major_aspects\": [\"Table 3: Where do you get the results of approach TQT for CIFAR-10? In the original TQT paper these values have not been mentioned. Did you recomputed them yourself? If so, which parameters did you choose?\", \"Table 3: Many other state-of-the-art approaches [1,2,3] also provide results for CIFAR-10, e.g. using VGG7 or DenseNet. So, why do you only compare with TQT?\", \"Table 4: Please, explain why the baseline error rate of 29.82% is larger than your best error rate of 29.41%. Same holds for ResNet-18, here your approach also improves the error rate of the pure floating point model (29.34% vs. 29.72%).\", \"Table 3+4: How do you obtain the baseline performance? Training from scratch or from other papers? If you did training from scratch, please provide the paramters for better comparison and reproducability.\", \"Table 4: The authors of the TQT paper evaluated their approach for 12 different networks on ImageNet. For comparison you used two of their performance values (MobileNet V2 and ResNet-18), why have you chosen those two networks?\", \"In equation 1 $\\\\theta$ is defined by 3 values, in Fig. 1 $\\\\theta$ takes only two values, why?\", \"Table 1 and 3: you provide results for ResNet-20 on CIFAR-10 in both tables, but where is difference? Should the error rates not be identical?\", \"Please, update your references. Especially over the past three years many papers in the field of quantisation have been published, e.g. [1,2,3]\", \"You intensively compare the error rates and the network size. Of course, the main focus is on quantisation and fast\", \"inference on target hardware. But please also mention the training time in terms of epochs.\", \"[1] Xiao Dong Chen, Xiaolin Hu, Hucheng Zhou, and Ningyi Xu. Fxpnet: Training a deep convolutionalneural network in fixed-point representation. IJCNN 2017.\", \"[2] Fengfu Li and Bin Liu. Ternary weight networks.CoRR, abs/1605.04711, 2016.\", \"[3] Lukas Enderich, Fabian Timm, Lars Rosenbaum, and Wolfram Burgard. Learning multimodalfixed-point weights using gradient descent. ESANN 2019.\", \"-----------------------------------------------------------------------\", \"Minor aspects (that would improve readability):\", \"Separate formulas and many mathematical definitions in the corresponding latex environment (of course this needs space). But using them within the text makes reading very hard, e.g. beginning of section 2.3\", \"Increase figures in size, e.g. legend of Fig. 3 is almost as large as the plots itself, same holds for Fig. 4\", \"Increase font size in Table 1, 2, and 3 - it is very hard to read\", \"In Sec. 2.3 2): you define $d_l$ with $W_l$, we guess $W_l$ is the number of weights in layer $l$; or is it the total number of weights?\", \"In Sec. 2.3 2): why do start with b=4bit? What happens if you use larger values for b?\", \"We are really looking forward to your comments.\"]}",
"{\"title\": \"Response to Review#1\", \"comment\": \"Thank you very much for your time and comments \\u2013 please find below our point-to-point reply to each of them.\\n\\n\\\"[...] in section 2.1. It is not clear how the range for U2 is obtained.\\\"\\n\\nTo obtain the range of U2, we can have a look at eq. (3b). First, the derivative with respect to $q_{max}$ is bounded as the magnitude of the gradient is always smaller than one. For the derivative with respect to $b$, we know that $Q(x;\\\\theta) \\u2013 x$ can be bounded by $d/2$. Furthermore, the ratio $2^{b-1} / (2^{b-1} \\u2013 1)$ will be largest for $b = 2$ and, hence, the derivative is in $[-d*log(2), d*log(2)]$ where $d$ for U2 depends on $b$ and $q_{max}$ via $d = q_{max} / (2^{b-1} \\u2013 1)$. We noticed that it is confusing that we use `$d$ in Eq. (3b) and will replace it by $q_{max} / (2^{b-1} \\u2013 1)$ in the final version of the paper.\\n\\n\\\"Given d an integer, the gradient wrt b is also bounded. In this case, why is case U3 better than U2? In Table 1, it is shown that U2 also has good performance for uniform quantization.\\\"\\n\\nPlease note that the step size $d$ does not need to be an integer but $d \\\\in R^+$ (and it should be a pow2 for an efficient implementation). As we have shown above, the gradient can grow arbitrarily large as $d = d(b, q_{max})$ is not bounded.\\nHowever, since a large $d$ is mostly not desired, you are right that exploding gradients are very unlikely in case of parametrization U2. U3 is superior to U2 mainly because the gradients with respect to $d$ and $q_{max}$ are decoupled for parametrization U3. This means that the derivative wrt $q_{max}$ is zero if the derivative wrt $d$ is non-zero and vice versa (as you can observe in eq. (3c)). Such a decoupling of the gradients is desirable for gradient-based optimization as we always optimize along conjugate directions, which is very effective.\\n\\n\\\"Indeed, the gradient of any of the three parameters [...] can be derived by using chain rule given the gradients of the other two. It is not clear to me why some of them can be unbounded while others do not.\\\"\\n\\nAs you you pointed out correctly, the derivatives of the three different parametrizations are related by the chain rule. The fact that some parametrizations have unbounded gradients is caused by the non-linear relationship of the parameters, i.e., $q_{max} = (2^{b-1}-1)d$. When converting the gradient of one parametrization to another, we will have to multiply with derivatives of this function. Note, that for example $\\\\partial/\\\\partial q_{max} b(d, q_{max}) = log(2) / ( q_{max} + d )$ might grow arbitrarily large for small $q_{max}$ and $d$.\\n\\n\\\"Adaptive learning rate methods like Adam should be able to help deal with the different scale of the gradients for three parameters. Can the authors compare the three parameterizations using Adam and see if similar empirical results can still be observed.\\\"\\n\\nWe also thought of this in our experiments, but did not include the results because of the page limit. In fact, our toy example in the appendix (e.g. page 12, Fig. 9) was run with the Adam optimizer. The performance difference is still considerable although an optimizer with adaptive learning rate was used. We will add this missing information to the appendix.\\nThe cause is mainly, that Adam needs the statistics of the gradients change smoothly over the parameter space to work well. 
If this is not the case, the estimated first and second moments of the gradients will be too noisy. \\nPlease note that it is also not a simple scaling issue of the gradients, which could be solved by estimating the gradient magnitude and normalizing it out. The gradient magnitude depends on the position in the quantization and weight parameter space, meaning that the gradient magnitude can explode for some parameter values (e.g. large $b$ for U1). \\n\\n\\\"At the end of Section 2.1, the authors said that \\\"similar considerations can be made for power-of-two quantization\\\". [...] Can the authors elaborate more on the difference?\\\"\\n\\nWe did not intend to say that both uniform and pow2 quantizations have equal performance if we choose the right parametrizations. What we meant with \\u201csimilar considerations\\u201d is that for pow2 quantization, there are also three different parametrizations from which we can choose, and that the parametrization that does not involve the bitwidth directly is better suited for optimization. Please note that the pow2 quantization scheme is much more restrictive as it constrains the weights to be powers of two. In general, this results in networks with worse performance compared to uniform quantization.\\n\\n\\\"Is the proposed differential quantization method used for both weight and activation? If so, how are the gradients w.r.t. the weights propagated through the quantized activations?\\\"\\n\\nOur method can be used to quantize both weights and activations. As we stated in Sec. 3 and 4, we used both activation and weight quantization in all of our experiments.\\n\\nWe hope that we have addressed all your concerns; we welcome any further discussion.\"}",
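For readers following this parametrization discussion, below is a minimal numpy sketch of a uniform quantizer in the (d, q_max) parametrization with decoupled straight-through gradients. This is a hedged reading of the decoupling described in the rebuttal (gradient flows to d inside the range and to q_max in the clipped region); the exact clipping convention and the gradient expressions of the paper's Eq. (3c) may differ in detail.

```python
import numpy as np

def quantize_u3(x, d, qmax):
    """Uniform quantizer parametrized by step size d and dynamic range qmax."""
    return np.clip(np.round(x / d) * d, -qmax, qmax)

def ste_grads_u3(x, d, qmax):
    """Straight-through gradients, decoupled as discussed above: in the clipped
    region only qmax receives a gradient; inside the range only d does."""
    clipped = np.abs(x) >= qmax
    g_d = np.where(clipped, 0.0, np.round(x / d) - x / d)  # (Q(x) - x) / d
    g_qmax = np.where(clipped, np.sign(x), 0.0)
    return g_d, g_qmax
```

The decoupling argument is visible directly in the two `np.where` masks: the two gradients are never non-zero for the same input.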
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose learning a quantizer for mixed precision DNNs. They do so using a suitable parameterization on the quantizer's step size and dynamic range using gradient descent, and where the quantizer's bitwidth are inferred from the former two rather than also learned jointly.\\n\\nAs a non-expert in the field, I found the paper well-written and interesting in their analysis of their proposed parameterization. They explain well how quantizers work, and the intuition and relationships of the parameters behind two popular types of quantizers: uniform and power-of-two. Equation (3) is especially explicit in understanding how the choice of 2 of the 3 parameters makes an impact on the choice of gradients. My understanding is that this is the core contribution.\\n\\nNovelty-wise, I don't have enough background to tell if this is much of a leap from related work that has already proposed learning certain parameters of quantizers (but different parameters, or not the exact 2 proposed by the authors). I do like the discussion of related quantizer literature noted in the introduction.\\n\\nI don't know if there is already previous work in the paper's follow-up section of learning quantized DNN under a constraint involving maximum total memory, total activation memory, and maximum activation memory. The solution of a Laplace multiplier seems fairly naive and hard to work in practice as it is not a hard constraint. As a naive question, how does the scale of these values compare to the original loss function? For example, if we think of the original loss function as a negative log-likelihood which computes bits/example, does it make sense to add a constraint penalty in kB as in the experiments, which is a completely different unit scale? Do you also backpropagate through the constraint function g?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The work studies differentiable quantization of deep neural networks with straight-through gradient (Bengio et. al., 2013). The authors find that a proper parametrization of the quantizer is critical to stable training and good quantization performance and demonstrated their findings to obtain mixed precision DNNs on two datasets, i.e., CIFAR-10 and Imagenet.\\n\\nThe paper is clearly written and easy to follow. The idea proposed is fairly straight-forward. Although the argument the authors used to support the finding is not very rigorous, the finding itself is still worth noting. \\n\\nOne of the arguments that the authors used to support the specific form of parametrization is that it leads to diagonal Hessian. From optimization perspective, what matters is the condition number, i.e., max/min of the eigenvalues of the Hessian. Could this explain the small difference between the three different parametrization forms with uniform quantization and the big difference for power-of-two quantization? \\n\\nThe penalty method used to address the memory constraints will not necessarily lead to solutions that satisfy the constraints. The authors noted that the algorithm is not sensitive to the choice of the penalty parameters. Have the authors tried to tackle problems of hard memory constraints?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers the problem of training mixed-precision models.\\nSince quantization involves non-differentiable operations, this paper discusses how to use the straight-through estimator to estimate the gradients, and how different parameterizations of the quantized DNN affect the optimization process. The authors conclude that using the parameterization wrt the stepsize d and quantization range q_max has the best performance.\\n\\nIn the discussion for the three parameterization choices in section 2.1. \\nIt is not clear how the range for U2 is obtained. Given d an integer, the gradient wrt b is also bounded. In this case, why is case U3 better than U2? In Table 1, it is shown that U2 also has good performance for uniform quantization.\\n\\nIndeed, the gradient of any of the three parameters (stepsize, bitwidth and quantization range) can be derived by using chain rule given the gradients of the other two. It is not clear to me why some of them can be unbounded while others do not. In addition, It is not clear to me why having different gradient scales is a big problem. Adaptive learning rate methods like Adam should be able to help deal with the different scale of the gradients for three parameters. Can the authors compare the three parameterizations using Adam and see if similar empirical results can still be observed.\\n\\nAt the end of Section 2.1, the authors said that \\\"similar considerations can be made for power-of-two quantization\\\". However, from table 1, these three parameterizations indeed have quite different performances for uniform and power-of-two quantization. E.g., for uniform quantization, U2 and U3 perform significantly better than U1, while for power-of-two quantization, U1 and U3 perform significantly better than U2. Can the authors elaborate more on the difference?\\n\\nIs the proposed differential quantization method used for both weight and activation? If so, how are the gradients w.r.t. the weights propagated through the quantized activations?\\n\\n---------- post-rebuttal comment -------------\\nI thank the authors for their detailed response. It has solved most of my concerns and I accordingly raised my score.\\n---------------------------------------------------------\"}",
"{\"title\": \"Mixed Precision: Rule based vs Learned bitwidth\", \"comment\": \"Thank you for your interest in our work and for pointing at your paper, which also addresses the problem of finding the optimal bitwidth of per-tensor quantized DNNs. Your results for MobileNetV1 are interesting, as the networks reach good accuracy with a quite small memory footprint.\\n\\nWhile reading your paper and comparing it with our work, we see major methodological differences. You proposed an iterative offline algorithm to obtain the optimal bitwidth. To our understanding, the algorithm is based on a heuristic which reduces the bitwidth of those weight tensors that dominate the memory footprint of the network until it meets the memory constraint. Hence, the bitwidth selection can only depend on the network structure, but not on the data. In contrast, our paper proposes a training method for quantized DNNs, where the bitwidth is learned in parallel to the network parameters. In other words our method optimizes jointly for accuracy and size. Therefore, our learned bitwidths is network structure as well as data dependent.\"}",
"{\"comment\": \"Dear authors,\\n\\nIn our previous paper (https://arxiv.org/abs/1905.13082) we proved the effectiveness of another approach besides the ones you cited to select the quantization parameters.\\nIn that work, we present a novel end-to-end methodology for enabling the deployment of low-error deep networks on microcontrollers. To fit the memory and computational limitations of resource-constrained edge-devices, we exploit mixed low-bitwidth compression, featuring 8, 4 or 2-bit uniform quantization, and we model the inference graph with integer-only operations. Our approach aims at determining the minimum bit precision of every activation and weight tensor are given the memory constraints of a device.\\nThis is achieved through a rule-based iterative procedure, which cuts the number of bits of the most memory-demanding layers, aiming at meeting the memory constraints. After a quantization-aware retraining step, the\\nfake-quantized graph is converted into an inference integer-only model by inserting the Integer Channel-Normalization (ICN) layers, which introduce a negligible loss as demonstrated on INT4 MobilenetV1 models. We report the latency-accuracy evaluation of mixed-precision MobilenetV1 family networks on an STM32H7 microcontroller. Our experimental results demonstrate an end-to-end deployment of an integer-only Mobilenet network with Top1 accuracy of 68% on a device with only 2MB of FLASH memory and 512kB of RAM, improving by 8% the Top1 accuracy with respect to previously published 8-bit implementations for microcontrollers.\", \"title\": \"Mixed precision quantization through a rule-based approach\"}"
]
} |
SJxpsxrYPS | PROGRESSIVE LEARNING AND DISENTANGLEMENT OF HIERARCHICAL REPRESENTATIONS | [
"Zhiyuan Li",
"Jaideep Vitthal Murkute",
"Prashnna Kumar Gyawali",
"Linwei Wang"
] | Learning rich representation from data is an important task for deep generative models such as variational auto-encoder (VAE). However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variations for top-down generation is compromised. Motivated by the concept of “starting small”, we present a strategy to progressively learn independent hierarchical representations from high- to low-levels of abstractions. The model starts with learning the most abstract representation, and then progressively grow the network architecture to introduce new representations at different levels of abstraction. We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark datasets using three disentanglement metrics, including a new metric we proposed to complement the previously-presented metric of mutual information gap. We further present both qualitative and quantitative evidence on how the progression of learning improves disentangling of hierarchical representations. By drawing on the respective advantage of hierarchical representation learning and progressive learning, this is to our knowledge the first attempt to improve disentanglement by progressively growing the capacity of VAE to learn hierarchical representations. | [
"generative model",
"disentanglement",
"progressive learning",
"VAE"
] | Accept (Spotlight) | https://openreview.net/pdf?id=SJxpsxrYPS | https://openreview.net/forum?id=SJxpsxrYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"cQL0CNpLeP",
"BJxLfm2wsH",
"r1l9QfnwoB",
"H1gEzb3DoS",
"S1xo6JQzjS",
"rJg_917MsB",
"HkxsYDz-qB",
"Bkxu1H8y9B",
"ryx1b4oaYH",
"BJe7sjL9tB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798751006,
1573532430094,
1573532193831,
1573531916497,
1573167042637,
1573166991929,
1572050819266,
1571935456120,
1571824630973,
1571609499168
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2518/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2518/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2518/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2518/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2518/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2518/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2518/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2518/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2518/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a novel way to learn hierarchical disentangled latent representations by building on the previously published Variational Ladder AutoEncoder (VLAE) work. The proposed extension involves learning disentangled representations in a progressive manner, from the most abstract to the more detailed. While at first the reviewers expressed some concerns about the paper, in terms of its main focus (whether it was the disentanglement or the hierarchical aspect of the learnt representation), connections to past work, and experimental results, these concerns were fully alleviated during the discussion period. All of the reviewers now agree that this is a valuable contribution to the field and should be accepted to ICLR. Hence, I am happy to recommend this paper for acceptance as an oral.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Re: Review #3\", \"comment\": \"Thank you for your positive and constructive comments.\\n\\nAs suggested, we conducted ablation studies to investigate the performance of our implementation strategies, i.e. \\u201cfade-in\\u201d and pre-trained KL penalty. The primary purpose of these implementation strategies is to improve the stability of the training, so as to avoid problems such as gradient explosion when adding new layers. Therefore, we focused our study on the effect of these implementation strategies on successful training rates. The results below are summarized from a total of 15 experiments.\\n\\nNo strategies | fade-in only | pre-KL only | both\\n0.0 | 0.667 | 0.733 |0.867\\n\\nAs shown, both implementation strategies helped improve the training stability of progressive learning. We have added this result and discussion in Appendix C.\\n\\nAs to the varying weights for each ladder layer, we think it is an interesting approach to investigate. We conducted a preliminary experiment on 3DShapes dataset, in which we \\u201cfade-in\\u201d each layer from 0 to different maximum weights ([10,5,1] for [z3,z2,z1]) in parallel. No obvious improvement has been observed compared to vanilla VLAE in terms of both disentanglement (MIG = 0.41, MIG-sub = 0.55, when beta=10, which is the best beta for VLAE in our experiments) and hierarchical representation.\\n\\nWe also added multiple closer comparisons to VLAE in appendix B, including training and generating a new network on MNIST following the same process as described in Fig 5 of [1] and a detailed quantitative comparison of the mutual information between data and latent codes at different depths. These new results further demonstrate the advantages of the presented methods.\\n\\n[1] Learning Hierarchical Features from Generative Models, Zhao et al., ICML 2017\"}",
"{\"title\": \"Re: Review #2\", \"comment\": \"Thanks for reviewing our work and the constructive comments.\\n\\nTo further assess the information flow during progressive learning, as suggested, we conducted experiments on hierarchical settings with different combinations of the number of layers L and number of latent dimensions z_dim in each layer. Each experiment was repeated 3 times with random initializations, from which the mean and the standard deviation of mutual information I(x;z_l) were computed. Here we present the results on 3DShapes dataset.\\n\\nL=2, z_dim=3\\nProgressive step | I(x;z2) | I(x;z1) | total I(x;z)\\n 0 | 10.68\\u00b10.19 | | 10.68\\u00b10.19\\n 1 | 7.22\\u00b10.30 | 5.94\\u00b10.26 | 12.88\\u00b10.20\\n\\nL=3, z_dim=2\\nProgressive step | I(x;z3) | I(x;z2) | I(x;z1) | total I(x;z)\\n 0 | 10.16\\u00b10.13 | | | 10.16\\u00b10.13\\n 1 | 9.76\\u00b10.05 | 7.36\\u00b10.10 | | 13.00\\u00b10.02\\n 2 | 6.83\\u00b11.37 | 6.66\\u00b10.17 | 5.80\\u00b10.41 | 13.07\\u00b10.02\\n\\nL=4, z_dim=1\\nProgressive step | I(x;z4) | I(x;z3) | I(x;z2) | I(x;z1) | total I(x;z)\\n 0 | 4.89\\u00b10.03 | | | | 4.89\\u00b10.03\\n 1 | 4.77\\u00b10.04 | 3.55\\u00b10.04 | | | 8.14\\u00b10.09\\n 2 | 4.66\\u00b10.04 | 3.75\\u00b10.04 | 2.70\\u00b10.10 | | 10.67\\u00b10.09\\n 3 | 4.55\\u00b10.11 | 3.53\\u00b10.35 | 2.80\\u00b10.19 | 2.17\\u00b10.14 | 11.72\\u00b10.03\\n\\nAs shown, for all hierarchical settings, there is a clear descending order of the information amount in each layer, which aligns with the motivation of progressive learning. Besides, the information tends to flow from previous layers to new layers, suggesting a disentanglement of latent factors as new latent dimensions are added to the network, similar to what we presented (Fig 5) in the paper. We have added these results along with additional results on MNIST in Appendix D.\\n\\nLast but not least, we have modified our paper to better formalize the two statements as suggested.\"}",
"{\"title\": \"Re: Additional comments\", \"comment\": \"Thanks for the additional comments.\\n\\nFor the results from 3DShape in Figs 4 and 5, the dataset has in-total 6 generative factors while the VLAE/pro-VLAE being tested have 9 latent dimensions available. In this case, an ideal disentanglement should result in only 6 active latent-dimensions and 3 latent dimensions encoding nothing. This was achieved by the presented method but not VLAE, highlighting the improvement brought by the presented progressive learning method which is also quantitatively verified by the metrics included in Fig 4. In other words, the outcome of 3 \\u201cinactive\\u201d dimensions in Figs 4-5 actually is a desired outcome and demonstrated the advantage rather than disadvantage of the presented method. To further address the reviewer\\u2019s concern, we carried out additional experiments on 3DShapes with the presented pro-VLAE using only six latent codes (see appendix D), both in the form of two layers of three-dimensional latent codes and three layers of two-dimensional latent codes. In either case, the presented model obtained a similar amount of total information and no layer was empty.\\n\\nAs suggested, for results in Fig 4 and Fig 6, we added 1) the measure of mutual information for each layer as well as 2) the comparison to VLAE model in the Appendix B. \\n\\n3D shapes, L=3, each z_i has 3 dimensions\\n I(z3;x) | I(z2;x) | I(z1;x) | total I(z;x)\\nVLAE 4.41 | 4.69 | 5.01 | 12.75\\npro-VLAE 6.94 | 6.07 | 0.00 | 13.02\\n\\nMNIST, L=3, each z_i has 3 dimensions\\n I(z3;x) | I(z2;x) | I(z1;x) | total I(z;x)\\nVLAE 8.28 | 8.89 | 7.86 | 11.04\\npro-VLAE 9.83 | 8.24 | 6.28 | 10.93\\n\\nIn 3DShapes, as explained above, the latent codes in the shallowest layer of the pro-VLAE were not \\u201cactive\\u201d \\u2014 this is desired given that there are only six true factors in the dataset and they have been completely captured in the first two layers of the latent codes. In MNIST, I(z1;x) provides quantitative evidence that, in Fig 6, some style information is indeed encoded in that layer. It further confirms that \\u201cinactive\\u201d latent codes as observed in Figs 4-5 in 3DShape was not a general result but a specific outcome in the case when the true generative factors have been completely captured in the earlier layer. Overall, compared to VLAE, the presented method achieves a clearer descending order of allocation of information for each layer owing to the properties of progressive learning.\\n\\nThe quantitative results of mutual information presented in the main text of the paper (Fig 5) were mainly intended to track the progression of the progressive learning, which do not apply to VLAE. Therefore, we added the above results of mutual-information in section B of the Appendix as a closer comparison with VLAE.\"}",
"{\"title\": \"Re: Review1 (part1)\", \"comment\": \"We would like to thank the reviewer for the comments. Below we clarify the key confusions and questions raised.\\n\\nWe would like to clarify that the overall purpose of this paper, motivated by \\u201cstarting small\\u201d, is to progressively learn disentangled representations from high- to low-levels of abstractions. Similar to the hierarchical representations in the VLAE model [1], we define the representations at different levels of abstractions (corresponding to different hierarchy of the network). However, as the key contribution of this paper, we learn these representations in a progressive manner from high- to low-levels of abstraction. Our point, therefore, is not that learning hierarchical representations will help learning disentangled representations. Instead, we argue and demonstrate that the presented progressive learning strategy, incrementally extracting generative factors from high- to low-levels of abstraction, will help learning disentangled representations. \\n\\nWe would like to clarify that MNIST images in Fig 5 in [1] and MNIST images in this paper (Fig 6) are generated with two different processes. In Fig 5 in [1], the images were generated by traversing each dimension of the two-dimensional latent code in one layer along with random sampling from other layers. Therefore, the images generated appeared to change smoothly along the x and y axis. In comparison, MNIST images in this paper (Fig 6) were generated by random sampling in one layer while fixing the latent code in all the other layers. This generation strategy was identical to that used in generating Figure 6 and Figure 7 in [1]. To clear the reviewer\\u2019s concern, we have generated new MNIST examples following the same strategy as used in Fig 5 in [1] and add the results to the supplemental material. As shown, compared to Fig 5 [1], the generated images on MNIST appears to be better traversing across the digit type, while similar in generating other variations such as width and stroke.\\n\\nIndeed, the metric proposed in [2] shared a similar motivation to the metric presented in this paper. However, the approaches to the calculation of these two metrics were entirely different. The metric in [2] was calculated based on training a regressor function f, which is affected by the choice of regressors and its hyperparameters. The drawbacks of this type of approaches have been discussed in [4]. In comparison, the presented approach of metric calculation, similar to MIG, does not involve additional classifiers and is therefore unbiased for the hyperparameter settings. We have modified our paper to discuss [2] and its relation with the proposed metric. \\n\\nWe were uncertain about the criticism that \\u201cthe proposed metric requires ground truth for the generative factors, so its usage is limited and not practical.\\u201d To our knowledge, all recent metrics [2][4][5][6] proposed for measuring disentanglement require ground truth factors. We believe that the presented metric presents a necessary supplement to MIG to capture what is not measured therein, and we were able to demonstrate that in our experiments in Fig 3. \\n\\n[4] Disentangling by factorising. Hyunjik et. al. ICML 2018.\\n[5] Isolating sources of disentanglement in variational autoencoders. Chen et. al. NeurIPS 2018.\\n[6] beta-vae: Learning basic visual concepts with a constrained variational framework. Higgins et. al. ICLR 2017\"}",
"{\"title\": \"Re: Review1 (part2)\", \"comment\": \"In terms of the connection with [3], at a philosophical level, we acknowledge that our work and [3] loosely share a similar motivation that the most abstract representations can be learned first before others. However, the two approaches are entirely different, two of the most important differences being 1) the definition of the \\u201ccapacity\\u201d of the VAE and 2) the progressive learning strategy. First, the capacity in [3] was defined by the information capacity of the latent code and controlled by the KL divergence of the latent distribution to an isotropic Gaussian prior (nats). In comparison, the capacity in this paper is defined and controlled by the trainable parameters and growable architectures of the neural network. Second, the capacity in [3] was increased by gradually loosening the constraint on the KL divergence loss. In comparison, we propose a completely different strategy of progressive learning that incrementally increase the \\u201ccapacity\\u201d of the network by growing additional latent variables and new parameters of the model in the ladder connections. This was inspired by recent works in growing of neural network\\u2019s architectures (which we extend to growing the latent codes) and is completely different from the approach presented in [3].\\n\\nWe indeed attempted to demonstrate how the considered methods are affected by the hyperparameters beta, hence the results presented in Figs 2 and 3. We acknowledge that the presented method is not suitable for a high value of beta. We reason that, since the presented progressive learning strategies already promote disentangling, a high value of beta may over-promote disentangling at the expense of reconstruction quality. We have revised the paper to add this discussion. We however would like to note that, as shown in Fig 2, the presented method outperforms the baseline models with a clear margin in the majority of the hyperparameters tested.\\n\\nIn the original submission, we designed comparison studies with vanilla VAE, VLAE, the teacher-student model, and the presented method, with the intention for an ablation study that shows the effect of the two individual components: i.e., the hierarchical representations, and the progressive training strategy. We are currently working to include additional ablation studies that investigate the effect of the implementation strategies including the fade-in strategy, which we will include once completed. \\n\\nLast but not least, many thanks for point out the missing definition of v_k. We have added it in the paper.\"}",
"{\"title\": \"Additional comments\", \"comment\": \"I have additional comments on the proposed model.\\n\\nBased on Figures 4, 5, and 6, it seems that the first layer is not as well trained as VLAE. Therefore, as shown in Figure 5, it is necessary to add an experiment to compare mutual information of each layer with VLAE.\\n\\nThe authors claim that in the experiment with MNIST, the first layer learned the factors related to letter style. However, in Figure 6, it is difficult to determine whether the first layer is successfully trained. For the results of the experiment on MNIST, it would be helpful to measure the amount of mutual information for each layer.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a method for training Variational Ladder Autoencoder (VLAE) using a progressive learning strategy. In comparison to the generative model using a progressive learning strategy, the proposed method focuses not only on the image generation but also on extracting and disentangling hierarchical representation.\\n\\nOverall, I think the purpose of this paper should be written clearly. It is not clear whether the purpose is learning the disentangled representation or the hierarchical representation. In my opinion, I think the focus of the proposed method lies in the hierarchical representation through progressive learning, but the experiments are involved more with disentanglement. Furthermore, I believe the authors need to explain the relationship between hierarchical representation and disentangled representation. In particular, it is not clear why learning hierarchical representation is helpful for disentangled representations.\\n\\nThe qualitative experiments are not convincing since the proposed model looks worse in both the reconstruction and hierarchical disentanglement for MNIST dataset than the base model VLAE, as shown in Figure 5 in [1]. Regarding the metric used in the experiments, the authors mention that the proposed disentanglement metric MIG-sup is what they first developed for one-to-one property, but it seems that it was already proposed in [2]. In addition, the proposed metric requires ground truth for the generative factors, so its usage is limited and not practical.\\n\\nI think this work is similar to [3] in that both learn disentangled representations by progressively increasing the capacity of the model. I think the authors need to discuss about this work.\\n\\nAblation studies should be presented to verify the individual effects of the progressive learning method and implementation strategies on performance, respectively.\\n\\nIn Figures 2 and 3, the performance gap in the reconstruction error of the proposed method is greater than the base model when beta changes from 20 to 30. Therefore, it is necessary to show if it is robust against the hyperparameter beta. \\n\\nThere is no definition of v_k in Equation (12), so it is difficult to understand the proposed metric clearly.\\n\\nIn summary, I do not think the paper is ready for publication. \\n\\n[1] Learning Hierarchical Features from Generative Models, Zhao et al., ICML 2017\\n[2] A Framework for the Quantitative Evaluation of Disentangled Representations, Eastwood et al., ICLR 2018\\n[3] Understanding disentangling in beta-VAE, Burgess et al., NIPS 2017 Workshop on Learning Disentangled Representations\\n\\n\\n-------------------------------------\", \"after_rebuttal\": \"Thanks for the revision of the paper and the additional experiments.\\n\\nThe authors' comments and further experiments address most of my concerns. In particular, new experiments show that pro-VLAE performs quantitatively and qualitatively better than VLAE. Also, Figure 10 and the result of the information flow experiment using MNIST show that the first layer learns the intended representations properly.\\n\\nI appreciate the authors\\u2019 efforts put into the rebuttal, and the results of additional experiments are reasonably good. 
Therefore, I increase my final score to 6: Weak Accept.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduce pro-VLAE, an extension to VAE that promotes disentangled representation learning in a hierarchical fashion.\\nEncoder and decoder are made of multiple layers and latent variables are not only present in the bottleneck but also between intermediate layers; in such a way, it is possible to encode information at different scales, hence the hierarchical representation. Latent variables can be learned in an incremental way, by making them visible to the whole model progressively, so that as more latent variables become available, they encode lesser and lesser abstract factors.\\n\\nExperiments are carried out on two benchmarks for disentanglement with annotations and pro-VLAE is compared to other methods in the state of the art.\\nHere, the authors introduce an extension of the Mutual Information Gap (MIG) metric, namely MIG-sup: it penalizes when multiple generative factors are encoded in the same latent variable. Qualitative results are also shown for 2 non-annotated datasets.\\n\\nPROS\\n- The idea is fresh, well explained and experiments are sufficiently thorough. The novelty introduced is enough, provided that not much literature has explored progressive representation learning in the context of disentanglement.\\n- Results suggest that this is a promising direction for disentangling representations as pointed out by the authors in the conclusions.\\n- We appreciated the smart solutions for what concerns the implementation and training stabilization.\\n\\nCOMMENTS/IMPROVEMENTS\\nTo improve the quality of the paper, consider the following comments:\\n\\n- For the sake of completeness, experiments on Information flow should be also quantitative: it would be interesting to see how the information is captured by the latent variables on average on multiple runs, possibly trying different numbers of latent variables z_i.\\n- In sec 3.1 \\\"z from different abstraction\\\" is too vague and should be better formalized.\\n- In sec 2: \\\"the presented progressive learning strategy provides an entirely different approach to improve disentangling that is ORTHOGONAL to these existing methods and a possibility to augment them in the future.\\\": you should change to 'different'.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes an approach to incrementally learn hierarchical representations using a variational autoencoder (VAE). This is shown to be useful qualitatively and quantitatively in terms of disentanglement in the representations.\\n\\nTo learn the hierarchy, the authors use a ladder architecture based on variational ladder autoencoder (VLAE) but incrementally activate the lateral connections across the layers at varying depth of the encoder and the decoder. A vanilla VAE is first trained. Followed by adding stochastic later connections and then retraining the updated architecture. This combined with beta-VAE inspired upweighting of the KL term leads to learning a hierarchy of representations. Each level of the hierarchy, the representations are disentangled. \\n\\nInspired by progressive GANs, the authors employ ````\\\"fade-out\\\" when traversing the hierarchy. \\n\\nThe authors also introduce a new metric to capture the one-to-one mapping of the ground truth factors to the latent dimensions.\\n\\nAblation studies by varying/removing fadeout compared to incremental learning will be useful. Can fade-out (different weighting of each level) be added directly to VLAE without incremental learning? \\n\\nOverall the paper is well motivated and easy to read. The results look impressive and the learned hierarchy and latent traversals are convincing. A more thorough comparison with VLAE will make the paper stronger.\"}"
]
} |
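The rebuttal thread recorded above leans heavily on the Mutual Information Gap (MIG) and the proposed MIG-sup variant to quantify disentanglement. The sketch below illustrates how a MIG-style gap can be computed from ground-truth factors and latent codes. It is a hedged illustration rather than the authors' implementation: the `mig` helper, the equal-width binning, and `n_bins=20` are assumptions introduced here.

```python
# Minimal sketch of a MIG-style disentanglement score (illustrative, not the paper's code).
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(factors, latents, n_bins=20):
    # factors: (N, K) discrete ground-truth factors; latents: (N, D) continuous codes.
    K, D = factors.shape[1], latents.shape[1]
    # Discretize each latent dimension into equal-width bins.
    binned = np.stack(
        [np.digitize(latents[:, d],
                     np.histogram_bin_edges(latents[:, d], bins=n_bins)[1:-1])
         for d in range(D)], axis=1)
    # Mutual information (in nats) between every factor and every latent dimension.
    mi = np.array([[mutual_info_score(factors[:, k], binned[:, d])
                    for d in range(D)] for k in range(K)])
    # Factor entropies, H(v_k) = I(v_k; v_k), used for normalization.
    h = np.array([mutual_info_score(factors[:, k], factors[:, k]) for k in range(K)])
    top2 = np.sort(mi, axis=1)[:, -2:]  # two largest MI values per factor
    # MIG: normalized gap between the best and second-best latent, averaged over factors.
    # The MIG-sup variant discussed above instead takes the analogous gap over factors
    # for each latent dimension (i.e., the same computation on the transposed MI matrix).
    return float(np.mean((top2[:, 1] - top2[:, 0]) / h))
```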
r1g6ogrtDr | Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data | [
"David W. Romero",
"Mark Hoogendoorn"
] | Equivariance is a nice property to have as it produces much more parameter efficient neural architectures and preserves the structure of the input through the feature mapping. Even though some combinations of transformations might never appear (e.g. an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations. Contrarily, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data and generalize this notion to act on groups consisting of multiple symmetries. We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10. | [
"Equivariant Neural Networks",
"Attention Mechanisms",
"Deep Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=r1g6ogrtDr | https://openreview.net/forum?id=r1g6ogrtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6Ffz514vcH",
"FHDVIbJfIr",
"SyxhYn43or",
"SJxppfLdjH",
"HyeUqyfDjH",
"SJlj21lIoS",
"S1eO4fn4ir",
"HkeuQBi4iS",
"HyezazUxjS",
"SkeC8XEloS",
"HJl2HyEgoB",
"SJlMDm7eoS",
"rJgHp0GgjH",
"H1eJfdke5r",
"Bygx8wipFS",
"ryguPqudYH"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750976,
1573829763554,
1573573316559,
1573490574280,
1573416883440,
1573335600417,
1573332256486,
1573049017931,
1573040981874,
1573039939734,
1573036889750,
1573035709059,
1571973127139,
1571825479531,
1571486303544
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2517/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2517/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2517/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"re\", \"comment\": \"I recommend read this blog about software development in warehousing https://mlsdev.com/blog/technology-in-warehousing\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes an attention mechanism for equivariant neural networks towards the goal of attending to co-occurring features. It instantiates the approach with rotation and reflection transformations, and reports results on rotated MNIST and CIFAR-10. All reviewers have found the idea of using self-attention on top of equivariant feature maps technically novel and sound. There were some concerns about readability which the authors should try to address in the final version.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of changes during the rebuttal period\", \"comment\": \"Dear reviewers,\\n\\nOnce again, thank you very much for your helpful and thorough reviews, which resulted in a largely improved version of our work. \\n\\nIn this comment we provide a brief summary of the changes done to our work during this rebuttal time.\\n\\n*Changes:*\\n----------------\\n- We modified the notation utilized across the document such that: (1) it is easier to read and (2) easier to connect to existing literature in the topic.\\n- We included a visual representation of how our algorithm works.\\n- We modified definitions and restructured the paper such that its readability and congruence was largely improved.\\n- Based on the discussions with Reviewer 2, we included in the appendix an additional section with a meticulous discussion about how co-occurrent attention is obtained via the proposed framework.\\n- Additional relevant references were incorporated.\\n\\nThank you very much for your time and your attention.\\n\\nBest regards,\\nThe authors.\"}",
"{\"title\": \"Illustrated explanation\", \"comment\": \"Dear reviewer 2,\\n\\nHere's a link to a graphical explanation of the attention mechanism:\", \"https\": \"//www.dropbox.com/s/h9dbzija8ml2646/attention_graphical.png?dl=0\\nThere are 3 matrices of 4x4 attention weights one for each of eye, mouth, eyes.\\n\\n*Connections to our previous response:*\\nThe equivariance property of the mapping assures that all of the $\\\\Lambda$ feature maps (here mouth, nose, eyes) move \\\"synchronously\\\" as a function of a rotation in the input. Resultantly, there is already an implicit constraint regarding how $f_{R}$ behaves along the $\\\\lambda$ axis. Consequently, if one rotates the input of a layer, all the $\\\\Lambda$ attention instances beautifully synchronously rotate their \\\"attention mask\\\" accordingly.\\n\\n*Extension on (b):*\\nWe will add a section in the appendix showing that a linear mapping of the form $xA$ is equivariant to cyclic permutations iff $A$ is a circulant matrix. \\n\\n**EDIT:** We found that it has already been proven that a matrix $A$ is circulant if and only if it is equivariant with respect to cyclic shifts of the domain and hence it is unnecessary to introduce a self-developed proof. We will introduce the corresponding references in our derivations. \\n\\n\\nWe hope that our answer sheds more light into the functionality of our algorithm.\\n\\nIf you have any comment about this or the other questions stated above, we are happy to hear them :)\\n\\nBest regards,\\nThe authors.\"}",
"{\"title\": \"Can you illustrate it?\", \"comment\": \"Thanks for the quick response. Unfortunately I cannot wrap my head around your explanation. I even discussed it with a colleague and we couldn't figure it out together.\\n\\nCould you try to illustrate, using the toy example from Fig. 2, how attention acts and what values the attention matrices A would assume in the case of this toy example? In other words, illustrate graphically (a) what is the dimensionality of the attention tensor in a given layer, (b) how are weights shared due to cyclical constraints, (c) based on what tensor of (d) what dimensionality is it computed, and (e) how does it manipulate the activations. \\n\\nIt seems like a diagram with boxes and (annotated) arrows would do the job way better than pages of math.\"}",
"{\"title\": \"Response to most crucial point\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you very much for your fast response. \\n\\nExcuse us for not responding your question yet. Our intention was to first modify the notation in the paper itself so that it is easier to address your question.\", \"q\": \"The motivation for the attention mechanism (as discussed in the introduction and illustrated in Fig. 2) seems to be to find patterns of features which commonly get activated together (or often co-occur in the training set). However, according to Eq. (9), attention is applied separately to orientations of the same feature ( is indexed by i, the channel dimension), and not across different features. Since the attention is applied at each spatial location separately, such mechanism only allows to detect patterns of relative orientations of the same feature appearing at the same spatial location. The motivation and utility of such formulation is unclear, as it appears to be unable to solve the toy problem laid out in Fig. 2. Please clarify how the proposed mechanism would solve the toy example in Fig. 2.\", \"a\": \"We understand your concern. In fact a direct approach should address the problem illustrated in the introduction and Fig. 2 by applying an attention mechanism that acts simultaneously on $(r, \\\\lambda)$ as you correctly describe. However, we are able to simplify the problem by taking advantage of the equivariance property of the network.\\n\\nConsider $p$, the input to of a roto-translational conv. $f_{R}: Z^{2} \\\\rightarrow \\\\Theta \\\\times Z^{2} \\\\times \\\\Lambda $ as outlined in eq. 3 in the new submission, and $\\\\Theta$ be the set of rotations of 90 degrees. i.e. $\\\\text{dim}(\\\\Theta)=4$ (in general $\\\\Theta$ is also included in the domain of $f_{R}$ but we remove it for simplicity; i.o.w. $f_{R}$ is located at the first layer of a neural network). Now, let $A = \\\\{\\\\{f_{R}(p)(u)\\\\}_{r=1}^{4}\\\\}_{\\\\lambda=1}^{\\\\Lambda}$ be the matrix of dimension $4 \\\\times \\\\Lambda$ at a certain position $u$, consisting of the $4$ oriented responses for each of the $\\\\Lambda$ learned representations. \\n\\nDue to the fact that the the vectors $\\\\{f_{R}(p)(u)\\\\}_{r=1}^{4}(\\\\lambda)$ permute cyclically as a result of $f_{R}$, it is mandatory, as outlined in the paper, to ensure equivariance to cyclic permutations in each $A(\\\\lambda)$. At first sight, one might think that there's no connection between multiple $\\\\lambda$'s in $A$, and, therefore, in order to exploit co-occurences, one must impose additional constraints along the $\\\\lambda$ axis. Extremely interestingly, however, is the fact that not just one mapping but the entire layer (i.e. all $\\\\Lambda$ mappings) behaves predictively as a result of equivariance and, hence, there is already an implicit restriction between mappings along $\\\\lambda$. \\n\\nConsider, for instance, the input $\\\\theta_{i} p$, a $\\\\theta_{i}$-rotated version of $p$. By virtue of the equivariance property of $f_{R}$, we have (locally) that $f_{R}(\\\\theta_{i} p) = \\\\mathcal{P}^{i}(f_{R}(p))$ and thus $f_{R}(\\\\theta_{i} p)(u,r,\\\\lambda)= \\\\mathcal{P}^{i}(f_{R}(p)(u, r, \\\\lambda)) \\\\forall \\\\lambda \\\\in \\\\Lambda$. In other words, the equivariance property of the mapping assures that all of the $\\\\Lambda$ feature maps move \\\"synchronously\\\" as a function of a rotation in the input. 
Resultantly, there is already an implicit constraint regarding how $A$ behaves along the $\\\\lambda$ axis. Note that if we have an equivariant attention mechanism $\\\\mathcal{A}$, $\\\\mathcal{A}(f_{R}(\\\\theta_{i} p))(u,r,\\\\lambda)= \\\\mathcal{P}^{i}(\\\\mathcal{A}(f_{R}(p))(u, r, \\\\lambda)) \\\\forall \\\\lambda \\\\in \\\\Lambda$ must hold as well. As a result, all of the $\\\\Lambda$ attention mechanisms applied along $r$ must move \\\"synchronously\\\" as a function of a rotation in the input. \\n\\nAs a matter of fact and due to computational reasons, we utilize in our implementation a matrix with the same form of $A$ to store the coefficients of our attention mechanism (since each $\\\\tilde{A}$ in $\\\\mathcal{A_{\\\\lambda}^{C}}$ is actually fully defined by a vector). Very interesting is it to see that if one rotates the input of a layer, all the $\\\\Lambda$ instances $\\\\mathcal{A_{\\\\lambda}^{C}}$ beautifully synchronously rotate their \\\"attention mask\\\" accordingly. \\n\\nWe hope that our answer sheds a bit more light into the behavior of our algorithm.\\n\\nIf you have any comment about this or the other questions stated above, we are happy to hear them :)\\n\\nBest regards,\\nThe authors.\"}",
"{\"title\": \"Most crucial point not addressed\", \"comment\": \"Please try to address my main concern, ideally during the discussion period in case I am misunderstanding something:\\n\\n1. The motivation for the attention mechanism (as discussed in the introduction and illustrated in Fig. 2) seems to be to find patterns of features which commonly get activated together (or often co-occur in the training set). However, according to Eq. (9), attention is applied separately to orientations of the same feature ( is indexed by i, the channel dimension), and not across different features. [...] The motivation and utility of such formulation is unclear, as it appears to be unable to solve the toy problem laid out in Fig. 2. Please clarify how the proposed mechanism would solve the toy example in Fig. 2.\"}",
"{\"title\": \"On the updated revision\", \"comment\": \"Dear reviewers,\\n\\nWe have submitted a new revision of our work. In this comment, we summarize the performed changes.\\n\\n* Mayor Changes *\\n---------------------------\\n- We modified the notation utilized across the document such that: (1) it is easier to read and (2) easier to connect to existing literature in the topic ( Based on Rev. 1, 2, 3)\\n- We reduced the extension of the paper (Based on Rev. 1)\\n- Substantial changes in the *Learning equivariant neural networks* subsection (Based on Rev. 3)\\n\\n* Other Changes *\\n--------------------------\\n- Clearer definitions provided (Based on Rev. 2, 3)\\n\\n* Questions to the reviewers*\\n------------------------------------------\\n1. Regarding the visualization of the model: \\nReviewer 2 correctly signalized that we lack providing insights into the model. Unfortunately, we are having a hard time finding a good way to do so and we would like to ask the reviewers for further guidance.\", \"possibilities_we_have_thought_of\": \"- We could show the effect of rotating the input and analyzing differences in the output. However, since it is proven that equivariance holds, this provides no further insights into the network. \\n- We could show the effect of the attention mechanism in the model, i.e. maps before attention and after attention. However, since the model has been optimized to utilize softmax in the process, we believe that showing the effects of the softmax on feature maps might provide few insight as well. \\n- As co-attentive and conventional networks are solving two different optimization problems, we find it very difficult to provide a fair comparison between learned feature representations (since they are not restricted to be similar). \\n\\nWe would sincerely appreciate any further thoughts or comments on this issue.\\n\\n2. Regarding the performance of the model:\\nReviewer 2 signalized his concern about the obtained results. In order to evaluate the real contribution of our method, we have performed further experiments with hyperparameter optimization. By doing so, we have obtained better results on a-P4CNN (around 1.85% test error). We believe that by performing hyperparameter optimization for the remaining models we will boost the performance measurements obtained so far.\", \"our_question_is\": \"Are these experiments a good way to address these concerns? If so, we will continue with our experiments and update our results in a subsequent submission. If this is not the case, we would like to ask you for ways to address them properly.\\n\\n* Expected additions in the future *\\n--------------------------------------------------\\n- We are working on proving that a mapping is equivariant to cyclic permutations iff it possesses the form of a circulant matrix. \\n- Visualization of the method itself will be provided\\n- Hopefully visualizations and comparisons on the learned feature representations. \\n\\n*Important*\\n------------------\\nIf we missed something you find important in our update. Please let us know. \\n\\n\\nIn advance, thank you very much for your time. We are looking forward to hearing from you.\\n\\nBest regards,\\nThe authors.\"}",
"{\"title\": \"First thoughts\", \"comment\": \"Dear reviewer 1,\\n\\nFirst of all thank you very much for thorough review and, of course, for your time. Thank you very much for supporting our work as well.\\n\\n1. *Regarding your observations*:\\n\\n1.1: \\\"The one shortcoming of the paper is that it takes a simple idea and makes it somewhat difficult to follow through cumbersome notation and over-mathmaticization. The ideas presented would be much clearer as an algorithm or more code-like representation as opposed to as equations.\\\"\", \"a\": \"This is very related to the last question. This will be addressed by following the previous statement.\\n\\nOnce again, thank you very much for your time, attention and extremely useful commentaries. Please let us know if you have any further questions or comments. We are happy to respond them all :) \\n\\nBest regards,\\nThe Authors.\\n\\nTaco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990\\u20132999, 2016.\\nIan Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT Press, 2016\\nJunying Li, Zichen Yang, Haifeng Liu, and Deng Cai. Deep rotation equivariant network. Neurocomputing, 290:26\\u201333, 2018\"}",
"{\"title\": \"First thoughts [Continuation]\", \"comment\": \"2.3: The exposition and notation in section 3.1 is very hard to follow and requires substantial improvement. For instance, the sections \\\"Attention and self attention\\\" and \\\"Compact local self attention\\\" seem to abstract from the specific case and use x and y, but it is unclear to me what x and y map to specifically. Maybe also provide a visualization of how exactly attention is applied.\", \"a\": \"I agree that it would be a good practice. However, note that we utilized the same evaluation measurements as the corresponding baselines (e.g. we provide std.dev. in rot-MNIST when using G-Convs, just as Cohen and Welling, (2016) did). To provide a fair comparison in CIFAR-10, we would need to re-do multiple experiments of the baselines as well. If time suffices, we will perform these experiments as well.\\n\\nOnce again, thank you very much for your time, attention and extremely useful commentaries. Please let us know if you have any further questions or comments. We are happy to respond them all :) \\n\\nBest regards,\\nThe Authors.\", \"q\": \"It would be good to provide the standard deviation for the reported results on CIFAR-10 to see if the improvement is significant.\"}",
"{\"title\": \"First thoughts\", \"comment\": \"Dear reviewer 2,\\n\\nFirst of all thank you very much for thorough review and, of course, for your time. Thank you very much for supporting our work as well.\\n\\n1. Regarding *weaknesses*\", \"q\": \"No visualisation or insights into the attention mechanism are provided\", \"a\": \"Our implementation is based on the implementations found online for each of the baselines. For DREN (Li et. al., 2018) we utilize the code provided for Caffe (rot-MNIST) and Tensorflow (CIFAR) (https://github.com/ZJULearning/DREN). The results reported in Table 1 are based on these implementations. We utilize their training strategies as faithfully as possible.\\n\\nFor G-CNNs (Cohen and Welling, 2016) we re-implemented those provided in https://github.com/tscohen/gconv_experiments/tree/master/gconv_experiments, since we needed to perform code multiple updates, as it is written in a now obsolete version of chainer. We do utilize GrouPy (https://github.com/tscohen/GrouPy ) with PyTorch support in our experiments (provided by the authors and other contributors). \\n\\nOur reported results emerge from this implementation. We did contact the authors for further information about the experiments with ResNet44. As of now, we are running further experiments based on additional information kindly provided by them and hope to be able to update these results in our the following version of our work. Note that our \\\"implementation problem\\\" boils down to the vanilla ResNet44 and therefore has nothing to do with their proposed method. \\n\\nRegarding DREN nets, Li et. al., (2018) do not report results in CIFAR-10 for fully equiv. networks. This is due to the fact that the performance strongly degrades when adding more isotonic layers (see Li et. al. (2018) for further details). In our experiments we modify their training setting, to allow the same hyperparameter settings to be used for all methods (i.e., vanilla NIN, r_x4, a-r_x4) and so, provide a reliable comparison. This is why the reported values strongly differ in these cases (please see *Training convergence of equivariant networks* in our work for further details). \\n\\nIan Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT Press, 2016\\n\\nJunying Li, Zichen Yang, Haifeng Liu, and Deng Cai. Deep rotation equivariant network. Neurocomputing, 290:26\\u201333, 2018\"}",
"{\"title\": \"First thoughts [Continuation]\", \"comment\": \"Q: In equation 8 (second equality), you have said f_R^l(F^l) = A(f_R^l(F^l)). How can this be true if A is not the identity? Giving the benefit of the doubt, this could just be a typo.\", \"a\": \"Our approach is more general and it is actually extendable to any group producing cyclic permutations as a result of the equivariant mapping. We believe that your way of defining the paper flow is very good and we actually intended to to that in our paper as well: starting from an (1) equiv. net in general, then (2) stating that we use attention on top of it, (3) imposing rules/modifications on the self-attention mechanism used and, finally, (4) adding constraints to the parameters learned in the attention mechanism block.\\nWe hope that after implementing the reviewers observations and utilizing simpler notation, the paper flow will be much clearer and easier to follow and it will be easier to connect to existing literature in the topic.\\n\\nOnce again, thank you very much for your time, attention and extremely useful commentaries. Please let us know if you have any further questions or comments. We are happy to respond them all :) \\n\\nBest regards,\\nThe Authors.\", \"q\": \"I think it would have been easier just say that you are using a roto-translation or p4m equivariant CNN with attention after each convolution. Then you could derive the constraint on the attention matrices to maintain equivariance. It would be easier to follow and make easy connections with existing literature on the topic.\"}",
"{\"title\": \"First thoughts\", \"comment\": \"Dear reviewer 3,\\n\\nFirst of all thank you very much for thorough review and, of course, for your time. \\n\\n1. We consistently obtained from all the reviewers the observation that the notation and technical jargon utilized to explain our approach was excessive. We agree with this. \\n\\nOur actual motivation to utilize the current notation comes from the *Default Notation* section of the conference template's ( https://github.com/ICLR/Master-Template/blob/master/archive/iclr2020.zip ). Here, it is encouraged to use the notation utilized in Godfellow et. al. (2016), which (to some extent) contributes to the excessive technical jargon. Several of the equations appearing in our *Section 2* are indeed slight modifications of equations appearing in the *Convolutional Networks* chapter of Godfellow et. al. (2016), which we subsequently utilize to define our own approach.\\n\\nFurthermore, we accept that we did not introduce relevant terms properly. This is also related to the fact that we understood the provided notation as (intended to be) standard for all submissions and, hence, we did not feel obliged to introduce notation defined in there (e.g. row-vector convention). Naturally, this also contributes to the non-clarity of the derivations.\\n\\nThat said, we accept that the aforementioned facts are not the only reasons contributing to non-clarity. We will work hard at making them much clearer in a subsequent version of our work. We will utilize a simpler, well-introduced notation as well. \\n\\n2. Regarding your arguments:\\n- We will be careful to introduce all terms utilized in the text. e.g. \\\"co-occurrence\\\". \\n- We will add further explanations as to what is to be \\\"optimal in the set of transformations that co-occur\\\". \\n\\n3. Regarding your questions:\", \"q\": \"Now that we have automatic differentiation, is the section on how to work out the gradients in Equations 5-7 really necessary?\", \"a\": \"It is intended to show that parameters receive more information as more weight-tying is utilized. Additionally, it sheds light on the behavior of DREN networks (Li et al., 2018) in the section *Training convergence of equivariant networks*. We will consider perhaps moving this section to the appendix.\\n\\n[1] Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT Press, 2016\\nJunying Li, Zichen Yang, Haifeng Liu, and Deng Cai. Deep rotation equivariant network. Neurocomputing, 290:26\\u201333, 2018\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper describes an approach to applying attention in equivariant image classification CNNs so that the same transformation (rotation+mirroring) is selected for each kernel. For example, if the image is of an upright face, the upright eyes will be selected along with the upright nose, as opposed to allowing the rotation of each to be independent. Applying this approach to several different models on rotated MNIST and CIFAR-10 lead to smaller test errors in all cases.\\n\\nOverall, this is a good idea that appears to be well implemented and well evaluated. It includes an extensive and detailed bibliography of relevant work. The approach seems to be widely applicable. It could be applied to any deep learning-based image classification system. It can be applied to additional transformations beyond rotation and mirroring.\\n\\nThe one shortcoming of the paper is that it takes a simple idea and makes it somewhat difficult to follow through cumbersome notation and over-mathmaticization. The ideas presented would be much clearer as an algorithm or more code-like representation as opposed to as equations. Even verbal descriptions could suffice. The paper is also relatively long, going onto the 10th page. In order to save space, some of the mathematical exposition can be condensed.\\n\\nIn addition, as another issue with clarity, the algorithm has one main additional hyperparameter, r_max, but the description of the experiments does not appear to mention the value of this hyperparameter. It also states that the rotated MNIST dataset is rotated on the entire circle, but not how many fractions of the circle are allowed, which is equivalent to r_max.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"[Update after rebuttal period]\\n\\nWhile I still find the paper somewhat hard to parse, the revision and responses have addressed most of my concerns. I think this paper should be accepted, because it presents a novel and non-trivial concept (rotation-equivariant self attention).\\n\\n\\n[Original review]\\n\\nThe authors propose a self-attention mechanism for rotation-equivariant neural nets. They show that introduction of this attention mechanisms improves classification performance over regular rotation-equivariant nets on a fully rotational dataset (rotated MNIST) and a regular non-rotational dataset (CIFAR-10).\", \"strengths\": [\"States a clear hypothesis that is well motivated by Figs. 1 & 2\", \"Appears to accomplish what it claims as contributions\", \"Demonstrates a rotation-equivariant attention mechanism\", \"Shows that its introduction improves performance on some tasks\"], \"weaknesses\": [\"Unclear how the proposed attention mechanism accomplishes the goal outlined in Fig. 2d\", \"Performance of the authors' evaluations of the baselines is lower than reported in the original papers, casting some doubt on the performance evaluation\", \"The notation is somewhat confusing and cumbersome, making it hard to understand what exactly the authors are doing\", \"No visualisation or insights into the attention mechanism are provided\", \"There are three main issues detailed below that I'd like to see addressed in the authors' response and/or a revised version of the paper. If the authors can address these concerns, I am willing to increase my score.\", \"1. The motivation for the attention mechanism (as discussed in the introduction and illustrated in Fig. 2) seems to be to find patterns of features which commonly get activated together (or often co-occur in the training set). However, according to Eq. (9), attention is applied separately to orientations of the same feature ($A_i$ is indexed by i, the channel dimension), and not across different features. Since the attention is applied at each spatial location separately, such mechanism only allows to detect patterns of relative orientations of the same feature appearing at the same spatial location. The motivation and utility of such formulation is unclear, as it appears to be unable to solve the toy problem laid out in Fig. 2. Please clarify how the proposed mechanism would solve the toy example in Fig. 2.\", \"2. The only real argument that the proposed mechanism is useful are the numbers in Table 1. However, the experimental results for CIFAR-10 are hard to compare to the baselines because of differences in reported and reproduced results. I would appreciate a clarification about the code used (was it published by the authors of other papers?) and discussion of why the relative improvement achieved by the proposed method is not an artefact of implementation or optimisation issues.\", \"3. The exposition and notation in section 3.1 is very hard to follow and requires substantial improvement. For instance, the sections \\\"Attention and self attention\\\" and \\\"Compact local self attention\\\" seem to abstract from the specific case and use x and y, but it is unclear to me what x and y map to specifically. 
Maybe also provide a visualization of how exactly attention is applied.\", \"Minor comments/questions:\", \"If the attention is applied over the orientations of the same feature, why does it improve the performance on Rotated MNIST (which is rotation invariant)?\", \"I assume the attention matrix $A_i$ is different for each layer, because the features in different layers are different and require different attention mechanisms. However, unlike F and K, A is not indexed by layer l.\", \"It would be good to provide the standard deviation for the reported results on CIFAR-10 to see if the improvement is significant.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"[Post-rebuttal update]\", \"Having read the rebuttals and seen the new draft, the authors have answered a lot of my concerns. I am still unsatisfied about the experimental contribution, but I guess producing a paper full of theory and good experiments is a tall ask. Having also read through the concerns of the other reviews and the rebuttal to them, I have decided to upgrade my review to a 6.\", \"*Paper summary*\", \"The paper combines attention with group equivariance, specifically looking at the p4m group of rotations, translations, and flips. The basic premise is to use a group equivariant CNN of, say, Cohen and Welling (2016), and use self-attention on top. The authors derive a form of self-attention that does not destroy the equivariance property.\", \"*Paper decision*\", \"I have decided that the paper be given a weak reject. The method seems sound and I think this in itself is a great achievement., But the experiments lack focus. Just showing that you get better accuracy results does not actually test why attention helps in an equivariant setting. That said, I feel the lack of clarity in the writing is actually the main drawback. The maths is poorly explained and the technical jargon is quite confusing. I think this can be improved in a camera-ready version or in submission to a later conference, should overall acceptance not be met.\", \"*Supporting arguments*\", \"I enjoyed the motivation and discussion on equivariance from a neuroscientific perspective. This is something I have not seen much of in the recent literature (which is more mathematical in nature) and serves as a refreshing take on the matter. There was a good review of the neuroscientific literature and I felt that the conclusions, which were draw (of approximate equivariance, and learned canonical transformations) were well motivated by these paper.\", \"The paper is well structured. That said, I found the clarity of the technical language at times quite difficult to follow because terms were not defined. By way of example, I still have trouble understanding terms like \\u201cco-occurence\\u201d or \\u201cdynamically learn\\u201d. In the co-occurence envelope hypothesis, for instance, what does it mean for a learned feature representation to be \\u201coptimal in the set of transformations that co-occur\\u201d. Against what metric exactly would a representation be optimal? This is not defined.\", \"That said, I feel that the content and conclusions of the paper are technically sound, having followed the maths, because the text was too confusing.\", \"*Questions/notes for the authors*\", \"I would like to know whether the co-occurence envelope hypothesis is the authors\\u2019 own contribution. This was not apparent to me from the text.\", \"I\\u2019m not sure what exactly the co-occurence envelope is. It does not seem to be defined very precisely. What is it in layman\\u2019s terms?\", \"I found the section \\u201cIdentifying the co-occurence envelope\\u201d very confusing. I\\u2019m not sure what the authors are trying to explain here. Is it that a good feature representation of a face would use the *relevant* offsets/rotations/etc. 
of visual features from different parts of the face, independent of global rotation?\", \"Is Figure 1 supposed to be blurry?\", \"At the end of paragraph 1 you have written: sdfgsdfg asdfasdf. Please delete this.\", \"I believe equation 4 is a roto-translational convolution since it is equivariant to rotation *and translation*. Furthermore, it is not exactly equivariant due to the fact that you are defining input on a 2D square grid, but that is a minor detail in the context of this work.\", \"Now that we have automatic differentiation, is the section on how to work out the gradients in Equations 5-7 really necessary?\", \"In equation 8 (second equality), you have said f_R^l(F^l) = A(f_R^l(F^l)). How can this be true if A is not the identity? Giving the benefit of the doubt, this could just be a typo.\", \"Please define \\\\odot (I think it\\u2019s element-wise multiplication).\", \"Are you using row-vector convention? That would resolve some of my confusion with the maths.\", \"You define the matrix A as in the space [0,1]^{n x m}. While sort of true, it is more precise to note that each column is actually restricted to a simplex, so A lives in a subspace of [0,1]^{n x m}.\", \"I think it would have been easier just say that you are using a roto-translation or p4m equivariant CNN with attention after each convolution. Then you could derive the constraint on the attention matrices to maintain equivariance. It would be easier to follow and make easy connections with existing literature on the topic.\"]}"
]
} |
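A technical claim that anchors the discussion recorded above, namely that a linear map of the form $xA$ is equivariant to cyclic permutations iff $A$ is a circulant matrix, is easy to sanity-check numerically. The snippet below verifies the "if" direction under the row-vector convention used in the thread. It is an illustrative sketch written for this record, not code from the paper; the seed and the choice n = 4 (matching the four 90-degree rotations mentioned in the rebuttal) are arbitrary.

```python
# Sanity check (sketch): a circulant matrix commutes with cyclic shifts of the domain.
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
n = 4                                # e.g., the four 90-degree rotations in the thread
C = circulant(rng.normal(size=n))    # circulant matrix: C[i, j] = c[(i - j) % n]
x = rng.normal(size=n)               # a row vector, as in the xA convention

shift_then_map = np.roll(x, 1) @ C   # cyclically shift the input, then apply the map
map_then_shift = np.roll(x @ C, 1)   # apply the map, then cyclically shift the output
assert np.allclose(shift_then_map, map_then_shift)  # equivariance holds
```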
r1lnigSFDr | Improving the Gating Mechanism of Recurrent Neural Networks | [
"Albert Gua",
"Caglar Gulcehre",
"Tom le Paine",
"Razvan Pascanu",
"Matt Hoffman"
] | In this work, we revisit the gating mechanisms widely used in various recurrent and feedforward networks such as LSTMs, GRUs, or highway networks. These gates are meant to control information flow, allowing gradients to better propagate back in time for recurrent models. However, to propagate gradients over very long temporal windows, they need to operate close to their saturation regime. We propose two independent and synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyper-parameters, and are aimed at improving learnability of the gates when they are close to saturation. Our proposals are theoretically justified, and we show a generic framework that encompasses other recently proposed gating mechanisms such as chrono-initialization and master gates. We perform systematic analyses and ablation studies on the proposed improvements and evaluate our method on a wide range of applications including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning. Empirically, our proposed gating mechanisms robustly increase the performance of recurrent models such as LSTMs, especially on tasks requiring long temporal dependencies. | [
"recurrent neural networks",
"LSTM",
"GRUs",
"gating mechanisms",
"deep learning",
"reinforcement learning"
] | Reject | https://openreview.net/pdf?id=r1lnigSFDr | https://openreview.net/forum?id=r1lnigSFDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"seWRbvb0vy",
"r1la3UHzoB",
"rkewrIrMsS",
"BJxBlUSfsr",
"rye_5HHzsr",
"rkl3__O0tr",
"S1llO2HCKH",
"HkgP_s1RYr",
"rJxC9rZ2FH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1576798750945,
1573177012520,
1573176894548,
1573176812644,
1573176720355,
1571879027535,
1571867752309,
1571842926849,
1571718550478
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2516/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2516/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2516/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2516/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2516/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2516/AnonReviewer1"
],
[
"~Super_User1"
],
[
"ICLR.cc/2020/Conference/Paper2516/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This submission proposes a new gating mechanism to improve gradient information propagation during back-propagation when training recurrent neural networks.\", \"strengths\": \"-The problem is interesting and important.\\n-The proposed method is novel.\", \"weaknesses\": \"-The justification and motivation of the UGI mechanism was not clear and/or convincing.\\n-The experimental validation is sometimes hard to interpret and the proposed improvements of the gating mechanism are not well-reflected in the quantitative results.\\n-The submission was hard to read and some images were initially illegible.\\n\\nThe authors improved several of the weaknesses but not to the desired level.\\n\\nAC agrees with the majority recommendation to reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We thank the reviewer for pointing out potentially confusing aspects of the submission, which we address below.\\n\\n>>> a. In Figure 3(a), where are the other baselines? Are they performing too badly so that they can not show up in the figure? It needs more explanation.\\n b. In Figure 3(b), actually a lot of methods are performing similar, while some methods converge similarly. What is the reason?\\n\\nSome baselines seem to not show up because their curves are actually overlapping. This is because the data in our experiments were controlled between methods (i.e., every method saw the same training and testing minibatches for a given seed) to reduce potential variance. Consequently, models that were unable to make any progress on this task showed identical performance. We have added these details in the caption and in Appendix [].\\n\\nWe have additionally re-plotted these results on a log scale, which distinguishes the learning curves better.\\n\\n\\n>>> There are several parts in the experiment that are not very convincing.\\nAside from details in Figure 3, could you point to other experiments that could be clarified?\\n\\n\\n>>> It is not defined why the uniform gate initialization works.\\nPlease see our shared response to all reviewers.\\n\\n\\n>>> The proposed results actually not always perform the best. For instance, in Table 3, purely using the UR-LSTM only achieve good results on sMNIST. What is the reason? The proposed method seems not very general.\\n\\nWe respectfully disagree with this conclusion. In response to Table 3:\\n - First, there was a typo that has been fixed: the \\u201cGRU + Zoneout\\u201d method that achieved SOTA on sCIFAR is actually our method \\u201cUR-GRU + Zoneout\\u201d. We apologize for the confusion this may have caused\\n - On sCIFAR, Table 3 shows that the UR-LSTM by itself improves on a vanilla LSTM or Transformer by almost 10% accuracy. It is also the best recurrent model on pMNIST already\\n - As mentioned in that section, our gate modifications synergize with other methods such as regularization and auxiliary losses. Table 3 shows how it synergizes with a regularization technique to achieve SOTA on sCIFAR, and we expect that the auxiliary loss of the r-LSTM (Trinh et al. 2018) can be applied to the UR-LSTM for further improvements on all datasets\\n - We show non-recurrent models for completeness, but they are not directly comparable to ours. For example, they should be expected to be better on permuted datasets due to having global receptive field\\n\\nOverall, we believe that our method is very general and robust, which is shown throughout the Experiments section, where it improves on multiple types of recurrent models across many different domains.\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We thank the reviewer for the helpful feedback and suggestions, which we respond to below.\\n\\n>>> The writing in the description of the UGI and refine fate is not clear.\\n\\nWe have improved the motivation and description of UGI and the refine gate in the revision. Further feedback on how they can be further clarified is appreciated.\\n\\n>>> The authors compares UGI to standard initialization but where is the standard initialization?\\n\\nStandard initialization was stated to be initializing the bias term to a constant; for example linear models usually initialize the bias to 0 (Section 2.2). In our experiments, the vanilla LSTM used 1.0 bias initialization, which has become the standard for LSTMs (Gers et al.). This detail has been clarified in Section 2.2 and 3.\\n\\n>>> I am not convinced how the UGI gate help avoid decaying of the input\\n\\nPlease see our shared response to all reviewers.\\n\\n\\n>>> On propositions\\n\\nThe central principle of gated RNNs is that the values of gate activations control the timescales that the model can address. Propositions 2 and 3 formalize how our gate modifications affect the timescales of the recurrent model. However, we agree that Propositions 2 and 3 are somewhat tangential to the main points of the paper and have moved them to Appendix B.2.\\n\\n\\n>>> Even though the title of the paper is \\\"improving the gating mechanism of recurrent neural networks\\\", the authors try to solve signal propagation problems. It is unclear why \\\"gate\\\" is important.\\n\\nThe above principle means that the performance of gated RNNs (which are the dominant form of RNN, for reasons outlined in Section 1 and 2) is closely tied to the activations and learnability of their gates. Thus our improvements to the gates improve recurrent models at large. We hope that our revised introduction and background sections have clarified this motivation.\\n\\n\\n>>> Minors\\n\\nThanks for pointing out several mistakes which have now been fixed. In particular, we have used the suggestion to use the Hadamard multiplication symbol to emphasize the elementwise multiplication, for example in Equation (1).\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate the reviewer\\u2019s detailed reading of the paper and thoughtful comments, suggestions, and questions. We have responded to these below.\\n\\n\\n>>> This manuscript already quite long and has several formatting issues... \\n\\nWe agree with the suggestions and have improved the labels and captions of many figures. We have updated Figure 3 on a log scale which helps distinguish the curves. Figure 5 conveys information that a table of numbers may not be able to; for example, it shows that the UR-LSTM converges both faster and to a higher maximum than other methods, and accounts for confidence intervals. We will continue revising Figure 2 for readability.\\n\\n\\n>>> I think that for this approach to work, two conditions need to be satisfied (a) there must be foreseeable improvements in the use of a forget gate that can reach values close to 0/1 for the task at hand and (b) r_t needs to function well despite not being too close to 0 or 1 (lest its parameters suffer from gradient flow issues)\\n * Was there any visualizations done on whether (a) happened? i.e. for the URLSTMs that performed well, were the values of the forget gate closer to 0/1 than the baselines?\\n\\nWe agree with these insights. We believe that Figure 4 and Figure 9 show clear empirical support for these principles, in particular that\\n * How close the forget gate activations are to 1.0 affect whether the models are able to solve this memory task\\n * The addition of a refine gate improves the ability of the model to learn more extremal activations, enabling it to solve the task\\n\\n\\n>>> What were typical values of r_t, did the models need the refine gate to reach values close to 0 or 1 for the overall approach to work?\\n\\nWe note that as long as one of the gates is not saturated (either the original gate or the refine gate), the model can still learn.\\nWe did not find any scenarios empirically in which the refine gate saturated and prevented the overall approach from working.\\n\\n\\n>>> While I'm not entirely convinced about the proposed initialization scheme...\\n\\nWe believe that in addition to the refine gate, UGI is also quite motivated and shows clear empirical benefits, which we summarized in the shared response to all reviewers.\"}",
"{\"title\": \"Updated Paper and Response to Reviews\", \"comment\": [\"We thank all the reviewers for their comments and feedback. We have uploaded a revised draft of the paper which we believe substantially improves the clarity of the submission and addresses the concerns that the reviewers raised. We highlight the most important changes below, as well as address shared feedback among the reviewers.\", \"*Presentation and Formatting*\", \"We apologize for the presentation issues and have fixed them to improve readability, including\", \"Increased line spacing and header separation and larger table fonts\", \"Larger and more readable figure labels\", \"Improved notation for gates\", \"*Exposition*\", \"We have expanded on the motivation and description of our methods in Sections 1 and 2, and added a subsection (2.1) that summarizes the overall model with explicit equations for the UR-LSTM.\", \"We have added a brief discussion about the overall effectiveness of our methods (Section 3.5). In summary, we believe that the robust empirical improvement of our simple modification over the time-tested LSTM is an important contribution in itself.\", \"*Uniform gate initialization*\", \"We have clarified the motivation and description of UGI, which can be seen as a hyperparameter-free heuristic for improving initialization of the gates by letting neurons forget at different rates.\", \"Empirically, we point out that UGI shows improvements on many experiments\", \"On the Copy task, the refine gate alone (R-LSTM) is stuck at baseline. Simply changing the initialization to UGI (UR-LSTM) solves the task very quickly\", \"On sequential image classification, we have added the R-LSTM ablation, which performs worse than the U-LSTM ablation (Figure 5). The isolated initialization change (e.g. from LSTM to U-LSTM, or R-LSTM to UR-LSTM) provides substantial improvements on these tasks\", \"On language modeling, the U-LSTM alone improves over the SOTA LSTM baseline and matched the more specialized ON-LSTM baseline (Figure 6)\", \"Theoretically, we reiterate here why UGI intuitively helps\", \"The central principle of gated RNNs is that the values of gate activations control the timescales that the model can address. Section 2.2 defines the \\u201ccharacteristic timescale\\u201d, which implies that forget gate activations near 1.0 are necessary for long-term memory. Similar observations have been made before (Tallec et al. 2018).\", \"This phenomenon is also empirically supported in Figure 4, which shows that the methods which are able to solve the difficult memory task have more forget gate values near 1\", \"By initializing the activations uniformly throughout [0,1], UGI removes the bias hyperparameter and addresses a wider range of timescales including long-term dependencies. This explains the empirical improvements previously noted\", \"A more thorough discussion of the theoretical properties including how it affects the timescale distribution has been moved to Appendix B.2 and B.3\", \"*Contributions*\", \"Even disregarding UGI, we emphasize that the refine gate is a novel and theoretically justified contribution that can improve any gated model\", \"We have added pseudocode snippets of the proposed methods in Appendix A, which consist of only modifying 2 lines of code each. 
We again highlight that these small changes translate to broad empirical improvements\", \"Overall, due to the simplicity and principled nature of these modifications, and the ubiquity of \\u201cgates\\u201d in machine learning models, we believe that our contributions can be a valuable tool for practitioners.\", \"We have responded to other comments individually. We encourage the reviewers to look at the improved draft, and look forward to hearing further feedback on the submission.\"]}",
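The timescale principle invoked above can be illustrated numerically. The paper's exact definition (its Section 2.2) is not reproduced in this thread, so the sketch below uses the standard leaky-integrator reading: a memory c_t = f * c_{t-1} + ... decays past information as f**t, giving a 1/e horizon of -1/log(f), roughly 1/(1-f) for f near 1, so forget activations must sit very close to 1 to cover long dependencies:

```python
import numpy as np

# Horizon at which f**t falls to 1/e, i.e. the characteristic timescale of a gate.
for f in [0.5, 0.9, 0.99, 0.999]:
    print(f"forget activation {f:0.3f} -> timescale ~ {-1.0 / np.log(f):8.1f} steps")
```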
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to improve the learnability of the gating mechanism in RNN by two modifications on the standard RNN structure, uniform gate initialization and refine gate. The authors give some propositions to show that the refine gate can maintain an effective forget effect within a larger range of timescale. The authors conduct experiments on four different tasks and compare the proposed modification with baseline methods.\", \"strong_points\": \"1. The authors propose a new refine structure that seems to have a longer \\\"memory\\\".\\n2. The authors designed a good synthetic experiment to demonstrate whether the proposed refine structure can help to remember information in longer sequence.\", \"weak_points\": \"1. There are several parts in the experiment that are not very convincing.\\n a. In Figure 3(a), where are the other baselines? Are they performing too badly so that they can not show up in the figure? It needs more explanation.\\n b. In Figure 3(b), actually a lot of methods are performing similar, while some methods converge similarly. What is the reason?\\n2. It is not defined why the uniform gate initialization works.\\n3. The proposed results actually not always perform the best. For instance, in Table 3, purely using the UR-LSTM only achieve good results on sMNIST. What is the reason? The proposed method seems not very general.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces and studies modifications to gating mechanisms in RNNs.\\nThe first modification is uniform gate initialization. Here the biases in the forget and input gate bias are sampled such that after the application of the sigmoid the values are in the range (1/d, 1-1/d) where d is the dimensionality of the hidden space for the bias. The second modification is the introduction of a refine gating mechanism with a view to allow for gradients to flow when the forget gates f_t in the LSTM updates are near saturation. The idea is to create an effective gate g = f_t +/- phi(f_t, x_t). The paper proposes using phi (f_t, x_t) = f_t(1-f_t) * (2*r_t-1) where (r_t is between 0 or 1). The effect of such a change is that g_t can reach values of 0.99 when the value of f_t is 0.9 allowing gradients to flow more freely through the parameters that constitute the forget gate. Overall the change corresponds to improving gradient flow for the forget gate by interpolating between f_t^2 and (1-f_t)^2. i.e. the authors note that the result of these changes is that it corresponds to sampling biases from a heavier tailed distribution while the refine gate (by allowing the forget gate to reach values close to 0 and 1), allows for capturing information on a much longer time scale.\\n\\n\\nThe paper studies various combinations of the two changes proposed to gating architectures. Other baselines include a vanilla LSTM, a Chrono initialized LSTM, and an ordered Neuron LSTM. The models are trained on several synthetic and real world tasks. On the copy and add tasks, the LSTMs that contain the refine gate converge the fastest. A similar story is observed on the task of pixel by pixel image classification. The refine gate was also adapted to memory architectures such as the DNC and RMA where it was found to improve performance on two different tasks.\\n\\nOverall, the paper is written well, I like the (second) idea of the refine gate and the contributions are explained in an accessible manner. While I'm not entirely convinced about the proposed initialization scheme but across the many different tasks tried, the use of the refine gate does appear to give performance improvements that lead me to conclude that this aspect of the work is a solid contribution to the literature.\", \"questions_and_comments\": [\"This manuscript already quite long and has several formatting issues. Several of the figures are unreadable when printed. For example, every piece of text on Figure 2(d) is unreadable on paper. Figure 3 and 5 are difficult to read; they contain too many alternatives with a colour scheme that makes it difficult to distinguish between them -- consider displaying a subset of the options via a plot and using a table to display (# steps to convergence) as a metric instead. 
It also appears as if the caption for Table 6 is deleted?\", \"I think that for this approach to work, two conditions need to be satisfied (a) there must be foreseeable improvements in the use of a forget gate that can reach values close to 0/1 for the task at hand and (b) r_t needs to function well despite not being too close to 0 or 1 (lest its parameters suffer from gradient flow issues).\", \"Was there any visualizations done on whether (a) happened? i.e. for the URLSTMs that performed well, were the values of the forget gate closer to 0/1 than the baselines?\", \"What were typical values of r_t, did the models need the refine gate to reach values close to 0 or 1 for the overall approach to work?\"]}",
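Based on the formulas quoted in this review (not the authors' released code), a minimal NumPy sketch of the two modifications might look as follows; the function names are our own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ugi_bias(d: int, rng) -> np.ndarray:
    """Uniform gate initialization: draw u ~ U(1/d, 1-1/d) and invert the sigmoid,
    so initial gate activations are spread over (1/d, 1-1/d) rather than clustered."""
    u = rng.uniform(1.0 / d, 1.0 - 1.0 / d, size=d)
    return np.log(u) - np.log(1.0 - u)        # logit(u)

def refine(f: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Effective gate g = f + f(1-f)(2r-1): the refine gate r in (0,1) interpolates
    between f**2 (r=0) and 1-(1-f)**2 (r=1); e.g. f=0.9, r=1 gives g=0.99."""
    return f + f * (1.0 - f) * (2.0 * r - 1.0)

rng = np.random.default_rng(0)
f0 = sigmoid(ugi_bias(64, rng))                    # initial forget activations under UGI
print(refine(np.array([0.9]), np.array([1.0])))    # -> [0.99]
```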
"{\"comment\": \"This paper was previously desk-rejected by mistake. It has been placed back in the submission pool.\\n\\nBest,\\n\\nOpenReview Team.\", \"title\": \"Paper modification date updated\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces two novel techniques to help long term signal propagation in RNNs. One is an initialization strategy which uses inverse sigmoid function to avoid the decay of the contribution of the input earlier in time and another is a new design of a refine gate which pushes the value of the gate closer to 0 or 1. The authors conduct exhaustive ablation and empirical studies on copy task, sequential MNIST, language modeling and reinforcement learning.\\n\\nThough the experiment section is solid, I still vote for rejection for the following reasons:\\n\\n1. The writing in the description of the UGI and refine fate is not clear.\\na. The authors compares UGI to standard initialization but where is the standard initialization? I do not see \\\"standard initialization\\\" clearly defined in the paper.\\nb. I am not convinced how the UGI gate help avoid decaying of the input. There is a proposition 2 trying to explain some part of the mechanism of UGI. But the proposition is never proved anywhere and I am not sure why this proposition is important. More explanations are needed. Also this proposition is far away from the place the authors introduce the UGI. The authors may want to refer it in the place introducing UGI.\\nc. Similar to proposition 2, proposition 3 is not explained and proved in the paper. It is hard for me to analyze the importance of these two propositions. Overall, propositions 2 and 3 look isolated in the section.\\nd. Proposition 1 looks like a definition. Not sure why the authors name it as a proposition.\\n\\n2. Even though the title of the paper is \\\"improving the gating mechanism of recurrent neural networks\\\", the authors try to solve signal propagation problems. It is unclear why \\\"gate\\\" is important. Maybe other designs of the recurrent neural network can satisfy better the desiderata the authors want. Based on my limited knowledge, the initialization the authors mention (saturation) is exactly from the need of using a sigmoid gate. The importance of using \\\"gate\\\" should be discussed.\\n\\n3. The authors shrink the space before and after headings in the paper. I think this is not allowed in ICLR. It would be better that the authors correct the spacing in the revised version.\", \"minors\": \"1. page 1 second paragraph: repeated \\u201cproven\\u201d\\n2. page 1 second last paragraph: \\u201cdue\\u201d -> \\u201cdue to\\u201d \\n3. page 2 second last paragraph: repeated \\u201cbe\\u201d\\n4. page 2 Equation (4) and (5): using some symbols like \\\\odot for element wise multiplication will be good for the readers.\"}"
]
} |
BklhsgSFvB | Learning to Transfer via Modelling Multi-level Task Dependency | [
"Haonan Wang",
"Zhenbang Wu",
"Ziniu Hu",
"Yizhou Sun"
] | Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets. By leveraging the relationships among different tasks, a multi-task learning framework can improve the performance significantly. However, most of the existing works are under the assumption that the predefined tasks are related to each other. Thus, their real-world applications are limited, because real-world problems are rarely closely related. Besides, the understanding of relationships among tasks has been ignored by most of the current methods. Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructs attention-based dependency relationships among different tasks. At the same time, the dependency relationship can be used to guide what knowledge should be transferred, and thus the performance of our model can also be improved. To show the effectiveness of our model and the importance of considering multi-level dependency relationships, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods. | [
"multi-task learning",
"attention mechanism"
] | Reject | https://openreview.net/pdf?id=BklhsgSFvB | https://openreview.net/forum?id=BklhsgSFvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"M6zhdcZw4d",
"rklspDjooH",
"Hyl4LLisjr",
"Bkg2ISjisr",
"ryxHumoisB",
"r1xqvc3uqr",
"ByxJhPKOqS",
"SyluxUeL5r",
"SJxJYVNg9S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750916,
1573791683512,
1573791308349,
1573791059776,
1573790572681,
1572551265854,
1572538279468,
1572369904368,
1571992694777
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2515/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2515/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2515/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2515/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2515/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"In this work, the authors address a multi-task learning setting and propose to enhance the estimation of task dependency with an attention mechanism capturing sample-dependant measure of task relatedness. All reviewers and AC agree that the current manuscript lacks clarity and convincing empirical evaluations that clearly show the benefits of the proposed approach w.r.t. state-of-the-art methods. Specifically, the reviewers raised several important concerns that were viewed by AC as critical issues:\\n(1) the empirical evaluations need to be significantly strengthened to show the benefits of the proposed methods over SOTA -- see R2\\u2019s request to empirically compare with the related recent work [Taskonomy, 2018] and R4\\u2019s request to compare with the work [End-to-end multi-task learning with attention, 2018]. R4 also suggested to include an ablation study to assess the benefits of the attention mechanism. Pleased to report that the authors addressed the ablation study in their rebuttal and confirmed that the proposed attention mechanism plays an important role in the performance of the proposed method. \\n(2) All reviewers see an issue with the presentation clarity of the conceptual and technical contributions -- see R4\\u2019s and R2\\u2019s detailed comments and questions regarding technical contributions; see R3\\u2019s and R4\\u2019s comments that the distinction between the general task dependency and the data-driven dependency is either not significant or is not clearly articulated; finding better examples to illustrate the difference (instead of reiterating the current ones) would strengthen the clarity and conceptual contributions. \\nA general consensus among reviewers and AC suggests, in its current state the manuscript is not ready for a publication. It needs more clarifications, empirical studies and polish to achieve the desired goal.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for your constructive comments.\", \"comment\": \"Thank you for your feedback. We will start by emphasizing the distinctions between our work and previous works, and then address your concerns.\\n\\nFirst, we'd like to emphasize the distinctions between our work and previous works [1][2][3].\\n\\n(1) Our work can capture both the general task and data-specific task dependency in \\u201cdiscrete\\u201d data (i.e. text and graph). The general task dependency is the same as [1]. However, for text and graph data, it is not enough to simply use the general task dependency to guide the knowledge transfer between tasks. The reason is that different from the image which [1] focuses on, text and graph data are hierarchical: word -> sentence and node -> graph. The task dependency at the basic level (word and node) may be different from the general task dependency. \\n\\nTake sentence classification as an example, words like \\u201cgood\\u201d or \\u201cbad\\u201d may transfer more knowledge from sentiment analysis tasks, while words like \\u201cbecause\\u201d and \\u201cso\\u201d may transfer more from discourse relation identification task. \\n\\nOur work can capture the task dependency at the basic level (word and node). An extreme case would be each word/node has the same task dependency, in which our model will perform as well as [1].\\n\\n(2) We propose a decomposition method to reduce the size of the parameters from $O(T^2)$ to $O(T)$ (T is the number of tasks). While [2] is also capable to model the task-dependency at the word level, it suffers from quadratic complexity. [2] uses a d x d matrix $W_{sj}$ (d is the dimension of the representation for each word/node) to model the dependency between the source (s) and target task (j). However, when the number of tasks grows, the number of dependency matrix will grow quadratically ($O(T^2)$). To alleviate this, we develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from (eq 2).\\n\\n(3) Our work enables the interaction between tasks. While [3] is also able to learn the task-specific representations at different levels, there is no interaction between tasks. [3] uses a shared network to learn the task-shared representations, and T task-specific attention networks to learn the task-specific representations. However, there is no relation between tasks, and each task can only utilize the shared f representation satures from the shared network. In this case, if the tasks are not mutually strong related, [3] will suffer since the shared representations may inherently different.\\n\\nThe aforementioned distinction guarantees that our approach has great potential to obtain better performance.\\n\\nThen, we will address your concerns below.\", \"q1\": \"\\u201c...definition and extraction method should be discussed\\u201d\", \"r1\": \"The general task dependency is a learnable T x T matrix (T is the number of tasks). The element at index (i, j) represents the transferable weight from task i to task j. Note that the dependency graph is asymmetrical. In this way, the negative influence of irrelevant tasks can be reduced as much as possible.\", \"q2\": \"\\u201cPosition-wise Mutual Attention Mechanism\\u201d\", \"r2\": \"Apologies for being unclear in this part of our paper. 
This part is our key contribution and we have added more details to it in the paper.\\n\\nWe have addressed all remaining minor suggestions in the paper.\\n\\n\\n[1] Taskonomy: Disentangling Task Transfer Learning, 2018\\n[2] Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification, 2017\\n[3] End-to-End Multi-Task Learning with Attention, 2018\"}",
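The O(T^2)-to-O(T) decomposition described in this rebuttal (a universal representation space with per-task maps, so W_ij is recovered as S_i T_j^T) can be sketched in a few lines; the dimensions, the initialization scale, and the function name below are illustrative assumptions:

```python
import numpy as np

n_tasks, d, d_univ = 4, 128, 64
rng = np.random.default_rng(0)

# One source map S_i and one target map T_j per task: 2T matrices instead of the
# T^2 pairwise maps; W_ij is recovered implicitly as S[i] @ T[j].T.
S = [rng.normal(scale=0.02, size=(d, d_univ)) for _ in range(n_tasks)]
T = [rng.normal(scale=0.02, size=(d, d_univ)) for _ in range(n_tasks)]

def transfer(h_i: np.ndarray, i: int, j: int) -> np.ndarray:
    """Map a task-i representation into the universal space, then out to task j."""
    return (h_i @ S[i]) @ T[j].T

h = rng.normal(size=(d,))
print(transfer(h, i=0, j=2).shape)   # (128,)
```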
"{\"title\": \"Thank you for your constructive comments.\", \"comment\": \"Thank you for your feedback. We will start by emphasizing the distinctions between our work and previous works, and then address your concerns.\\n\\nFirst, we'd like to emphasize the distinctions between our work and previous works [1][2][3].\\n\\n(1) Our work can capture both the general task and data-specific task dependency in \\u201cdiscrete\\u201d data (i.e. text and graph). The general task dependency is the same as [1]. However, for text and graph data, it is not enough to simply use the general task dependency to guide the knowledge transfer between tasks. The reason is that different from the image which [1] focuses on, text and graph data are hierarchical: word -> sentence and node -> graph. The task dependency at the basic level (word and node) may be different from the general task dependency.\\n\\nTake sentence classification as an example, words like \\u201cgood\\u201d or \\u201cbad\\u201d may transfer more knowledge from sentiment analysis tasks, while words like \\u201cbecause\\u201d and \\u201cso\\u201d may transfer more from discourse relation identification task. \\n\\nOur work can capture the task dependency at the basic level (word and node). An extreme case would be each word/node has the same task dependency, in which our model will perform as well as [1].\\n\\n(2) We propose a decomposition method to reduce the size of the parameters from $O(T^2)$ to $O(T)$ (T is the number of tasks). While [2] is also capable to model the task-dependency at the word level, it suffers from quadratic complexity. [2] uses a d x d matrix $W_{sj}$ (d is the dimension of the representation for each word/node) to model the dependency between the source (s) and target task (j). However, when the number of tasks grows, the number of dependency matrix will grow quadratically ($O(T^2)$). To alleviate this, we develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from (eq 2).\\n\\n(3) Our work enables the interaction between tasks. While [3] is also able to learn the task-specific representation at different levels, there is no interaction between tasks. [3] uses a shared network to learn the task-shared representations, and T task-specific attention networks to learn the task-specific representations. However, there is no relation between tasks, and each task can only utilize the shared representations from the shared network. In this case, if the tasks are not mutually strong related, [3] will suffer since the shared representations may inherently different.\\n\\nThe aforementioned distinction guarantees that our approach has great potential to obtain better performance.\\n\\nThen, we will address your concerns below.\", \"q1\": \"\\\"most state-of-the-art multi-task learning models can learn task dependency via different forms\\u201d\", \"r1\": \"Apologies for the misunderstanding. Most state-of-the-art multi-task learning models can indeed learn task dependency via different forms. However, what we want to claim here is that our model is more robust since we can model both the general task dependency (same as several previous works) and the data-specific task dependency. 
We have made clarification on this in the paper.\", \"q2\": \"\\u201c\\u2026leads to a large number of model parameters especially when there are a large number of tasks\\\"\", \"r2\": \"Undeniably, our model does require a significant amount of model parameters (which are also the cases for several other multi-task learning models [1][4]. However, this does not necessarily mean that our model will suffer even when each task has a limited number of labeled samples. By modeling the multi-level task dependency (graph/text level and node/word level), our model can better utilize the task dependency information and the inner structural information from data, which increases the data efficiency. As shown in table 1&2, our model outperforms the other models under the low labeled ratio setting.\\n\\nFurther, we are currently performing experiments on a model that uses both shared and task-specific encoder to reduce the number of parameters while maintaining the same performance. We will add the experimental results to the full version.\\n\\n\\n[1] Taskonomy: Disentangling Task Transfer Learning, 2018\\n[2] Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification, 2017\\n[3] End-to-End Multi-Task Learning with Attention, 2018\\n[4] Cross-stitch Networks for Multi-task Learning, 2016\"}",
"{\"title\": \"Thank you for your constructive comments.\", \"comment\": \"Thank you for acknowledging the novelty of this work and for the suggestions. Apologies for being unclear in these parts of our paper, we address your questions below.\", \"q1a\": \"\\u201c...stronger motivation for the proposed approach..\\u201d\", \"r1a\": \"The motivation for the multi-level task dependency is the hierarchical structure in text and graph data (i.e. word -> sentence and node -> graph). The task dependency at word/node level may be different from the general task dependency.\\n\\nTake sentence classification as an example, besides the general relationship between sentiment analysis tasks and discourse relation identification tasks, words like \\u201cgood\\u201d or \\u201cbad\\u201d may transfer more knowledge from sentiment analysis tasks, while words like \\u201cbecause\\u201d and \\u201cso\\u201d may transfer more from discourse relation identification tasks. \\n\\nPrevious work [1] can only capture the general task dependency but does not utilize the inner hierarchical structure of \\u201cdiscrete\\u201d data (text and graph). An extreme case would be each word/node has the same task dependency, in which our model will perform as well as [1].\", \"q1b\": \"\\\"what kind of mappings are used in (2)?\\u201d\", \"r1b\": \"Apologies for being unclear in this part of our paper. Previously, the mapping function uses a d x d matrix $W_{ij}$ to map the representation from task i to j. However, this requires $O(T^2)$ mapping functions (T is the number of tasks). Thus, we decompose the mapping matrix $W_{ij}$ to $S_{i}T_{j}^{T}$ where $S$ and $T$ are two d x d\\u2019 matrixes. By this, the space complexity is reduced to $O(T)$. More details have been added to the paper.\", \"q2a\": \"\\\"In the evaluation procedure, could the same input X appear both in the training and the test data sets (but in different tasks)?\\u201d\", \"r2a\": \"No, an input will either be in the training set (80%) or the testing set (20%) but not both. We follow the same experimental setup as [2][3].\", \"q2b\": \"\\u201c...adding to the evaluation a method which is equivalent to the proposed one, but doesn't involve attention (i.e. only uses D).\\\"\", \"r2b\": \"Thanks for your suggestion! We added this method to the evaluation. The result shows that without attention (i.e. only uses D), the performance is similar to the cross-stitch model which pre-defined the task dependency matrix D. This is expected since both methods model the general task dependency either in a pre-defined or learnable way. We will also add this part to the full version of our paper.\", \"q3\": \"\\u201cAdditional comments\\u201d\", \"r3\": \"Corrected.\\n\\nWe have re-structures the paper to improve clarity. We have also added more details in the motivation and additional evaluation in the experiment.\\n\\n\\n[1] Taskonomy: Disentangling Task Transfer Learning, 2018\\n[2] Cross-stitch Networks for Multi-task Learning, 2016\\n[3] Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts, 2018\"}",
"{\"title\": \"Thank you for your constructive comments.\", \"comment\": \"Thank you for your constructive comments. We address your questions as follows.\", \"q1\": \"\\u201cIt was not thoroughly reviewed by the authors for grammar.\\u201d\", \"r1\": \"We apologize for the grammar mistakes. We have carefully revised the paper and also re-scrutinized to improve the language.\", \"q2\": \"\\u201cPerhaps more clarity on the difference and contribution of each \\\"level\\\" would make the significance stand out clearer.\\u201d\", \"r2\": \"The motivation for the multi-level task dependency is the hierarchical structure in text and graph data (i.e. word -> sentence and node -> graph). The task dependency at word/node level may be different from the general task dependency.\\n\\nTake sentence classification as an example, besides the general relationship between sentiment analysis tasks and discourse relation identification tasks, words like \\u201cgood\\u201d or \\u201cbad\\u201d may transfer more knowledge from sentiment analysis tasks, while words like \\u201cbecause\\u201d and \\u201cso\\u201d may transfer more from discourse relation identification tasks. \\n\\nPrevious work [1] can only capture the general task dependency but does not utilize the inner hierarchical structure of \\u201cdiscrete\\u201d data (text and graph). An extreme case would be each word/node has the same task dependency, in which our model will perform as well as [1].\\n\\n\\n[1] Taskonomy: Disentangling Task Transfer Learning, 2018\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper is on an improvement of multi-task learning by considering the input tasks at two levels: (1) at task level, i.e. the relationship between the tasks and (2) by the data associated with each task. Their major argument is that most current methods hold the assumption that the tasks are correlated with each other but they conjecture that in the real-world this is not necessarily true and try to model the relationship between the input tasks at these two levels and incorporate that in the learning framework. To show effectiveness of their approach they test their method on differently oriented public datasets representing graphs, nodes and text and compare performance with some of the recent approaches to multi-task learning.\\n\\nComments to authors\\n1. Overall while one could get the gist of the arguments in the paper, it was not thoroughly reviewed by the authors for grammar, so it was hard to follow the finer points of the arguments. There are several grammatical mistakes and errors, on every page, it'd be too cumbersome to point them all out.\\n2. The distinction between the \\\"general task dependency\\\" and the \\\"data dependency\\\" does not seem significant enough. The data-dependent task dependency actually depends on the \\\"general task dependency\\\" as stated in the paper. This is probably manifested in the relatively slight improvement of the method compared with the SOTA. Perhaps more clarity on the difference and contribution of each \\\"level\\\" would make the significance stand out clearer.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors propose a multi-task learning method that uses attention mechanism to identify relations between the tasks.\", \"method\": [\"The authors motivate the use of attention mechanism for identifying a sample-dependent measure of task relatedness by that \\\"task dependency can be different for different data samples..\\\" At the introduction stage this argument was not clear to me. I would suggest to expand this part of the paper by providing a stronger motivation for the proposed approach.\", \"what kind of mappings are used in (2)?\"], \"experiments\": [\"it seems that all the datasets used are for multi-label learning. Thus, in the evaluation procedure, could the same input X appear both in the training and the test data sets (but in different tasks)? If yes, I believe it might make the evaluation less thorough. In either case it would be helpful to have this information in the description of the setting\", \"since use of attention is the main contribution of this work, but not the only part of the method, I would recommend adding to the evaluation a method which is equivalent to the proposed one, but doesn't involve attention (i.e. only uses D).\"], \"additional_comments\": [\"in its current form the manuscript is rather hard to follow, it requires a thorough proof-reading\", \"it is unclear what Figure 1 on page 2 is for\", \"on page 2 phrase \\\"... the label ratio is imbalanced.\\\" is confusing. I believe the authors meant that the data (not label) proportions between the tasks are uneven\", \"on page 3 the authors say that minimisation of the empirical risk (eq. (1)) is \\\"the goal of multi-task learning\\\". This sentence needs rephrasing, because from the point of view of empirical risk minimisation any multi-task approach is worse than the corresponding single-task version (i.e. its empirical risk is higher). Only in terms of the generalization performance one can argue that information sharing is beneficial.\", \"notation in (3) is confusing - index i is used in two meanings\", \"it's unclear what k in eq. (4) is\", \"it seems that a few references are broken\", \"To my knowledge the idea of making amount of transfer between tasks dependent on the particular sample at hand is new. Therefore, in my opinion, with improved presentation (and in particular motivation at the beginning of the manuscript) and additional evaluation demonstrating effects of the attention component the manuscript could be recommended for acceptance.\", \"---------------------------------------------------------------\", \"I thank the authors for their comments. The quality fo the manuscript has indeed improved and the differences with the existing methods are clearer. However, in light of the reviewers' comments, I agree that at least the experimental section needs to be extended by adding relevant baselines. In particular, comparison to \\\"End-to-end multi-task learning with attention\\\" is needed to demonstrate importance of the task-level dependence measure. If direct comparison is not possible (or in addition to it) I would suggest to evaluate a modification of L2MITTEN with matrix D being equal to all 1s.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a \\u2018Learning to Transfer via Modeling Multi-level Task Dependency\\u2019 for multi-task learning\\u2019, which uses the attention mechanism to learn task dependency.\\n\\nIn the introduction, authors claim \\u2018most of the current multitask learning framework rely on the assumption that all the tasks are highly correlated\\u2019. I don\\u2019t think this claim is correct. In fact, most state-of-the-art multi-task learning models can learn task dependency via different forms.\\n\\nIn the proposed network, different tasks have their own encoder, which leads to a large number of model parameters especially when there are a large number of tasks. This situation becomes even worse when each task has a limited number of labeled samples.\\n\\nThe attention has been used in multi-task learning. Authors can google \\u2018multi-task learning attention\\u2019 to find related works. Of course authors need to compare with those related works.\", \"a_typo\": \"\\u201cis theposition-wise mutual attention between\\u201d\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The submission argues for the modeling the relationships between different tasks and incorporating such relationships when training multi-task frameworks. Though the basic concept (usefulness of modeling and incorporating the relationships among tasks) is valid, the submission has a number of critical issues, namely missing prior work that did that that already, missing critical specifics of the method, and unnecessary mix of different concepts.\", \"elaborated_comments\": \"A) Authors seem to be unaware of critically related prior work that specially modeled task relationships and did much of what's proposed in this submission, especially \\\"Taskonomy: Disentangling task transfer learning\\\". The \\\"relationship among tasks\\\" that this submission frequently talks about is the main concept in taskonomy 2018 paper (see their abstract). Besides the apparent similarities (eg the fig 1 of this submission vs fig 1&2 of taskonomy or fig 4 of this submission vs fig 13&7 of taskonomy), the formulation has strong similarities too (transferring from \\\"task-specific\\\" encoders of source tasks to target tasks using transfer readout functions, or ensembling multiple task-specific representations which seem to be the same as taskonomy's higher order transfer). This submission should be majorly revised in light of prior work and the critically relevant ones should be discussed and experimentally compared to. \\n\\nB) The presentation suffers from missing critical specifics. For instance, the \\\"general task dependency\\\" matrix shown in Fig 3 and mentioned in page 4, which seem to be the same concept as taskonomy's task affinity matrix, is only mentioned in passing. While that seem to be one of the most important components of the method and its definition and extraction method should be discussed. \\n\\nC) Inline with the point B above, the presentation of the \\\"Transfer Block\\\" and what the authors refer to as \\\"Point-wise Mutual Attention Mechanism\\\" has issues and missing details. This block could potentially have new points in it, but it's not feasible to judge that and its technical correctness given the current disposition. For instance eq 2 seem to suggest the authors develop a universal representation space where all task-specific representations get mapped to and all target tasks can be inferred from (to reduce T^2 complexity to 2T). The rest of the section does not provide a clear implementation of this and add mathematical/notation confusions. Eg H_i_j is defined to be the task-specific representation of the source task i but is indexed over both tasks i and j where j is the target, or there is a E_j(X_j) where both indexes are j while E's index is over tasks and X's index is over datapoints. \\n\\nSimilarly the submission seem to jump over certain concepts/terms e.g. \\\"multi-view task dependency\\\" in page 4 vs\\\"multi-level task dependency\\\" in the title, etc. What exactly \\\"view\\\" or \\\"level\\\" mean here? Are those phrases really needed? Dropping any loosely grounded phrase would be a useful practice toward a clearer presentation.\\n\\nOverall, unfortunately the submission suffers from serious issues in its current shape. 
\\n\\n\\n----\", \"comments_after_rebuttal_stage\": \"Thanks to the authors for the rebuttal. It provided some help, but unfortunately it doesn't resolve the majority of the issues, as most of them are too major. A clear discussion on how the proposal is different and why it is better than the recent works that were not cited would be needed, and likely authors needed strong experimental comparison with some of them, eg to prove both general and data-specific task dependency is needed. \\n\\nI also didn't find the hierarchical justification clear or convincing \\\"The reason is that different from the image which [1] focuses on, text and graph data are hierarchical: word -> sentence and node -> graph. The task dependency at the basic level (word and node) may be different from the general task dependency\\\".\"}"
]
} |
rJx2slSKDS | Latent Variables on Spheres for Sampling and Inference | [
"Deli Zhao",
"Jiapeng Zhu",
"Bo Zhang"
] | Variational inference is a fundamental problem in the Variational AutoEncoder (VAE). The optimization with the lower bound of the marginal log-likelihood results in the distribution of latent variables approximating a given prior probability, which is the dilemma of employing VAE to solve real-world problems. By virtue of high-dimensional geometry, we propose a very simple algorithm completely different from existing ones to alleviate the variational inference problem in VAE. We analyze the unique characteristics of random variables on spheres in high dimensions and prove that the Wasserstein distance between two arbitrary data sets randomly drawn from a sphere is nearly identical when the dimension is sufficiently large. Based on our theory, a novel algorithm for distribution-robust sampling is devised. Moreover, we reform the latent space of VAE by constraining latent variables on the sphere, thus freeing VAE from the approximate optimization of posterior probability via variational inference. The new algorithm is named Spherical AutoEncoder (SAE). Extensive experiments on sampling and inference tasks validate our theoretical analysis and the superiority of SAE. | [
"variational autoencoder",
"generative adversarial network"
] | Reject | https://openreview.net/pdf?id=rJx2slSKDS | https://openreview.net/forum?id=rJx2slSKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"z_GQckZkpW",
"8l3nr14lO",
"Hketwk83jH",
"Byek4o59or",
"HylUhjk4iB",
"BJl_-qJEiH",
"B1xPFFJ4jH",
"BJeMRX6atB",
"H1e_Hdw6tr",
"HJga0cRjYr",
"r1lf20oUFB",
"S1gC86mHYr"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1705498772463,
1576798750887,
1573834592654,
1573722919248,
1573284781579,
1573284352289,
1573284223276,
1571832778138,
1571809344264,
1571707605009,
1571368618368,
1571269973875
],
"note_signatures": [
[
"~Alokendu_Mazumder1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2514/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2514/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2514/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2514/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2514/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2514/Authors"
],
[
"~Alex_Matthew_Lamb1"
]
],
"structured_content_str": [
"{\"title\": \"Inference Results\", \"comment\": \"Happy New Year and Hope you're well!\", \"i_have_a_question_regarding_the_understanding_of_your_model_at_the_inference_stage\": \"1. During reconstruction, the un-normalized latent vector is fed directly to the decoder?\\n\\nThanks and Best Regards,\\n\\nAlokendu Mazumder\\nPhD Scholar\\nIndian Institute of Science, Bengaluru, India\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to improve VAE/GAN by performing variational inference with a constraint that the latent variables lie on a sphere. The reviewers find some technical issues with the paper (R3's comment regarding theorem 3). They also found that the method is not motivated well, and the paper is not convincing. Based on this feedback, I recommend to reject the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to the rebuttal\", \"comment\": [\"Thank you for your response.\", \"I went through the rebuttal and the revised version of the paper, and most of my original concerns remain unaddressed:\", \"The positioning of the paper with respect to VAE, variational inference is confusing and even misleading.\", \"Attributing generation issues in VAE to the fact that \\u201cthe posterior q(z|x) is incapable of matching the prior distribution p(z) well\\u201d is not correct. In VAE we are not expecting q(z|x) to match p(z) well as this would result in useless inference (q(z|x) ignoring x). Please refer to my original comments for more details. In their answer A3, the authors mentioned about Wasserstein autoencoder (WAE). I would like to emphasize that in WAE the objective is to match the \\u201caggregated\\u201d posterior q(z) with the prior p(z), where q(z) = \\\\int q(x)q(z|x)dx, with q(x) denoting the empirical data distribution. Indeed, we want q(z), but not q(z|x), to perfectly match p(z).\", \"Assumptions under which Theorem 2 holds should be stated clearly. The sampling at random assumption is not enough if z, z\\u2019 are from empirical distributions on the sphere with overlapping supports.\", \"While the result of Theorem 2 may justify \\u201cdistribution robust-sampling\\u201d, it is not clear how and why would this Theorem justify improvement in inference when projecting z onto a hypersphere.\", \"The revised version includes qualitative results for VAE and the proposed SAE on a new dataset (CelebA), as well as for hyper-Spherical VAE on MNIST. However, quantitative experiments remain weak.\", \"The writing has been revised making it better than the initial version, yet the paper is still hard to follow, and further improvements are necessary.\"]}",
"{\"title\": \"About the revised version\", \"comment\": \"The revised version has been updated. We revised the submission from the following eight aspects according to Reviewers' advice.\\n\\n1. We explicitly wrote the objective function of SAE in equation (11). The reconstruction loss and the spherical constraint in equations (10) and (11) are all operations in SAE. There are no variational inference, no probabilistic optimization, and no priors involved in SAE during training.\\n\\n2. To make the meaning of the word \\\"inference\\\" clear, we named the inference in SAE as the spherical inference. As opposed to the variational inference, the spherical inference \\u00a0is deterministic during training. But the decoder of SAE is rather robust to various priors for sampling after training. We made this expression clear in the paper to avoid misunderstanding with the variational inference.\\n\\n3. We added the discussion for Wasserstein autoencoder, adversarial autoencoder, and beta-VAE in the related work.\\n\\n4. We also compared VAE and SAE on CelebA. \\u00a0We need to note that the quality of generated faces by VAE and SAE will be both improved if we use the face images of only cropping facial parts for the experiments. But such data are not sufficient to test the robustness of the algorithms against the variations of the entire facial features.\\n\\n5. We compared hyper-Spherical VAE (S-VAE) with SAE on MNIST using the official code at https://github.com/nicola-decao/s-vae-tf.\\n\\n6. We provided the visualization results of latent codes from VAE, S-VAE, and SAE on CelebA and MNIST. This visualization clearly shows the superiority of the spherical inference in SAE.\\n\\n7. We re-arranged images in Figure 5 to save more space for the new contents. The experimental results in this Figure are kept the same as the previous version.\\n\\n8. We corrected the typos, polished the writing, \\u00a0and made the paper more readable.\\n\\nAll the complementary experimental results were attached in Appendix.\"}",
"{\"title\": \"To Reviewer #1\", \"comment\": \"\", \"q1\": \"\\u201cis this a variant of the Wasserstein auto-encoder?\\u201d\", \"a1\": \"Our SAE algorithm is not a variant of WAE proposed in the following paper.\\n\\nWasserstein Auto-Encoders\", \"https\": \"//arxiv.org/abs/1511.05644\\n\\nLike VAE, both Wasserstein autoencoder and adversarial autoencoder need a prior distribution to match. However, there is no loss imposed on the latent space to optimize for SAE. SAE does not need priors either. The object function is the reconstruction loss || x - \\\\tilde{x} || with the spherical constraint shown in equation (10). It is much simpler than Wasserstein autoencoder.\\n\\nIn order to elucidate the unique property of random variables on spheres, we leveraged Wasserstein distance to derive Theorem 2. The Wasserstein distance here serves to establish a circumstance that the algorithm with the spherical constraint can be distribution-agnostic. We did not use Wasserstein distance for computation in SAE. \\n\\nBoth Wasserstein autoencoder and adversarial autoencoder are very interesting and inspiring algorithms. We like these two works very much.\", \"q2\": \"\\u201cthe image quality of VAE (CelebA) is not that bad in other VAE papers, maybe tuning the \\\\beta-VAE can also achieve the same quantitative and qualitative results.\\u201d\", \"q3\": \"\\u201cCan you visualize the latent space (z) for the CelebA dataset, also comparing with the results from VAE?\\u201d\", \"a2\": \"For data factors, the image quality of VAE depends on the image size and the image diversity. For face images, the large image size and more backgrounds in the image will make the data difficult to fit. We used the more challenging data of FFHQ available at https://github.com/NVlabs/stylegan and the image size we used is 128x128.\", \"a3\": \"We are conducting the experiment on CelebA of size 64x64 according to your advice. We will update the results when this complementary experiment is completed.\"}",
"{\"title\": \"To Reviewer #3\", \"comment\": \"\", \"q1\": \"\\u201cAn important claim in this paper is that the proposed approach \\u201calleviates variational inference in VAE\\u201d. However, this requires clarification as well as theoretical/empirical justifications\\u201d\", \"q2\": \"\\u201cMoreover, why and how would Theorem 2 justify improved inference when projecting latent samples onto a hypersphere?\\u201d\", \"a1_and_a2\": \"These two questions and related comments might be due to our inappropriate use of \\u201calleviate\\u201d and the extensive meaning of inference beyond probability. Actually, there is no posterior inference and any priors involved in our SAE algorithm. It is the vanilla autoencoder subject to the spherical constraint shown in equation (10). So we said \\u201cthus freeing VAE from the approximate optimization of posterior probability via variational inference\\u201d and \\u201cOur algorithm is geometric and free from posterior probability optimization\\u201d. Indeed, \\u201calleviates variational inference in VAE\\u201d is an inappropriate use in this scenario. We will correct this in the revised version.\\n\\nBesides, we use \\u201cinference\\u201d to refer to inferring (obtaining) z from the encoder, not only for \\u201cvariational\\u201d inference or \\u201cprobabilistic\\u201d inference. This might cause misunderstanding with habitual thinking in this field. This misunderstanding might be avoided by using \\u201cgeometric inference\\u201d. We will note this meaning clearly in the revised version.\", \"q3\": \"\\u201cHowever, in VAE we do not expect the posterior to match the prior perfectly, as this would result in useless data representations or inference.\\u201d\", \"a3\": \"We understand your viewpoint about the model distribution and the prior distribution . \\u201cmatch the prior perfectly\\u201d does not mean the point-to-point correspondence. We refer to fitting distributions. The word \\u201cmatch\\u201d is also used in Wasserstein autoencoder (https://arxiv.org/abs/1711.01558), which is the same scenario to ours.\", \"q4\": \"\\u201cTheorem 2 (on the convergence of the Wasserstein distance (W2) on high dimensional hyperspheres) does not seem to hold if, for instance, P and P\\u2019 are empirical distributions with overlapping supports.\\u201d\", \"q5\": \"\\u201cFurther, even when the above Theorem holds, the W2 distance may be relatively high since it is proportional to the square root of the number of samples.\\u201d\", \"a4\": \"To make our theory much easier to understand, we directly gave the computational definition of Wasserstein distance in (8) and (9) rather than its original integral form. Thus, Theorem 2 is the direct result by substituting the conclusion of Lemma 1 into (8). It is very easy. About the correctness of Lemma 1, please refer to the elegant proof at http://faculty.madisoncollege.edu/alehnen/sphere/hypers.htm.\\n\\nMost theorems only hold under some conditions. Both Lemma 1 and Theorem 2 need a basic condition. The condition is that the points are drawn from spheres at RANDOM. To satisfy the condition, we use the operation of centerization in our SAE algorithm, which is motivated from central limit theorem in probability. \\n\\nIn fact, it is straightforward to design the case to deny Lemma 1 and Theorem 2 if we bypass the condition. For instance, let Z1 be the set sampled from the spherical part in the open positive orthant and Z2 sampled from the spherical part in the open negative orthant. 
The third set Z3 is derived from Z2 by the small perturbation. Both Lemma 1 and Theorem 2 do not hold for the dataset { Z1, Z2, Z3}. But such samping violates the randomness needed. For SAE, the centerization is used to prevent such cases.\", \"a5\": \"\\u201cthe W2 distance may be relatively high since it is proportional to the square root of the number of samples.\\u201d is correct. However, it is logically wrong to use it to deny our theory, because all the W2 distances between two arbitrary random datasets still converge to be the same constant in Theorem 2 when the number of samples increases. The conclusion still holds in our paper.\", \"q6\": \"\\u201cImprove experiments by including more datasets and baselines (e.g., hyperspherical VAE [1]), as well conduct more targeted experiments to give more insights regarding the effect of the L2 normalization on inference and generation. \\u201d\", \"a6\": \"We failed to get the convergent results of hyper-Spherical VAE (S-VAE) on FFHQ faces of size 128x128. So we did not compare it in the current version. We are now running it on MNIST. The results will be updated in the revised version within several days.\"}",
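A quick empirical illustration (ours, not from the paper) of the concentration fact underlying Lemma 1 and Theorem 2, assuming "drawn at random" means uniform on the sphere: pairwise distances between independent uniform points on a high-dimensional unit hypersphere concentrate sharply around sqrt(2).

```python
import numpy as np

# Normalized Gaussian vectors are uniform on the unit sphere S^(d-1).
d, n = 512, 2000
z = np.random.randn(n, d)
z /= np.linalg.norm(z, axis=1, keepdims=True)

# ||z - z'||^2 = 2 - 2<z, z'>, and <z, z'> has std ~ 1/sqrt(d),
# so distances concentrate around sqrt(2) ~ 1.4142 as d grows.
dists = np.linalg.norm(z[: n // 2] - z[n // 2:], axis=1)
print(dists.mean(), dists.std())  # ~1.414 with a very small spread
```

The clustered counterexample above (Z1 vs. Z2) breaks exactly this uniformity assumption, which is why the theorem no longer applies there.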
"{\"title\": \"To Reviewer #2\", \"comment\": \"\", \"q1\": \"\\u201cThen, to clarify the algorithm, it seems necessary to provide the formulation of objective functions.\\u201d, \\u201cIs the objective still valid or reasonable even it is derived from the equation (10) without posterior inference?\\u201d\", \"a1\": \"The objective function will be provided in the revised version. It is the reconstruction loss || x - \\\\tilde{x} || subject to the spherical constraint on z (equation (10)). There are no posterior inference and no KL-divergence involved in our algorithm. It is very simple.\", \"q2\": \"\\u201cHow does the objective change when centerization and spherization are applied to the GAN?\\u201d\", \"a2\": \"There is no extra objective when applied to GANs. Only centerization and spherization are needed.\", \"q3\": \"\\u201cCompared with using von Mises-Fisher distribution in the vanilla VAE, the advantage of the proposed method is not clear. To my understanding, the main difference seems to be whether using lower bound with posterior inference or deterministic framework without such approximation. However, there are no theoretical or empirical results to show the benefit of the proposed method.\\u201d\", \"a3\": \"This might be the misunderstanding caused by that we didn\\u2019t explicitly write the objective function in the paper. We explain this in Q1. Our SAE algorithm is essentially different from S-VAE (hyper-Spherical VAE). The S-VAE is established on the principle of VAE. So, S-VAE has the drawbacks posed by VAE such as the approximation of posterior inference, the prior dependence, and the reparameterization trick for random variables. But SAE is distribution-agnostic with respect to Wasserstein distance, which is rigorously guaranteed by Theorem 2.\\n\\nActually, we failed to get the convergent results of S-VAE on FFHQ faces of size 128x128. We are now running it on MNIST. The results will be updated in the revised version within several days.\", \"q4\": \"\\u201cWhat dimension do you use as latent dimension in the experiments?\\u201d\", \"a4\": \"We followed the experimental setting of StyleGAN. The 512-dimensional latent vectors are used for StyleGAN, VAE, and SAE on the face datasets including FFHQ and CelebA. For MNIST, we take the 10-dimensional latent codes.\", \"q5\": \"\\u201cDoes the choice of prior distribution affect the experimental results? If so, is there any compatible reason with the intuition of SAE?\\u201d\", \"a5\": \"This question might be another misunderstanding caused by Q1. Actually, there are no any priors involved in SAE during training. We used different priors to test the robustness of SAE and VAE after training was completed. We will make it clear in the revised version.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel autoencoder algorithm, named Spherical AutoEncoder (SAE). In this paper, the authors argue that the sphere structure has good properties in high-dimensional. To leverage the properties, proposed algorithm centerizes latent variables and projects them onto unit sphere. To show the empirical performance of the proposed approach, the authors perform image reconstruction and generation using FFHQ dataset and MNIST dataset.\", \"comments\": [\"I think the proposed approach, using spherical latent space, is interesting and make sense.\", \"As mentioned in section 3.2, the proposed algorithm is reduced to standard autoencoder since it is free from posterior inference. Then, to clarify the algorithm, it seems necessary to provide the formulation of objective functions.\", \"Is the objective still valid or reasonable even it is derived from the equation (10) without posterior inference?\", \"How does the objective change when centerization and spherization are applied to the GAN?\", \"Compared with using von Mises-Fisher distribution in the vanilla VAE, the advantage of the proposed method is not clear. To my understanding, the main difference seems to be whether using lower bound with posterior inference or deterministic framework without such approximation. However, there are no theoretical or empirical results to show the benefit of the proposed method. If theoretical or empirical results with reasonable intuition is provided, it will make the proposed algorithm more valuable.\"], \"questions\": [\"Compare to ProGAN and StyleGAN, is the contribution of the paper to applying centerization to GAN and centerization and spherization to autoencoder?\", \"What dimension do you use as latent dimension in the experiments?\", \"Does the choice of prior distribution affect the experimental results? If so, is there any compatible reason with the intuition of SAE?\"], \"typo\": \"Under equation (10) in page 5: \\\\tilde{z} should be \\\\hat{z}.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Summary\", \"This paper considers the L2 normalization of samples \\u201cz\\u201d from a given prior p(z) in Generative Adversarial Netowks (GAN) and autoencoders. The L2 normalization corresponds to projecting samples onto the surface of a unit-hypersphere. Hence, to attempt to justify this normalization, the authors rely on some already established results regarding high dimensional hyperspheres. In particular, the focus is on the fact that, the Euclidean distance between any given point on a hypersphere and another randomly sampled point on the hypersphere tends to a constant, when the number of dimensions goes to infinity. This result is then used to show that the Wasserstein distance between two arbitrary distributions on a hypersphere converges to a constant when the number of dimensions grows. Based on this result, the authors claim that projecting the latent samples onto the surface of a hypersphere would make GAN less sensitive to the choice of the prior distribution. Moreover, they claim that such normalization would also benefits inference, and that it addresses the issue of variational inference in VAE.\", \"Main comments.\", \"This paper is hard to follow and requires substantial improvements in terms of writing, owing to several grammatical and semantic issues. Moreover, there is a lack rigor; some important claims are supported neither by experiments nor by theoretical analysis. Experiments in the main paper are also weak. I can therefore not recommend acceptance. My detailed comments are below.\", \"An important claim in this paper is that the proposed approach \\u201calleviates variational inference in VAE\\u201d. However, this requires clarification as well as theoretical/empirical justifications.\", \"In the introduction, it is stated that generated samples from VAE may deviate from real data samples, because \\u201cthe posterior q(z|x) cannot match the prior p(z) perfectly\\u201d. However, in VAE we do not expect the posterior to match the prior perfectly, as this would result in useless data representations or inference. Generation issues in VAE may rather be explained by the fact that, in this context we optimize a lower bound on the KL-divergence between the empirical data distribution and the model distribution. The latter objective does not penalize the model distribution if it puts some of its mass in regions where the empirical data distribution is very low or even zero.\", \"Theorem 2 (on the convergence of the Wasserstein distance (W2) on high dimensional hyperspheres) does not seem to hold if, for instance, P and P\\u2019 are empirical distributions with overlapping supports. Further, even when the above Theorem holds, the W2 distance may be relatively high since it is proportional to the square root of the number of samples.\", \"Moreover, why and how would Theorem 2 justify improved inference when projecting latent samples onto a hypersphere?\", \"Please consider revising the following statement in the introduction: \\u201cThe encoder f in VAE approximates the posterior q(z|x)\\u201d. 
The encoder \\u201cf\\u201d in VAE parametrizes the variational posterior.\", \"Some typos,\", \"Abstract, \\u201c\\u2026 by sampling and inference tasks\\u201d -- \\u201con sampling \\u2026\\u201d\", \"Introduction second paragraph after eq 2. \\u201c\\u2026 it also causes the new problems\\u201d \\u2013 \\u201c \\u2026 causes new problems\\u201d\", \"Section 2.1, \\u201cFor convenient analysis \\u2026\\u201d \\u2013 \\u201cFor a convenient \\u2026\\u201d\", \"Second paragraph after Theorem 1. \\u201c\\u2026 perform probabilistic optimizations \\u2026 \\u201d \\u2013 \\u201c\\u2026 optimization \\u2026\\u201d\", \"Section 5.2, second paragraph. Is it Figure 9?\", \"The main recommendations I would make are as follows.\", \"Consider revising the paper to improve its writing.\", \"Provide rigorous theoretical analysis and discussions to support the main claims.\", \"Improve experiments by including more datasets and baselines (e.g., hyperspherical VAE [1]), as well conduct more targeted experiments to give more insights regarding the effect of the L2 normalization on inference and generation.\", \"[1] Davidson, Tim R., et al. \\\"Hyperspherical variational auto-encoders.\\\" UAI, 2018.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed an interesting idea that by regularizing the structure of the latent space into a sphere, we can free VAE from variantional inference framework.\\n\\nHowever, here are several concerns about this paper:\\n1. is this a variant of the Wasserstein auto-encoder?\\n2. the image quality of VAE (CelebA) is not that bad in other VAE papers, maybe tuning the \\\\beta-VAE can also achieve the same quantitative and qualitative results.\\n3. Can you visualize the latent space (z) for the CelebA dataset, also comparing with the results from VAE?\"}",
"{\"comment\": \"Thanks for your very insightful comments, Alex.\\n\\n1) About covering\\n\\nThe case you raised is really challenging. We choose the vector centerization to afford randomness on the sphere. But how well this operation enforces z_i to cover the spherical surface as uniformly as possible is an important topic to study for spherical autoencoder (SAE).\\n\\nThe vector centerization presented in our paper does have flaw. For example, the points on the spherical surface falling into the open positive orthant (z_i > 0) cannot be sampled. Considering that there are all 2^512 orthants in R^512, however, the open positive orthant only takes 1/2^512 part of the whole sphere. So, the negative effect is nearly trivial.\\n\\nWe also figured out another way of sampling on the sphere to circumvent this problem. For any {z_1,...,z_i,...,z_n} drawn from arbitrary distributions, we can first project them on the sphere by z_i <-- z_i/norm(z_i). The projected points probably lie on some specific regions on the sphere. Then we can randomly rotate these points on the sphere by a series of orthogonal matrices that are obtained by orthogonalizing random matrices via Gram\\u2013Schmidt process. In this way, we can get {z_1,...,z_i,...,z_n} that distributes randomly on the sphere as long as the rotation manipulations are sufficient.\\nHowever, this method is not friendly to end-to-end learning for autoencoder. We do not use it in this paper.\\n\\nThe vector centerization and spherization is the simplest way we get to realize our idea, even though it is not perfect. What is most important is that it is very easy to use in the end-to-end architecture of autoencoder.\\n\\n2) About inductive bias\\n\\nTheorem 2 tells that SAE is distribution-agnostic with respect to Wasserstein distance. In other words, it has distributional inductive bias. However, it is very inspiring about your conjecture \\\"Perhaps the inductive bias of the neural network makes this type of issue unlikely\\\".\\n\\nActually, your conjecture leads to the connection between random variables on the sphere and universal approximation theorem (UAT). We also think that this is an alternative way of further exposing the deep reason why the simple spherical constraint can outperform the traditional variational inference. You may refer to the following paper, if interested.\\n\\nSpherical approximate identity neural networks are universal approximators\\n Zarita Zainuddin, Saeed Panahian Fard\\nICNC, 2014\\n\\nYour thoughts are quite inspiring. We will consider the topics you raised seriously for our future work.\", \"title\": \"about covering and inductive bias\"}",
"{\"comment\": \"I think this is an interesting paper and I was quite impressed by the results and elegance of the approach. I have some thoughts about it from a conceptual point of view though.\\n\\n1. I'm curious if the SAE loss going to zero guarantees good samples in a theoretical sense. I'm not sure if this is the case because during training the z's are always projected onto the sphere, but there is no requirement that they cover all of the points on the sphere. So I could imagine a way of packing all of the z's seen during training into a small region on part of the sphere, having these points decode well, and then having all of the other regions decode to bad points. \\n\\nPerhaps the inductive bias of the neural network makes this type of issue unlikely - in either case it makes it interesting that it seems to work so well. (if it's the case it reminds me a bit of the cyclegan, where there is technically a way for the model to do something bad, but it doesn't happen as a result of the inductive bias from the architecture). \\n\\nI think I have a particular construction that you might find interesting. Let\\u2019s say that each real data point is binary, for example x in N^784 (as with binary MNIST). I can encode this digit in a single number xb by laying out the digits: 0.0011010\\u20261 (with each decimal point corresponding to that pixel position). \\n\\nNow let c = sqrt((1 - 2*xb^2) / 2)\\n\\nThen let\\u2019s say z = [xb, -xb, c, -c]. \\n\\nRegardless of xb, so long as it is between (-sqrt(0.5), sqrt(0.5)), this z will be centered and on the sphere. I realize that this is extremely unlikely to be learnable by a NN, especially due to smoothness and its inductive biases. However I still think it's at least interesting to think about. \\n\\n2. I think one reason SAE might work so well in practice is due to the asymmetry in the KL-divergence in the VAE objective, where you have KL(q(z|x) || p(z)). It becomes unbounded and large if q(z|x) ever has support but p(z) doesn't have support. By pushing q(z | x) onto the sphere, and because samples from p(z) are essentially always on the sphere, you guarantee that the KL is at least bounded. In practice this might be enough to make KL(q(z|x)||p(z)) sufficiently small, especially because the SAE doesn\\u2019t have any incentive to concentrate the encoded points in particular parts of z-space, even though it hypothetically could.\", \"title\": \"Some questions related to theory for spherical autoencoder\"}"
]
} |
ryloogSKDS | Deep Orientation Uncertainty Learning based on a Bingham Loss | [
"Igor Gilitschenski",
"Roshni Sahoo",
"Wilko Schwarting",
"Alexander Amini",
"Sertac Karaman",
"Daniela Rus"
] | Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning-based representation for orientation uncertainty. By characterizing uncertainty over unit quaternions with the Bingham distribution, we formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations. | [
"Orientation Estimation",
"Directional Statistics",
"Bingham Distribution"
] | Accept (Poster) | https://openreview.net/pdf?id=ryloogSKDS | https://openreview.net/forum?id=ryloogSKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"-Dd2xEEF2I",
"SylTkGQ3iB",
"HkeGKStvsr",
"BygMvSFPoS",
"HylxNBtvir",
"B1lYHtmY9H",
"ryeBokZ4qB",
"HJx94DXzKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798750857,
1573822949242,
1573520762478,
1573520729979,
1573520679958,
1572579648793,
1572241309202,
1571071794200
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2513/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2513/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2513/AnonReviewer3"
],
[
"~Tolga_Birdal3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers the problem of reasoning about uncertain poses of objects in images. The reviewers agree that this is an interesting direction, and that the paper has interesting technical merit.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Uploaded Revised Version\", \"comment\": \"We would like to thank all reviewers again for their thoughtful feedback. We have uploaded another revised version of the paper. The changes focus on the aspects raised in the reviewers\\u2019 questions:\\n\\n1. We address the main points of Reviewer 2 by including and discuss the training of mixture density networks as well as the incorporation of non-probabilistic baselines (Cosine and MSE loss) on the T-Less dataset. In our new experiments, we also demonstrate how the multi-stage training scheme is beneficial for the unimodal case when operating in a high uncertainty regime.\\n2. We address the points raised by Reviewer 3 in detail by explicitly investigating the role of MAAD and EAAD during training. First, we demonstrate how these metrics can provide insight into the size of the lookup table (we demonstrate how the EAAD may be higher than MAAD if the lookup table does not cover a sufficient range) and provide insights on choosing this range. Second, we further elaborate on approximation quality and overfitting.\\n\\nPlease feel free to get back to us if you have any further questions or feedback and we will be happy to incorporate it.\"}",
"{\"title\": \"Response to Public Comment\", \"comment\": \"Thank you for appreciating our idea and for providing us with some interesting suggestions. We address your questions below.\\n\\n1. Stability of Gram-Schmidt\\nBoth your approach and the Modified Gram-Schmidt (MGS) were added to our experiments. While we did not experience big differences between the Classical Gram-Schmidt (CGS) and MGS, we will add a discussion of numerical robustness to the paper acknowledging the quadratic dependency of CGS on the condition number of the input matrix [GLR05]. \\n\\nFurthermore, we agree that whenever possible simpler neural network modules should be preferred over more complex ones. Particularly when the uncertainty is isotropic (i.e. the first three entries of Z are equal) considerable reductions of the output space are possible and the proposed matrix V can be readily used. As you write in your paper, V(q) is an injective mapping (from a 3 dimensional manifold) to the ring of orthonormal matrices (which is a 6d manifold [Lee03, Example 8.33]). Thus, by using V(q), we would restrict the expressiveness of our loss model.\\n\\n2. Multi-Modal Representation\\nWe are incorporating a discussion of how our approach can be extended to Mixture Models and are currently evaluating Bingham Mixture Density Networks. See the response to the reviewer above for further details.\", \"references\": \"[Lee03] J.M. Lee, Introduction to Smooth Manifolds, Springer, 2003.\\n\\n[GLR05] L. Giraud, J. Langou, M. Rozlo\\u017en\\u00edk, and J. v. d. Eshof. Rounding error analysis of the classical Gram-Schmidt orthogonalization process, Numerische Mathematik 101(87), 2005.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review and for recognizing the contributions of our work. As noted in [LPB2017], calibration and accuracy are two orthogonal concepts and, thus, require independent evaluation. We used the difference between EAAD and MAAD to get some insight into the former that can also easily be interpreted. In practice, the acceptable difference between these two metrics is application dependant. For instance, in certain grasping applications being a few degrees wrong about the orientation of an object might be more acceptable than the same amount of error in motion estimation on autonomous vehicles. We are adding a discussion about this to the experiments section of our paper.\\n\\nReferences\\n[LPB2017] B. Lakshminarayanan A. Pritzel, and C. Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, NeurIPS 2017.\"}",
"{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for considering our paper and recognizing its contribution. We address your questions below.\\n\\n1. \\u201cYou only compared with one baseline. How does the model compare with a loss that is not based on directional statistics or Gaussian models?\\u201d\", \"in_the_newly_added_evaluations_we_included_two_non_probabilistic_baselines_for_upna_and_idiap\": \"The first loss is based on using a mean square error on the difference between the ground-truth and predicted quaternion and the second is a cosine based loss on the biternion representation. We will also add non-probabilistic baselines for T-Less.\\n\\n\\n2. \\u201cHow can you improve the model for multimodal cases?\\u201d\\n\\nMost techniques that are used for estimating multimodal Gaussians are also applicable for the Bingham case. We are currently running evaluations using a Bingham variant of Mixture Density Networks on the T-Less dataset. Analogously to the Gaussian case, this may fail when trained directly. Thus, we first pretrain the MDN assuming the dispersion parameter Z to be fixed. Then, we train the entire network jointly. Similar strategies are also usual for Gaussian MDNs (as e.g. for the MDN baselines in [MICB19]).\\n\\nReferences\\n[MICB19] O. Makansi, E. Ilg, O. Cicek, and T. Brox, Overcoming Limitations of Mixture Density Networks: A Sampling and Fitting Framework for Multimodal Future Prediction, CVPR 2019.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper focuses on the problem of reasoning about uncertain poses and orientations. To address the limitations of current deep learning-based approaches, the authors propose a probabilistic deep learning model with a novel loss function, Bingham loss, to predict uncertain orientations. The experimental results demonstrate the effectiveness of the proposed approach.\\n\\nThis paper is well-motivated and the proposed method addresses important problems in uncertain orientation prediction. The paper is well-supported by theoretical analysis, however, the empirical analysis is a little weak and the model does not consider multimodal cases. For the above reasons, I tend to accept this paper but wouldn't mind rejecting it.\", \"questions\": \"1. You only compared with one baseline. How does the model compare with a loss that is not based on directional statistics or Gaussian models?\\n2. How can you improve the model for multimodal cases?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2513\", \"review\": \"The paper proposes a Brigham loss (based on the Brigham distribution) to model the uncertainty of orientations (an important factor for pose estimation and other tasks). This distribution has the necessary characteristics required to represent orientation uncertainty using quaternions (one way to represent object orientation in 3D) such as antipodal symmetry. The authors propose various additions such as using precomputed lookup tables to represent a simplified version of the normalization constant (to make it computationally tractable), and the use of Expected Absolute Angular Deviation (EAAD) to make the uncertainty of the Bingham distribution more interpretable.\\n\\n+Uncertainty quantification of neural networks is an important problem that I believe should gain more attention so I am happy to see papers such as this one. \\n+Various experiments on multiple datasets show the efficacy of the method as well as out performing or showing comparable results to state-of-the-art\\n\\n-In the caption for Table 1 the author\\u2019s write: \\u201cthe high likelihood and lower difference between EAAD and MAAD indicate that the Bingham loss better captures the underlying noise.\\u201d How much difference between EAAD and MAAD is considered significant and why?\\n\\n-In section 4.5 they write \\u201cWhile Von Mises performs better on the MAAD, we observe that there is a larger difference between the MAAD and EAAD values for the Von Mises distribution than the Bingham distribution. This indicates that the uncertainty estimates of the Von Mises distribution may be overconfident.\\u201d Same question as above. What amount of difference between MAAD and EAAD is considered significant and why?\"}",
"{\"comment\": \"This is indeed an important problem and the authors take an important direction. However, I would like to point out a couple of obvious issues that prevent this paper to be a good one:\\n\\n1. The Gram-Schmidt (GS) orthonormalization process is unnecessary and I would like to discourage the authors and the community from following that path. I understand that today's auto-grad methods make it seamless to use such arbitrary complex functions. But in this case there are two issues: (a). If one were to differentiate the GS by hand one would get a good grasp of the complexity involved. This means that the network will train or run slower. (b). Even if the computational aspects are not a problem, GS is known to be numerically unstable and one usually ends up with vectors that are often not quite orthogonal. Modified-GS (not employed in this paper) is one way to overcome this, but there is no guarantee to avoid the numerical issues altogether.\\n\\nUsing the parallelizable nature of quaternions, in our NeurIPS 2018 work, we have already given an alternative way as a more elegant solution to construct the M matrix for quaternions and this doesn't suffer from numerical issues: https://arxiv.org/pdf/1805.12279.pdf (look above equation 6). I suggest to employ that in future research.\\n\\n2. The paper only deals with the single modal case. This is of some limited interest only. I would certainly like to see a better treatment of this problem, rather than a half done solution.\\nThe paper claims that it deliberately avoids using \\\"Mixture density networks\\\" (MDN) for evaluation reasons. However, even if used, MDNs are known to suffer for such higher dimensional and complex multimodal distributions. One certainly needs cleverer strategies and I encourage the authors to continue to work on the problem, making it a strong contribution. It is not very convincing to just leave out the multimodal scenario.\", \"title\": \"Good idea. Please polish further though.\"}"
]
} |
B1xoserKPH | Analyzing Privacy Loss in Updates of Natural Language Models | [
"Shruti Tople",
"Marc Brockschmidt",
"Boris Köpf",
"Olga Ohrimenko",
"Santiago Zanella-Béguelin"
] | To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update. We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect. | [
"Language Modelling",
"Privacy"
] | Reject | https://openreview.net/pdf?id=B1xoserKPH | https://openreview.net/forum?id=B1xoserKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"1T8Z0uBOH",
"BkxEloKniH",
"B1xn_vNhjr",
"Syl2wVmXjH",
"S1lzB4m7jS",
"ryg4Q4m7or",
"H1xFTQQmiS",
"B1gdvQXmoH",
"SyejLWDpKr",
"BJg-Ic3hFr",
"H1gshz42KB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750826,
1573849835566,
1573828467676,
1573233764272,
1573233722064,
1573233692338,
1573233600557,
1573233503548,
1571807570880,
1571764809113,
1571730099389
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2512/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2512/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2512/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2512/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2512/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper report empirical implications of privacy \\u2018leaks\\u2019 in language models. Reviewers generally agree that the results look promising and interesting, but the paper isn\\u2019t fully developed yet. A few pointed out that framing the paper better to better indicate broader implications of the observed symptoms would greatly improve the paper. Another pointed out better placing this work in the context of other related work. Overall, this paper could use another cycle of polishing/enhancing the results.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Author\", \"comment\": \"By well-trained, I mean models that are empirically competitive on LM benchmarks. The point you make about larger/higher-capacity models memorizing large amounts of data is true and I'd also theorize that these models will exacerbate the problem that you observe. I didn't consider that initially.\\n\\nI appreciate the extra experiments that help clarify the approach. Thanks for clarifying! Overall I think the paper is still a weak accept and I recommend my initial score of a 6.\"}",
"{\"title\": \"Addendum\", \"comment\": \"The latest revision adds experimental results on re-training with different data splits and continued training on real-world data from the 20 Newsgroups dataset. See RQ4A, RQ4B and Table 5 in Section 3.3 for an analysis of the results.\\nThis expands the experiments on synthetic data (canary phrases) added in the previous revision.\"}",
"{\"title\": \"Discussion of Review #1\", \"comment\": \"Thank you for recognizing the importance of the problem and giving feedback to our submission.\\n\\n> Concerns: I don't know how generalizable these results would be on \\n> really well-trained language models (rnn, convolution-based, or\\n> transformer-based).\\n\\nIt would be helpful if you could be more precise about what you mean by \\u201creally well-trained\\\" here. The tested models are implementing standard practices of token-level language modeling (i.e., regularization via (recurrent) dropout, tying of token embeddings and output projection, \\u2026) but are not aiming to compete with recent high-capacity efforts such as GPT-2. The experiments show that our findings hold for both RNN-based and Transformer-based models (and it would be surprising if convnets would behave differently) of small and medium capacity. It seems intuitive that higher-capacity models would only exacerbate the problem, as they are known to be prone to memorizing substantial input chunks.\\n\\nOur paper tries to provide experimental evidence of an angle of attack that has not been studied well, and thus serves as a warning signal to practitioners that are deploying language models in the wild.\\n\\n> The related work section doesn't seem particularly well put together,\\n> so its difficult to place the work in appropriate context and gauge\\n> its impact.\\n\\nOur work falls under \\u201cattacks on ML models\\u201d topic and specifically language models. It is not yet well-understood what language models memorize and can leak; to this end, our paper proposes a new angle of extracting data that ML community has to be aware of. We believe we discuss the most relevant previous works in this topic, but we welcome suggestions about anything we may have missed.\\n\\n> Other Thoughts: I'd like more thorough error analysis looking at exactly\\n> what kinds of strings/more nuanced properties of sequences that get\\n> a high differential score. \\n\\nIn initial experiments, we found that out-of-distribution sequences (such as the \\u201cMy social security number is [\\u2026]\\u201d used in Carlini et al. (2019)) are very easy to extract from the difference between two models. This is why we focused on manually creating sentences that are \\\"near\\\" to the training data by following a simple valid grammatical format and then varied the frequency of the used terms. In practice, it turns out that the models behave very much like expected: If canaries use very frequent words, extracting them becomes harder; if they use infrequent words, extracting them becomes easier.\\n\\n> Overall I think this work is interesting and I would encourage the\\n> authors to try and add as much quantitative evaluation as possible,\\n> but also try and include qualitative information regarding specific\\n> sequences after prodding the models. Those could go a long way in\\n> strengthening the paper.\\n\\nWe have updated the paper with experiments using several new settings of data splits and continued training setup (for example, where the model is trained on the original data, and then \\u201cfine-tuned\\u201d on a smaller additional dataset). We refer the reviewer for detailed explanation in the general response. New content is presented in Section 3.2 (RQ3A, RQ3B) of the updated submission.\"}",
"{\"title\": \"Discussion of Review #3 (Part 2/2)\", \"comment\": \"> I would also note that the motivation, a predictive keyboard, is not a\\n> situation in which maximizing accuracy is generally desirable: users\\n> tend to find this creepy rather than helpful. This is a nice idea but\\n> would benefit from some more polishing and more extensive testing.\\n\\nIn the Smart Compose setting, the user, as she is typing her email, is given several choices for the next token. In order for Smart Compose to be useful, there should be an intersection between what the user intends to write and the choices suggested. Hence, maximizing accuracy is important, though of course striking a balance to avoid \\u201ctoo personal\\u201d suggestions is important. How to strike this balance is out of scope for this paper.\\nHowever, if models are personalized (or at least customized to a group similar to a specific user), they do have to shift their recommendations slightly to better match the data distribution of the data used for customization. In our submission we argue that this _shift_ is already leaking private information. You seem to be referring to the \\u201cMy social security number is\\u201d prefix setting of the Secret Sharer work of Carlini et al. (2019), in which the leakage happens because private information is the most likely prediction. However, our analysis of model updates shows that leakage also happens when (a) the leaked data is _not_ the top prediction of any individual model and (b) no prefixes are available.\"}",
"{\"title\": \"Discussion of Review #3 (Part 1/2)\", \"comment\": \"Thank you for your feedback and we hope that our comments below can resolve your concerns. Note that this response is split into two parts for character limit reasons.\\n\\n> the synthetic experiments around which much of the paper is based may\\n> not be sufficiently novel\\n\\nThe paper presents the first study of privacy implications of releasing snapshots of language models trained on overlapping data. Contributions include the new attack scenario and how to carry out the attack in a realistic setting with minimal assumptions on the attacker.\\n\\n> give little indication of broader implications.\", \"breadth_of_implications\": \"The main implication is that practitioners should be very careful when releasing models that have been trained on overlapping datasets since the difference between the datasets, as we show, can be leaked. Since releasing updated models is common (e.g., due to GDPR), we believe it is a serious concern the language modelling community needs to be aware of.\\n\\n> It would have been more convincing if these results replicated with two\\n> splits of the same dataset, rather than identical datasets with one \\n> augmented by canary tokens. \\n\\nSince the submission deadline, we performed a range of experiments to evaluate information leakage in other data-overlapping scenarios. The updated submission presents these results in Table 2 and in Section 3.2. Summary of this experiment is given in the general response above.\\n\\n> The qualitative evaluation of subject-specific updates is also not\\n> sufficiently informative. It would have been useful to define a\\n> specific attack and see under what circumstances such an attack would\\n> succeed. In the current results, I am not convinced that any of the\\n> phrases in Table 3 represent a privacy violation.\\n\\nThe qualitative evaluation in Section 3.3 shows that our attacks recover phrases related to the content of the data used to update the model rather than to the rest of the data. If this data has been selected from private conversations instead of public discussions in a newsgroup, an attacker would be able to infer recurrent conversation topics, violating the privacy of the participants.\\n\\nTo define what it means for a specific attack to succeed, we would need a quantitative measure of success. We are exploring one such measure: train a classifier that discriminates between the public and training data used to update a model and compute the sum of the probabilities with which the discriminator classifies the phrases extracted as belonging to the private data. We think that the phrases output by our attack would be overwhelmingly classified as coming from the private data. We expect to include the results of our experiments when finalizing the submission.\\n\\n> The differential privacy experiment seems to be missing many details:\\n\\nWe have updated Section 4 as per reviewer\\u2019s comments and outline them below:\\n\\n> what dataset was this trained on?\\n\\nWe used the Penn Treebank dataset.\\n\\n> Are the accuracy values for the training set or a separate testing set?\\n\\nAll accuracies that we report are for a separate validation set.\\n-\\tTraining accuracies are (29.73%,11.52%, 13%) for (non-DP, eps=5, eps=111), respectively. 
\\n-\\tValidation accuracies are (23%, 11.89%, 13.34%) for (non-DP, eps=5, eps=111), respectively.\\n\\nWe note that the discrepancy between training and validation accuracies is consistent with previous results on DP training.\\n\\n> Other works have shown that it is possible to train a differentially\\n> private language model without large sacrifices in accuracy, so it\\n> would be helpful to know what differentiates this experiment.\\n\\nThe language models trained in McMahan et al. consider user-level privacy (i.e., batch of token-sequences), while we consider single token-sequence-level privacy. Hence, the gradient clipping is done per sequence in our case and not per batch. The difference between batch-level data in terms of gradients is smaller than that of sequence-level gradients (intuitively the differences are ``averaged\\u2019\\u2019 in a batch). As a result, updates are not \\u201ctoo different\\u201d between users. Hence, the noise, which is proportional to the change in the gradient, that needs to be added is much smaller when guaranteeing user-level privacy as compared to sequence-level privacy.\\nThe model trained by Carlini et al. is for a character prediction task which is a much simpler task than token prediction considered in our paper.\\nThat said, training privacy-preserving models with sequence-level privacy and good utility is an important research question but out of scope for this work.\"}",
"{\"title\": \"Discussion of Review #2\", \"comment\": \"Thank you for engaging with our submission and asking questions!\\n\\n> According to the current paper, the privacy implication seems to be\\n> defined in terms of general sequences in training datasets. If this is\\n> the case, I don\\u2019t think such privacy implication is meaningful because\\n> our language models should memorize some general information to\\n> achieve their tasks.\\n\\nOur experiments show that much more than general information is revealed through model updates, because _specific_ phrases occurring in the training data as rarely as one in a million times can be extracted.\\nAs an example, consider the case of a \\u201cSmart Compose\\u201d feature (i.e., email auto-completion) using a model trained on data from a given company, and from which employees can extract phrases of the form \\u201cWe will close [city] office\\u201d, because similar phrases occur multiple times in emails of C-suite managers.\\n\\n> 1. In the experiments, why there are only 20000 vocabulary size for\\n> Wikitext-103 datasets?\\n\\n20k was primarily chosen for performance reasons, both during our experiments as well as when considering the application scenario of predictive keyboards on client devices (where deploying the full Wikitext-103 vocabulary of size 267k would be infeasible in most cases). While 20k is indeed somewhat arbitrary, we are confident that increasing the vocabulary size (i.e., increasing the capacity of the model) would not change the direction of our results in a substantial way.\\n\\n> 2. It is unclear how to construct canary phrases.\\n\\nOur experiments use canaries that\\n * serve as a proxy for \\\"private data\\\", i.e. they should be grammatically correct but must not appear in the original dataset, and\\n * exhibit different token frequency characteristics \\nTo this end, we choose different valid phrase structures (e.g. Subject, Verb, Adverb, Compound Object) and instantiate each placeholder with a token that has the desired frequency characteristic in the dataset under consideration.\\nFor our experiments we construct the canaries manually, but automation is straightforward. We updated the description of the canary construction in Section 3.2. accordingly.\\n\\n> 3. After constructing the new dataset, the model is retrained or\\n> trained in the online way?\\n\\nThe submitted version of our paper retrained the model from scratch. However, we have updated the paper with experiments using a continued training setup, in which the model is trained on the original data, and then \\u201cfine-tuned\\u201d on a smaller additional dataset. Please see RQ3B and Table 2 in Section 3.2 and its summary in the general response.\\n\\n> 4. Since the results for Wikitext-103 is not finished, the authors\\n> should remove the results on this dataset.\\n\\nWe have updated Table 1 in the paper with the results for Wikitext-103.\\n\\n> 5. What is the perplexity of the trained models?\\n\\nThe validation perplexity for the trained models is as follows (we added this information to Table 1):\", \"penn_treebank\": \"120.90\\nReddit (RNN): 79.63\\nReddit (Transformer): 69.29\", \"wikitext_103\": \"48.59\\n\\n> 6. 
How to choose initial sequence in real data experiments?\\n\\nWhen we compute the differential rank in RQ1,2,4-7, we compare the\\nchanges in probability of all token sequences [*], that is, there is no\\nneed for the adversary to choose any specific initial sequence.\\nIn RQ3 we show that partial knowledge (i.e., knowledge of an initial\\nsequence) about data used in the update can lead to more effective attacks.\\n\\n[*] In practice, we approximate this with a beam search, as discussed in Section 2.4.\\n\\n> 7. When you applying DP mechanism,how did you define the neighboring\\n> datasets, and how did you implement it (what is the clipping level, how\\n> did you calculate privacy loss for language models)?\", \"we_use_sequence_level_differential_privacy\": \"i.e., two neighbouring datasets differ in a single sequence of tokens.\", \"we_used_the_tensorflow_privacy_library_for\": \"(1)\\tTraining with differentially private SGD. We used the Sampled Gaussian Mechanism that is provided by the library with the following parameters: \\n- For eps=5: noise_multiplier=0.7, l2_norm_clip=5.0, sampling probability= .0048\\n- For eps=111: noise_multiplier=0.3, l2_norm_clip=5.0, sampling_probability= .0024.\\n(2)\\tComputing privacy loss. The library uses a Renyi differentially privacy accountant for computing the total privacy loss.\\n\\n> 8. \\\\epsilon = 111 seems that the model will provide no privacy guarantee\\n> according to the definition of differential privacy?\\n\\nIndeed, for a large epsilon, DP provides weak theoretical guarantees. However, our experiments show that it can still provide effective protection against our attack. This confirms results reported by Carlini et al. who show that current DP analyses come with (potentially overly) conservative bounds.\"}",
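To make the DP-SGD setup above concrete, here is a hypothetical sketch using the TensorFlow Privacy library (circa-2019 API), wiring in the eps=5 parameters quoted in the response. The learning rate, number of microbatches, step count, and delta are our assumptions, and the model/data plumbing is omitted.

```python
# Hypothetical reconstruction of the DP training setup described above.
from tensorflow_privacy.privacy.optimizers.dp_optimizer import (
    DPGradientDescentGaussianOptimizer,
)
from tensorflow_privacy.privacy.analysis.rdp_accountant import (
    compute_rdp,
    get_privacy_spent,
)

optimizer = DPGradientDescentGaussianOptimizer(
    l2_norm_clip=5.0,       # per-sequence gradient clipping (from the response)
    noise_multiplier=0.7,   # Sampled Gaussian Mechanism noise, eps=5 setting
    num_microbatches=1,     # clip each token sequence individually (assumption)
    learning_rate=0.1,      # assumption
)

# Renyi DP accounting: q is the sampling probability quoted above;
# the step count and delta are placeholders.
orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
rdp = compute_rdp(q=0.0048, noise_multiplier=0.7, steps=10_000, orders=orders)
eps, _, opt_order = get_privacy_spent(orders, rdp, target_delta=1e-5)
print(f"epsilon = {eps:.1f} at delta = 1e-5")
```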
"{\"title\": \"General Review Response and Overview of Paper Revision\", \"comment\": \"We would like to clarify the core contributions of our paper. Training ML models on private data raises concerns about how much of this data is memorized and leaked by the models. In this paper, we advance the state-of-the art in this space as follows:\\n\\n1.\\tWe analyze privacy in an important novel attack scenario: Given API access to two models, one trained on a dataset $D$ and the other on $D + \\\\Delta$, where $\\\\Delta$ includes private data, is it possible to extract information about $\\\\Delta$? This question needs to be answered, for example, when augmenting models that are pre-trained on massive public datasets with private data, and when deleting a user\\u2019s data from a dataset, e.g., following GDPR. \\n\\n2.\\tWe show that the threat to privacy is real: An attacker can successfully recover information about $\\\\Delta$ from the inference outputs of the models. The attack is effective even without background knowledge about $D$ or $\\\\Delta$.\\n\\n\\n= Summary of changes we made to the paper:\\t\\n1. Added missing values for Wikitext-103 model in Table 1.\\n2. Added the validation perplexity of each of the models in Table 1.\\n3. Added experimental results on re-training with different data splits and continued training. RQ3A and RQ3B and Table 2 summarize the results of our experiments.\\n4. Clarified the construction of canaries in Section 3.2.\\n5. Clarified DP experiments in Section 4.\\n\\n= Summary of new experimental results (Table 2, Section 3.2):\\nThe original submission extracted information about $\\\\Delta$ between model $M$, trained on data $D_{orig}$, and updated model $M\\u2019$, trained on $D_{orig}$ and $\\\\Delta$, where $\\\\Delta$ was either canary phrases or a newsgroup. We added results for the following settings:\\n1.\\t$M\\u2019$ is trained on $D_{orig}$ +canaries+$D_{extra}$ [RQ3A in Section 3.2]:: Additional text $ D_{extra}$ did not affect the differential score (DS) of the canary phrase as the score remained constant for different splits between $D_{orig}$ and $D_{extra}$. Thus, the canaries are susceptible to leakage even when the updated model is trained using additional dataset.\\n2.\\tContinued training [RQ3B in Section 3.2]: $M\\u2019$ is initialized with parameters of $M$ and is trained further with new data $D_{extra}$ and canaries. In this setting, we observed that DS values have increased (i.e., higher susceptibility to leakage) in comparison to when $M\\u2019$ is trained from scratch on $D_{orig}$+$D_{extra}$+canaries. \\n3.\\tContinued training with two stages [RQ3B in Section 3.2]: An intermediate model $\\\\tilde{M}$ is updated as above, i.e., initialized with $M$ and updated with $D_{extra}$+canaries. The final model, $M\\u2019$ is trained starting from $\\\\tilde {M}$ and extra dataset $D\\u2019_{extra}$. We observed that the DS substantially reduces in this setting making it suitable as a potential mitigation strategy.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the privacy issue of widely used neural language models in the current literature. The authors consider the privacy implication phenomena of two model snapshots before and after an update. The updating setting considered in this paper is kind of interesting. However, the contribution of the current paper is not strong enough and there are many unclear experimental settings in the current paper.\\n\\nAccording to the current paper, the privacy implication seems to be defined in terms of general sequences in training datasets. If this is the case, I don\\u2019t think such privacy implication is meaningful because our language models should memorize some general information to achieve their tasks.\", \"there_are_some_unclear_settings_in_the_experiments\": \"1.In the experiments, why there are only 20000 vocabulary size for Wikitext-103 datasets?\\n2.It is unclear how to construct canary phrases. \\n3.After constructing the new dataset, the model is retrained or trained in the online way?\\n4.Since the results for Wikitext-103 is not finished, the authors should remove the results on this dataset.\\n5.What is the perplexity of the trained models?\\n6.How to choose initial sequence in real data experiments?\\n7.When you applying DP mechanism, how did you define the neighboring datasets, and how did you implement it (what is the clipping level, how did you calculate privacy loss for language models)?\\n8.$\\\\epsilon=111$ seems that the model will provide no privacy guarantee according to the definition of differential privacy?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides an empirical evaluation of the privacy implications of releasing updated versions of language models. The authors show how access to two sequential snapshots of a trained language model can reveal highly specific information about the content of the data used to update the model, even when that data is in-distribution.\\n\\nThe paper contains easy to understand, concrete experiments and results, but seems altogether a little underdeveloped. The methodology is sound, but the synthetic experiments around which much of the paper is based may not be sufficiently novel and give little indication of broader implications. It would have been more convincing if these results replicated with two splits of the same dataset, rather than identical datasets with one augmented by canary tokens. \\n \\nThe qualitative evaluation of subject-specific updates is also not sufficiently informative. It would have been useful to define a specific attack and see under what circumstances such an attack would succeed. In the current results, I am not convinced that any of the phrases in Table 3 represent a privacy violation.\", \"the_differential_privacy_experiment_seems_to_be_missing_many_details\": \"what dataset was this trained on? Are the accuracy values for the training set or a separate testing set? Other works have shown that it is possible to train a differentially private language model without large sacrifices in accuracy, so it would be helpful to know what differentiates this experiment.\\n\\nI would also note that the motivation, a predictive keyboard, is not a situation in which maximizing accuracy is generally desirable: users tend to find this creepy rather than helpful.\\n\\nThis is a nice idea but would benefit from some more polishing and more extensive testing.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: This paper looks at privacy concerns regarding data for a specific model before and after a single update. It discusses the privacy concerns thoroughly and look at language modeling as a representative task. They find that there are plenty of cases namely when the composition of the sequences involve low frequency words, that a lot of information leak occurs.\", \"positives\": \"The ideas and style of research is nice. This is an important problem and I think this paper does a good job investigating this in the context of language modeling. I do hope the community (and I think the community is) moving towards being aware of these sorts of privacy issues.\", \"concerns\": \"I don't know how generalizable these results would be on really well-trained language models (rnn, convolution-based, or transformer-based). The related work section doesn't seem particularly well put together, so its difficult to place the work in appropriate context and gauge its impact.\", \"other_thoughts\": \"I'd like more thorough error analysis looking at exactly what kinds of strings/more nuanced properties of sequences that get a high differential score.\\n\\nOverall I think this work is interesting and I would encourage the authors to try and add as much quantitative evaluation as possible, but also try and include qualitative information regarding specific sequences after prodding the models. Those could go a long way in strengthening the paper.\"}"
]
} |
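The record above turns on a "differential score" (DS) that compares two language-model snapshots to detect leakage of the update data. The paper's exact definition is not reproduced in this record, so the following Python sketch assumes one plausible form, the sum of per-token log-probability gains from the old model M to the updated model M'; the function and model names (`differential_score`, `lp_old`, `lp_new`) and the toy unigram tables are hypothetical stand-ins, not the paper's implementation.

```python
import math

def differential_score(phrase_tokens, logprob_old, logprob_new):
    """Sum of per-token log-probability gains from model M to updated model M'.

    logprob_old / logprob_new: callables mapping (prefix, token) -> log p(token | prefix).
    A large positive score suggests the phrase became much more likely after the
    update, i.e. it may have been memorized from the update data Delta.
    """
    score, prefix = 0.0, []
    for tok in phrase_tokens:
        score += logprob_new(prefix, tok) - logprob_old(prefix, tok)
        prefix.append(tok)
    return score

# Toy stand-ins for the two model snapshots (context-independent unigram tables).
p_old = {"the": 0.5, "canary": 1e-6, "code": 0.1}
p_new = {"the": 0.5, "canary": 1e-3, "code": 0.1}
lp_old = lambda prefix, t: math.log(p_old.get(t, 1e-9))
lp_new = lambda prefix, t: math.log(p_new.get(t, 1e-9))
print(differential_score(["the", "canary", "code"], lp_old, lp_new))  # ~6.9
```

Under this reading, the two-stage continued training discussed in RQ3B would reduce leakage by lowering exactly these per-token gains before the model is released.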
HygPjlrYvB | Learning from Positive and Unlabeled Data with Adversarial Training | [
"Wenpeng Hu",
"Ran Le",
"Bing Liu",
"Feng Ji",
"Haiqing Chen",
"Dongyan Zhao",
"Jinwen Ma",
"Rui Yan"
] | Positive-unlabeled (PU) learning learns a binary classifier using only positive and unlabeled examples without labeled negative examples. This paper shows that the GAN (Generative Adversarial Networks) style of adversarial training is quite suitable for PU learning. GAN learns a generator to generate data (e.g., images) to fool a discriminator which tries to determine whether the generated data belong to a (positive) training class. PU learning is similar and can be naturally cast as trying to identify (not generate) likely positive data from the unlabeled set, also to fool a discriminator that determines whether the identified likely positive data from the unlabeled set (U) are indeed positive (P). A direct adaptation of GAN for PU learning does not produce a strong classifier. This paper proposes a more effective method called Predictive Adversarial Networks (PAN) using a new objective function based on KL-divergence, which performs much better. Empirical evaluation using both image and text data shows the effectiveness of PAN. | [
"Positive and Unlabeled learning"
] | Reject | https://openreview.net/pdf?id=HygPjlrYvB | https://openreview.net/forum?id=HygPjlrYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GVpbncQP92",
"ryxIKrdsiS",
"H1enVb4Kor",
"SJeZrS1_sr",
"r1lbxrkdjH",
"SJg7FV1OoS",
"SylVX4J_jH",
"HkxCbL-e9H",
"SylW40kRFH",
"Bke8e-RTFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750796,
1573778814383,
1573630259926,
1573545272954,
1573545193400,
1573545083084,
1573544987682,
1571980806014,
1571843625307,
1571836142103
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2511/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2511/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2511/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2511/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2511/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Thanks for your feedback to the reviewers, which helped us a lot to better understand your paper.\\nThrough the discussion, the overall evaluation of this paper was significantly improved.\\nHowever, given the very high competition at ICLR2020, this submission is still below the bar unfortunately.\\nWe hope that the discussion with the reviewers will help you improve your paper for potential future publication.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you very much for your new comments.\", \"comment\": \"Thank you very much for your new comments.\", \"re\": \"My major concern is \\\"whether or not we can obtain the optimal (unbiased) classifier by the optimization in Eq. (3)?\\\". It is obvious in case of Eq. (2), because D(.) performs worst if C(.) perfectly extracts the positive samples from unlabeled data. On the other hand, it is not clear in case of Eq. (3). When we only consider term I, D(.) is biased since the unlabeled data contain the positive samples. I imagine that term II and III reduce this bias to obtain the unbiased classifier C(.), but it is not clearly shown. (While I understand the authors' intention described in section 4.1, it is not supported well in theory.) In addition, tuning tuning lambda seems to play almost same role with the class-prior estimation, if the above intuition is correct.\\n\\nResponse> Our method PAN (Eq. (3) or (4)) basically follows the similar idea to that of Eq. (2). In Eq. (2), D performs worst if C(.) perfectly extracts the positive samples from the unlabeled set, which is correct as D(.) is unable to classify/separate the given positive data and the possible positives x\\u2019 extracted by C(.). In the case of PAN, that also means D(.) gives high positive probability scores to x\\u2019 like C(.). Thus, the final training result is that D(.) and C(.) give similar predictions, or D(.) cannot move away from C(.) (meaning D(.) also gives high scores to the examples that get high scores from C(.)).\\n\\nWe agree that Eq. (3) is harder to understand as it is different from GAN and also because for KL-divergence, the probabilities of a distribution can go up or down in order to match another distribution, which makes it more difficult to explain as there are many cases. Yes, your intuition is correct. Terms II and III try to correct the bias of D(.) in term I of Eq. (3). But let us see the idea using Eq. (4), which is derived from Eq. (3) for training and it is much clearer than Eq. (3). Note that the bias in term I in Eq (3) will result in high precision and low recall for the positive class. Now back to Eq (4) and let us imagine that most examples are regarded as negatives by C(.) (low recall). Then, from Eq. (4), we can see the value of term V is below zero. When optimizing D(.), term VI will push D(.) up for these examples, and thus the bias is reduced and the low recall problem is mitigated as in the next optimization iteration, C(.) for the examples will also go up following D(.). We have added more explanation in the paper using Eq. (4) in Appendix C.\\n\\nRegarding lambda, in our experiments it is fixed to 0.0001 for all experiments. It can have some indirect effect of correcting the bias but we believe the main effect is from above. \\n\\nHope our explanation is clear. If you have any additional questions, please let us know. We will address or clarify them quickly.\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank the authors for the response.\\n\\n1. My comment is not about how C and D are implemented but about their mathematical definitions. Specifically, clarifying the input and output space of the function is important. The authors use C(.) for a vector-to-scalar mapping, though C_i is for a vector-to-vector mapping. \\n\\n3. My major concern is \\\"whether or not we can obtain the optimal (unbiased) classifier by the optimization in Eq. (3)?\\\". It is obvious in case of Eq. (2), because D performs worst if C perfectly extracts the positive samples from unlabaled data. On the other hand, it is not clear in case of Eq. (3). When we only consider term I, D is biased since the unlabaled data contain the positive samples. I imagine that term II and III reduce this bias to obtain the unbiased classifier C, but it is not clearly shown. (While I understand the authors' intention described in section 4.1, it is not supported well in theory.) In addition, tuning lambda seems to play almost same role with the class-prior estimation, if the above intuition is correct.\"}",
"{\"title\": \"Thank you very much for your helpful comments. (PART 2)\", \"comment\": \"Thank you very much for your helpful comments. We have addressed your concerns in the revised paper (uploaded). Below are our answers to your questions.\", \"re\": \"\\u201cAlthough the problem setting is quite different, the idea of this paper is partially similar to the importance weighting technique adopted in some recent domain adaptation methods [R1, R2]. Do you have any comment on that?\\u201d\\n\\nResponse> Thanks for pointing this out and the two relevant papers. We read the two papers and have cited them and compared them with our work. Our a-GAN method has some similarity with the weighted adversarial nets (WAN), but our PAN differs significantly from WAN (as PAN differs significantly from a-GAN). That is because although WAN weighted D by w(z) but the adversarial training procedure is the same as the original GAN (similar to our a-GAN). The examples generated by G are fed into D for discrimination. A-GAN uses the same strategy, which did not work well in our case. That is why we designed a new formulation in PAN, which uses KL-divergence and the three terms in Eq. (3) to solve the problem as we discussed in Section 4. \\n\\nHope our responses are clear. If you have any additional questions, please let us know. We will address or clarify them.\"}",
"{\"title\": \"Thank you very much for your helpful comments. (PART 1)\", \"comment\": \"Thank you very much for your helpful comments. We have addressed your concerns in the revised paper (uploaded). Below are our answers to your questions.\", \"re\": \"\\u201cAbout II: the authors explain the role of this term by min-max game between C and D during optimization, but the most important point here is what will happen when we obtain the optimal C and D after the optimization. What property or behavior do the optimal C and D have?\\u201d and \\u201c- What do the authors want to claim with Proposition 1? The right-hand side of Eq. (5) cannot be easily calculated due to the density ratio between P^p and P^u. There is no explanation about what f and eps mean. What ``optimal\\\" means is also ambiguous.\\u201d :\\n\\nResponse> Intuitively, the behaviors of optimal C and D are that C gives the same prediction as D while D cannot move away from C, which means D also give high scores to examples that get high scores from C. \\n\\nWe use Proposition 1 to decide the decision surface of our model. However, due to the complexity of the PU learning setting, f() is very complex (but it can be computed). Thus, we showed its properties, which we believe is sufficient. Please see Appendix C. We added explanations of the behaviors of optimal C and D, and moved Proposition 1 to Appendix C.\\n\\nHope our responses are clear. If you have any additional questions, please let us know. We will address or clarify them.\"}",
"{\"title\": \"Thank you very much for your helpful comments.\", \"comment\": \"Thank you very much for your helpful comments. We have addressed your concerns and improved the clarity of the paper and uploaded it.\", \"re\": \"The authors claim that Eq. (2) cannot be trained in an end-to-end fashion directly, this statement may need some modification since there are some existing works replacing c(x) by a score function or some other continuous function and then this direct adaptation can be trained, for example, see Eq. (5) in \\u201cDiscriminative adversarial networks for positive-unlabeled learning. arXiv:1906.00642, 2019\\u201d. Can any explanation be given on this?\\n\\nResponse> We are wondering where we made that claim. We did not make that claim in the submitted version. Could you please check again? If you find that claim, please let us know the location and we will definitely revise it. Actually, our a-GAN is trained in an end-to-end fashion. The recent arXiv paper (which should be done at about the same time as our paper) that you mentioned is similar to a-GAN, but our PAN differs from it significantly. We have cited and discussed it in the revised version.\", \"for_other_questions\": \"Response> Thank you for your suggestions to improve the clarity of the paper. We have revised it and make things clearer. About hyperparameter tuning, we gave an analysis in Appendix D.2. Could you refer to that for more details and let us know whether it is satisfactory. \\n\\nIf you have any additional questions, please let us know. We will address or clarify them.\"}",
"{\"title\": \"Thank you very much for your helpful comments.\", \"comment\": \"Thank you very much for your helpful comments. We have addressed your concerns in the revised paper, which has been uploaded.\", \"re\": \"* Using MLP classifier for the text classification (e.g., for YELP) makes a very weak baseline for the system. Also, training the word embedding by the system itself is unrealistic. Therefore, the sentence might need to be rewritten.\\n\\nResponse> Thanks. We used pre-trained word embeddings learned by the skip gram method of word2vec on the corresponding datasets. Perhaps there is a misunderstanding about classifier. The classifier is a 2-layer convolutional network (CNN), with 5 * 100 and 3 * 100 convolutions for layers 1 and 2 respectively, and 100 filters for each layer. An MLP layer follows to map the output features to the final decision scores. Only the MNIST dataset uses MLP only.\\n\\n**Thanks for pointing out some readability issues. We have revised the paper and uploaded the new version. If you have any more questions, please let us know. We will address them and make everything clear.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed an interesting idea of using two adversarial classifiers for PU learning. The first classifier tries to introduce samples from unlabeled data that are similar to the existing labeled positive data, and the second one tries to detect if a sample has a ground truth label or drawn from unlabeled data by the first classifier (fake). The idea of the paper is interesting; it is well-motivated and well supported with a range of experiments from NLP and computer vision tasks. The paper is a good read but required a pass of proofreading; some rearrangement of the concepts (for example, in second paragraph C(.) and D(.) is used, but they are introduced properly in section 4. Also, the paper could use some clarifications.\\n\\n* How the proposed method handles unlabeled positive samples that have a different distribution from P or has a similar distribution with some of the negative samples that might exist in unlabeled samples. \\n\\n* The experiment section could have enjoyed from an ablation study in which a system that only implements terms I and II from Eq(3). The authors mentioned that such an objective function is asymmetric but didn't explore the implications of such an objective function in the empirical experiments.\\n\\n* PGAN's results are not compared in the fair condition since the PU version of CIFAR 10 is different from PGAN's version.\\n\\n* Using MLP classifier for the text classification (e.g., for YELP) makes a very weak baseline for the system. Also, training the word embedding by the system itself is unrealistic. Therefore, the sentence might need to be rewritten.\\n\\n* Some readability issues: \\n(i) C(.) and D(.) needs an introduction in the section \\\"I Introduction\\\" before their usage.\\n(ii) The idea could be illustrated easily. Such a figure significantly improves the readability of the system.\\n(iii) be careful with the use of \\\\cite{} and \\\\citep{} interchangeably (\\\"{Liu et al., 2003; Shi et al., 2018\\\" -> Liu et al. (2003) and Shi et al. (2018) ..., \\n(iv) The first paragraph of section 2 should be split into two from this phrase \\\"None of these works...\\\"\\n(v) Please rewrite the latter half of paragraph 2 in section 2. Also, please rewrite the beginning sentences of section 4.1 and the final paragraph of section 3.\\n(vi) right after equation (2), please change x_s to \\\\mathbf(x)^s for the consistency of your formulation.\\n(vii) favorible -> favorable, radio (in section 5.1) -> ratio,\\n(viii) please add a reference for this statement. \\\"This is one of the best architectures for CIFAR10.\\\"\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers the problem of learning a binary classifier from only positive and unlabeled data (PU learning), where they develop a Predictive Adversarial Networks (PAN) method by using the GAN-like network architecture with a KL-divergence based objective function. Experiments and comparisons with SOTA are provided.\", \"pros\": \"Their idea of making an adaption to GAN architecture by replacing the generator by a classifier to select P from U and using the discriminator to distinguish whether the selected data is from P or U for PU learning is interesting, and benefits from not relying on the class prior estimation.\", \"question\": \"The authors claim that Eq. (2) cannot be trained in an end-to-end fashion directly, this statement may need some modification since there are some existing works replacing c(x) by a score function or some other continuous function and then this direct adaptation can be trained, for example, see Eq. (5) in \\u201cDiscriminative adversarial networks for positive-unlabeled learning. arXiv:1906.00642, 2019\\u201d. Can any explanation be given on this?\", \"remarks\": \"The clarity of the paper could be improved in multiple places. For example, the data generation processes can be mathematically defined in the problem setting part, now it is quite confusing to me. And more details on experimental protocol may be needed: e.g. what kind of hyperparameter tuning was done?\\n \\nIn general, the paper proposed an interesting GAN-like network architecture to learn from PU data, but some unclear parts need to be improved.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"<Paper summary>\\nThe authors proposed a novel method for positive-unlabeled learning. In the proposed method, adversarial training is adopted to extract positive samples from unlabeled data. In the experiments, the proposed method achieves better performance compared with state-of-the-art methods. \\n\\n<Review summary>\\nAlthough the idea to utilize adversarial training for PU learning is interesting, the proposed method is not sufficiently validated in theory. In addition, the manuscript is hard to follow due to confusing notations and lack of figures. I vote for rejection.\\n\\n<Details>\\n* Strength\\n + The main idea is simple and interesting.\\n + The proposed method performs well in the experiments.\\n\\n* Weakness and concerns\\n - Confusing notations and lack of figures.\\n -- Lack of mathematical definition of C and D.\\n -- The argument of P^p and that of P^u are different (x^p and x^u), which implies that those distributions are defined at different space (but actually same).\\n -- Shared index ``i\\\" for positive and unlabeled data in Eq. (3).\\n -- The notation with ``hat\\\" often imply the empirically estimated (or approximated) value in the field of ML. \\n -- No figures about the proposed method. Specifically, it is hard to understand the relationship between C and D. \\n\\n - Since Eq. (3) looks totally different from Eq. (2), why Eq. (3) is reasonable remains unclear. \\n -- About I: first, P^{pu} cannot be calculated, because it requires unavailable labels of x^u. If you treat unlabeled data as negative, it should not be called ``ground-truth,\\\" and the term I cannot help D correctly recognize positive samples. Second, the positive samples are almost ignored in this term, because the number of positive data should be substantially small in a common setting of PU learning. \\n -- About II: the authors explain the role of this term by min-max game between C and D during optimization, but the most important point here is what will happen when we obtain the optimal C and D after the optimization. What property or behavior do the optimal C and D have? \\n\\n - What do the authors want to claim with Proposition 1? The right-hand side of Eq. (5) cannot be easily calculated due to the density ratio between P^p and P^u. There is no explanation about what f and eps mean. What ``optimal\\\" means is also ambiguous. \\n\\n\\n* Minor concerns that do not have an impact on the score\\n - Although the problem setting is quite different, the idea of this paper is partially similar to the importance weighting technique adopted in some recent domain adaptation methods [R1, R2]. Do you have any comment on that?\\n\\n[R1] ``Reweighted adversarial adaptation network for unsupervised domain adaptation,\\\" CVPR2018 \\n[R2] ``Importance weighted adversarial nets for partial domain adaptation,\\\" CVPR2018\"}"
]
} |
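The discussion above contrasts PAN's KL-based objective (Eq. (3)/(4)), which avoids class-prior estimation, with prior-based alternatives the reviewers allude to. Since Eq. (3) itself is not reproduced in this record, the sketch below instead shows the standard prior-based baseline, the non-negative PU (nnPU) risk of Kiryo et al. (2017), on toy Gaussian data; the prior `pi`, the synthetic data, and all hyperparameters are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, pi = 8, 0.3                           # pi = assumed positive class prior
P = torch.randn(100, d) + 1.5            # toy labeled positives
U = torch.cat([torch.randn(60, d) + 1.5, torch.randn(140, d) - 0.5])  # toy unlabeled mix

f = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-2)

def sigmoid_loss(z, y):
    # Smooth surrogate for the 0-1 loss: l(z, y) = sigmoid(-y * z).
    return torch.sigmoid(-y * z).mean()

for step in range(300):
    risk_p_pos = sigmoid_loss(f(P), +1.0)          # E_P[l(f(x), +1)]
    risk_p_neg = sigmoid_loss(f(P), -1.0)          # E_P[l(f(x), -1)]
    risk_u_neg = sigmoid_loss(f(U), -1.0)          # E_U[l(f(x), -1)]
    neg_risk = risk_u_neg - pi * risk_p_neg        # unbiased estimate of the negative-class risk
    # Simplified non-negative correction (the original nnPU uses a slightly
    # different update when the estimate goes negative).
    risk = pi * risk_p_pos + torch.clamp(neg_risk, min=0.0)
    opt.zero_grad()
    risk.backward()
    opt.step()
```

The contrast highlights the selling point noted in Review #1: nnPU needs the class prior pi (or an estimate of it), whereas PAN replaces it with the adversarial interplay between C(.) and D(.) described in the rebuttal.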
rygUoeHKvB | Deep exploration by novelty-pursuit with maximum state entropy | [
"Zi-Niu Li",
"Xiong-Hui Chen",
"Yang Yu"
] | Efficient exploration is essential to reinforcement learning in huge state spaces. Recent approaches to address this issue include the intrinsically motivated goal exploration process (IMGEP) and maximum state entropy exploration (MSEE). In this paper, we disclose that goal-conditioned exploration behaviors in IMGEP can also maximize the state entropy, which bridges IMGEP and MSEE. From this connection, we propose a maximum entropy criterion for goal selection in goal-conditioned exploration, which results in the new exploration method novelty-pursuit. Novelty-pursuit performs the exploration in two stages: first, it selects a goal for the goal-conditioned exploration policy to reach the boundary of the explored region; then, it takes random actions to explore the non-explored region. We demonstrate the effectiveness of the proposed method in environments ranging from simple mazes and Mujoco tasks to the long-horizon video game SuperMarioBros. Experiment results show that the proposed method outperforms state-of-the-art approaches that use curiosity-driven exploration. | [
"Exploration",
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=rygUoeHKvB | https://openreview.net/forum?id=rygUoeHKvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7Ptr9lukkj",
"BylzmJ2For",
"BJlqkTsFsS",
"r1l1D2jtjH",
"BJezCosFsH",
"BJxjEFuqqr",
"S1xkntLaFB",
"HygSxD3hYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750767,
1573662490488,
1573661921616,
1573661783015,
1573661642073,
1572665651290,
1571805607147,
1571763949095
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2510/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2510/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2510/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2510/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2510/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2510/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2510/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"There is insufficient support to recommend accepting this paper. The reviewers unanimously recommended rejection, and did not change their recommendation after the author response period. The technical depth of the paper was criticized, as was the experimental evaluation. The review comments should help the authors strenghen this work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper Revision\", \"comment\": \"We reivse our writing styles and languages. The updated the paper has the following main changes:\\n\\n(1) The connection between IMGEP and MSEE is made clear. We revise the languages in Section 3, simplify notations and update the proof in the Appendix.\\n\\n(2) We add more details about the exploitation policy in Section 4.3 to show the difference between ours and the Go-Explore's.\\n\\n(3) We update the state entropy results reported in Section 5.1 with 5 seeds. We also add Figure 9 in Appendix A.3 to demonstrate the effectiveness of approximate exploration boundary via prediction errors of RND. \\n\\n(4) We add trajectories visualization of SuperMarioBros-1-1 and SuperMarioBros-1-2 in Figure 10 in Appendix A.3. In particular, the result of SuperMarioBros-1-1 is used to illustrate that insufficient exploration leads to the local optimum.\"}",
"{\"title\": \"Replay to Review 2\", \"comment\": \"Thank you for your helpful review.\\nThanks for your thoughtful suggestions about the improvement of theoretical analysis. We do notice that our analysis ignores the influence of trajectories toward the goals. But we think that, in practice, the fluctuations of entropy introduced by a perfect goal-conditioned policy are less important. The increased part of exploration around the exploration boundary matters. We verify this conjecture in Section 5.1. But we will attempt to improve our theoretical analysis to consider the goal-conditioned trajectories in the future works. \\n\\nExploration behaviors of our method are similar to the Go-Explore\\u2019s. But we present practical methods. What\\u2019 more, the trade-off of exploration and exploitation is different from Go-Explore\\u2019s, which we discuss in Section 4.3. Importantly, we attempt to answer the question: why such defined exploration is efficient.\", \"vanilla_policy\": \"DDPG (with Gaussian action noise) on Fetch Reach and ACER (with policy entropy regularization) for others.\\n\\nWe have revised our writing styles according to your suggestions. We appreciate it if you can reconsider after reading the revised paper and the above responses.\"}",
"{\"title\": \"Replay to Review 1\", \"comment\": \"Thank you for your detail suggestions.\\n\\nQ1\\u3001Q2\\u3001Q3\\u3001Q6b\\u3001Q7\\u3001Q10: Writing styles \\n\\nThanks for your suggestion. We have revised our writing styles according to your guidance.\", \"q4\": \"Definition and notation\\n\\n$\\\\gamma$ = 0 is excluded since it beyond a standard reinforcement learning (i.e., the current decision considers future rewards). The origin definition of discounted state distribution is singular for $\\\\gamma$ = 1. But we remove this definition since we don\\u2019t use it later.\\n\\nQ5\\u3001Q6a: Figure 2\\n\\nIt is not a specific experimental design. It is just an illustration to explain the unexpected results if we try to maximize the empirical state distribution.\", \"q8\": \"Explanation for the statement\\n\\nWe find the original statement may be incorrect and misunderstanding, thus we rewrite it. We want to illustrate that the chance of discovering new stats is high when performing random actions around states with the least visitation counts rather than other states.\\n\\nQ9\\u3001Q10: Proof of Theorem 2\\n\\n$\\\\max_{e_t} H_{t+1}$ is the optimization problem of selecting which goals to visit can maximize the entropy. The choices of goals are represented with e_t (i.e., $e_t(i) = 1$ suggest visiting state i).\\n\\nWe have revised the proof of Theorem 2 to make it clear. We hope the modified version helps you.\", \"q11\": \"Motivation and method\\n\\nIn fact, we motivate the maximum state entropy exploration helps to find the (near-) optimal policy in reinforcement learning, and we present novelty-pursuit to maximize the state entropy. We assume a perfect goal-conditioned policy and visitation counts oracle in Section 3, but present the practical method in Section 4, verify our method in Section 5.1.\", \"q12\": \"Novelty of the proposed method\\n\\nWe study the problem of efficient exploration in reinforcement learning. Clearly, current reinforcement learning suffers from suboptimal due to inefficient exploration for environments with huge state space and long horizon. We disclose that goal-conditioned exploration behaviors can also maximize the state entropy, and demonstrate the exploration efficiency. We present the practical methods of such defined goal-conditioned exploration. We appreciate it if you can rethink our work after reading the revised paper and the above responses.\", \"q13\": \"The novelty of state-action pair matters\\n\\nWe currently only consider the case the state matters. It is known that $d_\\\\pi(s, a) = d_\\\\pi(s) * \\\\pi(a|s)$, where $d_\\\\pi(s, a)$ is the state-action distribution and $d_\\\\pi(s)$ is the state distribution. Thus, policy entropy may be helpful in the case of the novelty state-action pair matters. Actually, our method performs random actions around the exploration boundary, and it may be applicable to that case.\"}",
"{\"title\": \"Replay to Review 4\", \"comment\": \"Thank you for your helpful review.\", \"q1\": \"The connection between IMGEP and MSEE\\n\\nThe connection between IMGEP and MSEE is based on goal-conditioned behaviors. We select states with the least visitation counts as goals to maximize state entropy. We consider a perfect goal-conditioned policy and accurate visitation counts in Section 3, thus the practical method seems not to maximize the state entropy shown in Table 1. Note that, with planning-oracles (and a visitation oracle), the gap between our method and the maximum state entropy is only 0.124 (0.039).\", \"q2a\": \"The method of approximate visitation counts and exploration boundary\\n\\nFirst, the origin paper of RND (Burda et a;., 2019) validates that prediction errors given by RND are strongly correlated to the \\u201ccounts\\u201d of training samples on the MNIST dataset. The more samples of a certain class, the lower the prediction errors of that class. Considering we only need the order of states in terms of visitation counts rather the visitation counts itself, RND can meet our needs (See Figure 9 in Appendix 3 for the validation). As for exploration efficiency, we find that the performance improves a little with a visitation counts oracle (See Table 1).\", \"q2b\": \"The performance of the bonus method in Table 1\\n\\nThe performance of the bonus method does perform well than a random policy. We find that the original result based on 1 seed is unreliable and update the results of Table 1 with 5 seeds. We attribute the limited advantage of the bonus method to delayed and indirect feedback signals of the exploration bonus.\", \"q3\": \"Compared Baselines\", \"vanilla_policy\": \"DDPG (with Gaussian action noise) on Fetch Reach and ACER (with policy entropy regularization) for others.\", \"bonus_policy\": \"Off-policy version of RND (origin paper uses the on-policy version) based on vanilla policy. Note that we focus on exploration rather than policy optimization.\", \"other_baselines_and_tasks\": \"we think the SuperMarioBros is a better benchmark for deep exploration study and the bonus method is a strong baseline. we are conducting experiments with other baselines in atari.\", \"q4\": \"Reward Shaping\\nFirst, we do keep the goal unchanged during an episode. For fixed g, the d(ag_{T+1}, g) = 0 and d(ag_1, g_0) is constant. Thus, the optimal policy induced by reward shaping is invariant to the optimal policy induced by Eq. (2). We hope the revised verification in Appendix 2 helps you.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes novelty-pursuit for exploration in large state space. In theory, novelty-pursuit is motivated by connecting intrinsically motivated goal exploration process (IMGEP) and the maximum state entropy exploration (MSEE), showing that exploring least visited state can increase state distribution entropy most. In practice, novelty-pursuit works in two stages: First, it selects a goal (with largest value prediction error) to train a goal reaching policy to reach the boundary of explored and unexplored states. Second, after reaching goal states, it uses a randomly policy to explore, hopefully can get to unexplored states. Experiments on Empty Room show that the novelty-pursuit with perfect goal reaching policy and visit count information can maximize state distribution entropy. Experiments on Empty Room, Four Rooms, FetchReach and SuperMarioBros show that the proposed method can achieve better performance than vanilla (policy gradient?) and bonus (exploration bonus using Random Network Distillation).\\n\\n1. The authors claim that the proposed method connects IMGEP and MSEE. However, the theory actually shows that a connection of visit count and MSEE (Thm. 2, choosing least visited state increases the state distribution entropy most.) Table 1 of Empty Room experiments shows the same, with visit count oracle the state entropy is nearly maximized. With goal exploration (entropy 5.35 and 5.47 in Table 1), the state entropy is not \\\"maximized\\\". I consider the theory part and Table 1 more a connection between visit count (including zero visit count, and least visited count) and MSEE, rather than IMGEP and MSEE.\\n\\n2. The argument of first choose non-visited state, then choose least visited state (Fig. 1) makes sense. However, the experiment design is just one way of approximately achieving this. I did not see why doing this approximation is good from both theoretical and empirical perspectives.\\n\\n2a) Random Network Distillation (RND) prediction error is used to select goals. After reaching these goals, it is claimed that the boundary of explored and unexplored states has been reached. However, RND just uses visit count as a high-level motivation, and there is no justification that high RND prediction error corresponds to low visit count. \\n\\nIn Table 1, it is surprising even in this simple environment, the entropy still looks not good with approximation. Maybe use larger networks. And why does bonus have the same entropy as random (does not make sense to me since RND should be a much stronger baseline than random policy)?\\n\\n2b) There exist other methods to approximate this boundary of visited/non-visited states (like pseudo-count as mentioned). Comparisons with other choices are needed (on simple tasks if others cannot be scaled up to SuperMarioBros) to claim that this approximation is a good choice.\\n\\n3. The experiments are lack of comparison with other exploration methods. There are only comparisons with vanilla (is it policy gradient?) and bonus (I suppose it is exactly the same method as in RND paper?), which is not enough to show the proposed method is on a good level. Also, experiments on more tasks (such as Atari) are needed to evaluate the performance of the purposed method.\\n\\n4. 
The reward shaping r(a g_t, g_t) in Eq. (2) is for a changing g_t. In Eq. (7), it seems to show cancelation of fixed g. I did not see why cancelation of fixed g in Eq. (7) can lead to the conclusion that Eq. (2) does not change optimal policies.\\n\\nOverall, I found this paper: 1) main idea (Fig. 1) makes sense; 2) the theoretical contribution is weak (the connection between visit count and entropy is not difficult to see). It does not connect IMGEP and MSEE, but connects visit count and entropy; 3) The experiments choose one way to approximately reaching boundary of visited and non-visited states, which is lack of comparison with other choices; 4) The experiments look promising, especially on SuperMarioBros, but more experiments on other tasks and comparisons with other exploration methods are needed to evaluate the proposed method thoroughly.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors study the problem of exploration in deep reinforcement learning. The authors borrow the ideas developed in the intrinsically motivated goal exploration process, and entropy maximization and propose a method on Noverly-Pursuit. The authors then empirically study the proposed method.\\n\\n1) The authors investigate an important problem and I would appreciate the authors if they could motivate its importance more in their work.\\n\\n2) In the second paragraph, the author mentioned that goal-conditioned exploration behaviours can maximize entropy. Later in the same paragraph, they claim that \\\"The exploration policy leads\\nto maximize the state entropy on the whole state distribution considering tabular MDP\\\". I guess the authors' point was this approach might increase the entropy rather than maximizing it. If the claim is, in fact, maximization, a reference would be helpful. If the authors prove it in this paper, implying it in this paragraph is also helpful. \\n\\n3) In the background section, the authors did not specify whether they provide background on tabular MDP or beyond that. By calling the transition kernel the state transition probabilities, it seems they introduced a tabular MDP, but a more concrete introduction and preliminaries would help to follow the paper.\\n\\n3) The first paragraph of section 2, the author mentioned that\\n\\\"The target of reinforcement learning is to maximize the expected discounted return\\\". I hope the authors mean\\\"one of the targets in the study of reinforcement learning ...\\\"\\n\\n4) in the same paragraph, why the \\\\gamma = 0 is excluded? is there any specific reason? Also, when the authors include the gamma = 1, do they make sure the maximization in line 9 of the same paragraph is well defined in regular cases?\\n\\n5) Regarding the experiment in figure 2. It would be useful to the readers if the authors provide more details about this experimental study.\\n\\n6) In the few paragraphs below Figure 2, it would be nicer if the authors provide a clear definition of each term. In order to follow the paper, I relied on my imperfect inference to infer the definitions. Also, I find it probably useful to distinguish the random variables and their realizations in the notation.\\n\\n6) Regarding the theorem1. I would recommend making the statement more transparent and more clear. I also recommend to even not calling it a theorem since it, as mentioned, is as clear as the definitions. Also, arent x_t(i)s non-negative by definition? \\n\\n7) In this sentence:\\n\\\"However, we don\\u2019t know what non-visited states are and\\nwhere non-visited states locate in practice since we can\\u2019t access ...\\\"\\nI think the authors' point was that \\\"we might not have access to it in general\\\".\\n\\n8) It would be helpful to me to evaluate this paper if the authors explain more how the following statements go through:\\n\\\"To deal with this problem, we assume that state density over the whole state space is continuous, thus visited states and non-visited states are close\\\". I am not sure how \\\"thus visited states and non-visited states are close\\\" follows from continuity of density and what is the notion of closeness. 
\\n\\n9-1) Theorem 2: while H seems to be a function of d_\\\\pi1:t(s), I am not sure how to interpret the argmax_{e_t}. A bit of help from the authors would be appreciated. \\n\\n9-2) Theorem 2: In the proof, I was not able to justify to my self the transition form g(xi; xj) = Hxi [d1:t+1] \\udbc0\\udc00 Hxj [d1:t+1] to the second line. Also, what is the definition of H_x_i?\\n\\n10) In equation 2, the authors use a notation d, I guess as distance. It would not only be helpful to define it but also would be helpful to use a different notation for distance and the d used on page 3, presumably for \\\"empirical state distribution\\\". \\n\\n11) At the beginning of the paper, the authors motivated the maximum entropy but the final algorithm is based on other approaches. \\n\\n12) Despite the fact that I could not find this paper ready enough and well-posed, I also have a concern about the novelty of the approach. I think it is not novel enough for publication at ICLR, but I am open to reading other reviewers', as well as commenters', and more especially the authors' rebuttal response.\\n\\n13) I also encourage the authors to provide a discussion on the cases where the novelty (whatever that could mean) does not matter, rather the novelty of state-action pair matters.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"*Summary*\\nThe paper addresses the challenge of intrinsically-driven exploration in tasks with sparse or delayed rewards. First, the authors try to bridge the gap between the objectives of intrinsically-motivated goal generation and maximum state entropy exploration. Then, they propose a new exploration method, called novelty-pursuit, that prescribes the following receipt: first, reach the exploration boundary through a goal-conditioned policy, then take random actions to explore novel states. Finally, the authors compare their approach to a curiosity-driven method based on Random Network Distillation in a wide range of experiments: from toy domains to continuous control, to hard-exploration video games.\\n\\nI think that the paper displays some appealing empirical and methodological contributions, but it is not sufficiently theoretically grounded. For this reason, I would vote for rejection. I would advise the authors to rephrase their work as a primarily empirical contribution, in order to emphasize the merits of their method over a lacking theoretical analysis.\\n\\n*Detailed Comments*\\n\\n*Major Concern*\\nMy major concern is about the claim that goal-conditioned exploration towards the least visited state would, at the same time, maximize the entropy of the state distribution. The derivations seem technically sound, but I think that the underlying assumption is unreasonable in this context: it neglects the influence of the trajectory to reach the target state, which is rather crucial in reinforcement learning instead. It is quite easy to design a counter-example in which the (optimal) goal-conditioned policy towards the least visited state actually decreases the overall entropy of the state distribution. One could avoid the issue by assuming to have access to a generative model over the states, but that would fairly limit the applicability of the approach.\\n\\n*Other Concerns and Typos*\\n- I think that the authors minimize the relation between their methodology and the one proposed in (Ecoffet et al., 2019). It is true that the applicability of Go-Explore is quite limited. However, the idea behind their approach, which is based on first reaching an already visited state and then exploring randomly from that state, is not all dissimilar from the two-phase exploration scheme of novelty-pursuit.\\n- It is not completely clear to me how the disentanglement between exploration and exploitation works in the novelty-pursuit algorithm.\\n- What is the vanilla policy considered in the experiments?\\n- Section 4.2, after equation 3: rewarding shaping -> reward shaping\\n- section 5.4: we consider the SuperMarioBros environments, which is very hard ecc. -> we consider the SuperMarioBros environments, in which it is very hard ecc.\"}"
]
} |
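The central theoretical claim debated above (Theorem 2 in the reviews and rebuttals) is that, given a visitation-count oracle, choosing the least-visited state as the goal yields the largest increase in state-distribution entropy. A minimal numpy sketch of that connection on a made-up six-state count vector follows; the counts are illustrative, and in the paper's practical method the count oracle is approximated by RND prediction errors.

```python
import numpy as np

def entropy(counts):
    # Shannon entropy of the empirical state distribution induced by the counts.
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

counts = np.array([50, 30, 12, 5, 2, 1])   # toy visitation counts over 6 states

# Hypothetical entropy after one extra visit to each candidate goal state.
gains = []
for i in range(len(counts)):
    c = counts.copy()
    c[i] += 1
    gains.append(entropy(c))

best_goal = int(np.argmax(gains))
print(best_goal, int(np.argmin(counts)))   # both pick the least-visited state (index 5)
```

The two selection rules coincide, which matches Reviewer #4's reading that the theory ties visit counts, rather than IMGEP as such, to maximum state entropy.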
SJxUjlBtwB | Reconstructing continuous distributions of 3D protein structure from cryo-EM images | [
"Ellen D. Zhong",
"Tristan Bepler",
"Joseph H. Davis",
"Bonnie Berger"
] | Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a macromolecule from $10^{4-7}$ noisy and randomly oriented 2D projection images. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab-initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images. | [
"generative models",
"proteins",
"3D reconstruction",
"cryo-EM"
] | Accept (Spotlight) | https://openreview.net/pdf?id=SJxUjlBtwB | https://openreview.net/forum?id=SJxUjlBtwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"V_emK_MVx",
"HT4uxwRHzE",
"Mz8vg_BjQn",
"B1ewkdOijr",
"HkxVYD_jiH",
"HJgXXPdsjr",
"rJetvQrJcB",
"HyxRW6rTFr",
"S1lOMkH6FH"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1582260942957,
1582103879023,
1576798750739,
1573779422750,
1573779323525,
1573779226842,
1571930976865,
1571802373917,
1571798800165
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2509/Authors"
],
[
"~Jaejun_Yoo1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2509/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2509/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2509/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2509/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Software release\", \"comment\": \"Thanks for inquiring. We are working on a software package for non-expert users, and hope to release as soon as possible. Feel free to contact us directly if you\\u2019re interested in early access to the codebase.\"}",
"{\"title\": \"About the source code release\", \"comment\": \"Could you give us an approximate date of the code release?\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper introduces a generative approach to reconstruct 3D images for cryo-electron microscopy (cryo-EM).\\n\\nAll reviewers really liked the paper, appreciate the challenging problem tackled and the proposed solution.\\n\\nAcceptance is therefore recommended.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your comments and questions. Classical cryo-EM reconstruction algorithms (e.g. cryoSPARC) are described in Section 2.2 at a high level and we refer the reader to its reference (Punjani et al. 2017) for more details on their implementation.\\n\\nTo clarify the relationship between the cryoSPARC and cryoDRGN heterogeneous reconstruction in Figure 4, CryoSPARC imposes a discrete model for heterogeneity, specifically a mixture model of K volumes. The cryoSPARC results in Figure 4 are the volumes and the distribution of images over the 3 clusters from their unsupervised reconstruction. In contrast, the continuous latent variable from cryoDRGN unsupervised reconstruction is able to reconstruct the continuous motion of the ground truth volume. We have clarified the text to reduce any confusion and added training times for these methods to the appendix. Thank you for the recommendations!\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"1. Thank you for your comments and thank you in particular for pointing us to a reference we missed, which we have added to the manuscript.\\n\\nUllrich et al. introduce some of the same foundational building blocks for applying differentiable models to the cryoEM reconstruction task. In particular, they propose a differentiable voxel-based representation for the volume and introduce a variational inference algorithm for learning the volume through gradient-based optimization.\\n\\nDue to their voxel-based representation, they introduce a method to differentiate through the 2D projection operator. In contrast, we parametrically learn a continuous function for volume via a coordinate-based MLP, which seamlessly allows differentiation through the slicing and rotation operators without having to deal with discretization. \\n\\nTheir method is able to learn a homogeneous volume with given poses, whereas we perform fully unsupervised reconstruction of heterogeneous volumes. They show empirical experiments that highlight many of the challenges for variational inference of these models. In particular, inference of the unknown pose is challenging with gradient-based optimization and contains many local minima (their Fig 6), which we address with a branch and bound algorithm.\\n\\nWe report a Fourier Shell Correlation (FSC) metric, which is a commonly used resolution metric in the cryoEM field. Voxel-wise MSE is not typically used in the cryoEM literature as it is sensitive to background subtraction and data normalization. We have added training times for these methods to the SI.\\n \\n2. The normalization constant in Eq. 3 is the partition function over all possible values of the latent pose and volume. Instead of computing this (intractable) constant, coordinate ascent on the dataset log likelihood is used to refine estimates of pose and volume in traditional algorithms.\\n\\n3. The extent of the 3D space is determined by the dataset\\u2019s image size and resolution. We define a lengthscale such that image coordinates are modeled on a fixed lattice spanning [-0.5, 0.5]^2 with grid resolution determined by the image size. The absolute spatial extent is thus determined by the Angstrom/pixel ratio for each dataset. Similarly, final volumes for a given value of the latent are generated by evaluating a 3D lattice with extent [-0.5,0.5]^3 with grid resolution determined by the dataset image size. We have added the absolute spatial extent to the description of each dataset in the revised manuscript.\\n\\n4. We have included additional architectural details in the revised manuscript, and we will be releasing the source code which will hopefully further clarify the architecture.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thank you for your comments and questions. We have updated the manuscript to clarify these questions.\\n1) The VAE is hypothesized to produce blurry images when the inference/generative models are not sufficiently expressive for the data modeling task, and in particular due to the typical choice of MSE loss (i.e. Gaussian error model), thus blurring sharp edges in complex natural image data [1,2,3]. In the case of cryo-EM, the high noise in the images is typically assumed to be Gaussian and therefore using the MSE loss has a denoising effect. In our experiments, we were able to achieve resolutions up to the ground truth resolution or matching published structures with our architecture and training settings, though we agree with the reviewer that exploring alternative generative models is a promising future direction.\\n\\n[1] https://arxiv.org/abs/1611.02731\\n[2] https://openreview.net/pdf?id=B1ElR4cgg\\n[3] https://arxiv.org/pdf/1702.08658.pdf\\n\\n2) We observed accurate reconstructions as long as the dimension exceeded the dimension of the underlying data manifold and faster training with higher dimensional latent variables. We have added these results to the appendix in the revised manuscript.\\n3) We varied the number of classes for comparison against SOTA discrete multiclass reconstruction and selected 3 classes which had the lowest error for our comparison in Table 2. We have added these results to the appendix in the revised manuscript.\\n4) Our coordinate-based neural network model for volumes provides a general framework for modeling extrinsic orientational changes in a differentiable manner. This work could be applied in other domains of scientific imaging such as reconstruction of tomograms or CT scans.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"- The authors proposed a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. To address intrinsic protein structural heterogeneity, they explicitly model the imaging operation to disentangle the orientation of the molecule by formulating decoder as a function of Cartesian coordinates.\\n\\n- The problem and the approach are well motivated. \\n\\n- This reviewer has the following comments:\\n1) VAE is known to generate blurred images. Thus, based on this approach, the reconstruction image may not be optimal with respect to the resolution which might be critical for cryo-EM reconstruction. What's your opinion?\\n2) What's the relationship between reconstructed performance, heterogeneity of the sample and dimensions of latent space?\\n3) It would be interesting to show any relationship, reconstruction error with respect to the number of discrete multiclass. \\n4) How is the proposed method generalizable?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"~The authors build a new method to recapitulate the 3D structure of a biomolecule from cryo-EM images that allows for flexibility in the reconstructed volume.~\\n\\nI thought this paper is very well written and tackles a difficult project.\", \"there_is_a_previous_work_that_these_authors_should_cite\": \"Ullrich, K., Berg, R.V.D., Brubaker, M., Fleet, D. and Welling, M., 2019. Differentiable probabilistic models of scientific imaging with the Fourier slice theorem. arXiv preprint arXiv:1906.07582.\\n\\nHow does your method compare to this paper? In Ullrich et al., they report \\u201cTime until convergence, MSE [10^-3/voxel], and Resolution [Angstrom]). I think these statistics would be useful to report in your work, as they are more familiar with folks in the cryoEM field.\\n\\nIn Equation 3, how does one calculate Z, the normalization constant?\\n\\nFor the decoder, how large of the 3D space are you generating? What are the units? Are you using voxels to represent atomic density? What is the voxel size? Is it the same as on Page 11?\\n\\nI think more description of the neural network architecture would be useful (more than what is reported on page 12).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors introduce cryoDRGN, a VAE neural network architecture to reconstruct 3D protein structure from 2D cryo-EM images.\\n\\nThe paper offers for a good read and diagrams are informative.\\n\\nBelow are comments for improvement and clarification.\\n\\n> Consider explaining cryoSPARC in detail given that is the state-of-the-art technique and to which all the cryoDGRN results are compared.\\n\\n> In Figure 4 and the related experiment, how are a) the cryoSPARK volumes related to cryoDRGN volumes, b) what do the clusters mean in cryoSPARK and how do they compare with the corresponding outputs of cryoDRGN\\n\\n> What would runtime comparisons be for cryoSPARK and cryoDGRN, for an unsupervised heteregeneous reconstruction?\"}"
]
} |
S1eSoeSYwr | Deep Evidential Uncertainty | [
"Alexander Amini",
"Wilko Schwarting",
"Ava Soleimany",
"Daniela Rus"
] | Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust and efficient measures of uncertainty are crucial. While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions. In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target. We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution. We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output. Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters. We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations.
| [
"Evidential deep learning",
"Uncertainty estimation",
"Epistemic uncertainty"
] | Reject | https://openreview.net/pdf?id=S1eSoeSYwr | https://openreview.net/forum?id=S1eSoeSYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"78WcDK50Bi",
"rke4jm93jB",
"rkxhKY_3sB",
"rkgnfCIhjr",
"BJeSI6NnjB",
"Byxa12f3oH",
"HklTztaioB",
"r1lApDpoiS",
"rklwFlL9sr",
"Bkeg_7hvjB",
"H1x799HvoH",
"HyliHcHDoH",
"r1xdJ8HPir",
"HJlBIefWcB",
"ryxl-cgTYB",
"HklhT8o3KH",
"S1xHbOpZ_r",
"HkxWlmF5wH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"comment"
],
"note_created": [
1576798750709,
1573852059816,
1573845379723,
1573838356341,
1573829964634,
1573821412919,
1573800213391,
1573799877801,
1573703807421,
1573532519566,
1573505674557,
1573505603511,
1573504480343,
1572048973328,
1571781111520,
1571759812252,
1569998845444,
1569522408949
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2508/AnonReviewer1"
],
[
"~Andrey_Malinin1"
],
[
"~Pranav_Poduval1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a method for providing uncertainty for deep learning regressors through assigning a notion of evidence to the predictions. This is done by putting priors on the parameters of the Gaussian outputs of the model and estimating these via an empirical Bayes-like optimization. The reviewers in general found the methodology sensible although incremental in light of Sensoy et al. and Malinin & Gales but found the experiments thorough. A comment on the paper pointed out that the approach was very similar to something presented in the thesis of Malinin (it seems unfair to expect the authors to have been aware of this, but the thesis should be cited and not just the paper which is a different contribution). In discussion, one reviewer raised their score from weak reject to weak accept but the highest scoring reviewer explicitly was not willing to champion the paper and raise their score to accept. Thus the recommendation here is to reject. Taking the reviewer feedback into account, incorporating the proposed changes and adding more careful treatment of related work would make this a much stronger submission to a future conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the new quantitative results. I do not have further questions.\"}",
"{\"title\": \"Highlighting the evidential regularizer\", \"comment\": \"Thank you very much for recognizing this improvement of Section 3! \\n\\nFollowing your suggestion, we have just uploaded another revision which integrates our evidence regularizer as an explicit contribution in Section 1, as well as a new quantitative ablation experiment (Sec 7.3.2) to demonstrate the need to use this loss during training. \\n\\nPlease let us know if you have any remaining comments which you would like to see addressed in a revised version.\"}",
"{\"title\": \"Further comments\", \"comment\": \"Thanks for the additional explanations. Section 3 in the current version is much cleaner. The authors might want to highlight the loss part in the contributions summary in Section 1 and supporting quantitative results showcasing the importance of including this evidence regularizer.\"}",
"{\"title\": \"SOS vs NLL and uncertainty increases\", \"comment\": \"Thank you for your followup questions! \\n\\nFrom a modeling perspective, the difference between the two losses can be seen by observing Eq. 14 (NLL) and 21 (SOS). Note that on Eq. 21 the inner integral over the data, y, has been evaluated to allow for a more direct comparison to NLL. We would like to draw attention to the fact that these two equations follow the same form of: \\n\\n\\\\int_\\\\theta [ X * p(theta|m) ] d\\\\theta\\n\\nwhere X is the only difference between the two equations. In the NLL case, X, this is simply p(y|theta), which naturally is the likelihood function of the data. However, in the SOS case, X interestingly becomes: \\n\\n(y - mu)^2 + sigma^2\\n\\nWhich places an L2 penalty on the data (from the mean), while also trying to drive the aleatoric uncertainty (sigma) to zero. Therefore, the SOS X loss term shown here is optimizing directly on the error of the parameters whereas the NLL X term is trying to maximize the likelihood of the data given the distribution. We hope this explicit term helps bring light to the modelling difference between our two losses, please let us know if there is anything else we can clarify on this front. \\n\\nWe would like to now turn to your second question on our new OOD example experiment. For a general comment reader, the Figure we will be discussing is Fig. 6. You are certainly correct that epistemic uncertainty increases also on the in-distribution samples we show. In fact, this was exactly what we were trying to demonstrate by sorting the images by uncertainty from left-to-right. We recognize that even when in-distribution there are certain samples which are clearly harder than others and can lead to a greater chance of error. The same can be said for our out-of-distribution examples (i.e. the dog is far more out-of-distribution than the rubble ground). We focus on in-distribution test samples in this response as the reviewer\\u2019s question is directly asking about those. Looking at the columns from left-to-right in this figure we can observe that: \\n\\n- column 1: is relatively straightforward, a large part of the image is a well-illuminated ground plane, the corners and edges of nearly all objects are clearly shown\\n- column 2: more challenging than column 1, very poor illumination which can certainly cause an increase in epistemic uncertainty; \\n- column 3: might seem simple at first, however there is a large unknown object placed on the far right edge of the image that is very difficult to gain a depth understanding of because on a small portion is shown. The lamp is also fairly infrequently seen in our dataset causing some added uncertainty; \\n- column 4: presents many small objects scattered on top of the kitchen counter, each of these objects is very hard to accurately see and infer depth from; \\n- column 5: the most challenging in-distribution example that we see. There are many challenges we see with this example, the most obvious being the giant mirror that takes up a significant part of the image. Mirrors are very challenging from both the epistemic and aleatoric point of view (as we show in Fig. 5). Like previous columns there are also many smaller objects that could be attributed to the increase in epistemic uncertainty. 
\\n\\nTo directly answer your question, yes we did observe that the uncertainty of in-distribution test images varied; however, we would characterize the changes as generally smooth with very heavy tails which we do not visualize here (in general there were no large jumps outside of the tails). The jump between the first and second column is potentially entering the lower tail region, but we felt it was important to visualize for the reader at least one example where the uncertainty is lower than the remainder of the in-distribution images and with clear semantic reason (well-lit area, image of standard objects, etc). On the other hand, the change between in-distribution and OOD was not nearly as smooth. There was a clear jump between the most certain OOD example and the most uncertain in-distribution example like the reviewer mentions. The gravel example presents many similarities to our dataset as it contains a main ground plane and depth increases into the horizon; however, we see there is still a large jump between this example and our in-distribution samples.\"}",
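To make the two X terms above concrete, here is a minimal Monte-Carlo sketch (a reader's illustration, not the paper's code) that draws theta = (mu, sigma^2) from a Normal-Inverse-Gamma prior and averages each integrand; the NIG parameters and the target y are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
gam, nu, alpha, beta = 0.0, 1.0, 2.0, 1.0   # illustrative NIG parameters
y = 0.5                                      # an observed target

# Sample theta = (mu, sigma^2) from NIG(gam, nu, alpha, beta):
# sigma^2 ~ InvGamma(alpha, beta), and mu | sigma^2 ~ N(gam, sigma^2 / nu).
sigma2 = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=100_000)
mu = rng.normal(gam, np.sqrt(sigma2 / nu))

# NLL integrand: X = p(y | theta), the Gaussian likelihood of the data.
X_nll = np.exp(-0.5 * (y - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

# SOS integrand: X = (y - mu)^2 + sigma^2, an L2 penalty on the mean plus
# a term pushing the aleatoric variance toward zero.
X_sos = (y - mu) ** 2 + sigma2

print(X_nll.mean())   # Monte-Carlo estimate of the model evidence p(y|m)
print(X_sos.mean())   # expected squared error under the evidential prior
```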
"{\"title\": \"Further comments\", \"comment\": \"Thank you for your additional explanations.\\n\\nI would like to follow up on some of my comments, specifically the other comment 4. I'm still confused about what this means from a *modelling* perspective. Is there any significantly different behavior expected from either of the methods? The question lies more on the interpretation side.\\n\\nI had also a question on the added part on OOD examples. You show that the uncertainty increases as you present OOD examples, but I notice that within the testing set, there is also a great variability in uncertainties (e.g., the rightmost in distribution example has 4.5 times more error compared to the left-most one). Did you observe a sharp threshold in uncertainty increase by presenting OOD examples (e.g. nothing below 10^{-3}) or does it vary smoothly as you present more and more challenging/clearly OOD examples?\"}",
"{\"title\": \"Summary and general response\", \"comment\": \"We would like to thank all reviewers for their very thoughtful feedback on our work. We have uploaded a final revision which, we believe, incorporates all of your suggestions and clarifies the questions presented. In summary, our work presents the following contributions:\\n\\n1. A method for learning evidential distributions by placing priors on our likelihood to model the uncertainty of a regression network (applicable to a wide range of problems in depth estimation, forecasting, or robotic control);\\n2. A novel loss function for inflating uncertainty when mistakes are made during training, thus enforcing an uncertain prior during the learning process; \\n3. Demonstration that our model is scalable to extremely high dimensional output spaces (i.e. images) and robust to detecting out-of-distribution data as well as adversarially perturbed examples; and\\n4. Performance analysis against existing epistemic and aleatoric estimation baselines, demonstrating that our method achieves comparable or better predictive performance while being faster (no sampling) and more memory compact (no ensembles, or weight parameterization). \\n\\n\\nThrough our revisions we would like to summarize a key list of changes which we have incorporated (relevant reviewer prepended for each point): \\n\\n- [R1,R3] Stronger connection from the Prior Network formulation and a description of the challenges that regression problems present over classification, thus motivating our novel loss function (evidence regularizer). \\n- [R2] A careful introduction of the evidential terms which we defined. We have added introductions to the terms \\u201ctotal evidence\\u201d and \\u201cmodel evidence\\u201d and provide references to support their use. \\n- [R3] Analysis of the predicted aleatoric uncertainty in addition to the epistemic uncertainty (Fig. 5 & 8). \\n- [R3] New experiments with greater number of samples of baselines to demonstrate the superior predictive capacity and performance benefits of our approach (Tab. 3, augmenting Tab. 2)\\n- [R1] New experiments on out-of-distribution samples (Fig. 6), demonstrating applicability to detect OOD samples besides those of which have been adversarially perturbed.\\n- [R3] Ablation experiment on the evidential regularizer loss to demonstrate the importance of this contribution (Sec. 7.3.2).\"}",
"{\"title\": \"Clarification followup\", \"comment\": \"Thank you so much for following up with this question! We completely agree with your assessment that our work follows the Prior Network framework naturally and have updated Sec. 3 in our latest revision to take this into account. Please also refer to our latest comment to all reviewers for a summary of these changes.\\n\\nRegarding the unavailability of ground truth likelihood data in regression problems, we hope to address your question here (as well as in the now revised Sec. 3). If you refer to the related work on evidential priors for classification [1, 2] the uncertainty is inflated by minimizing the KL between a modified version of the inferred distribution and an uncertainty prior distribution. However, the point we would like to draw attention to is how the inferred distribution is modified. In both approaches, the authors require the ground truth likelihood of the target to redistribute the density of the posterior before minimizing the KL. This way, evidence is reduced only on the classes which the sample was not part of. In classification, the ground truth labels are samples of the categorical likelihood, so we know the full and bounded set of classes that should lower their evidence. \\n\\n[side note] In [1] this requirement is described on page 6, and the modified distribution parameters are denoted as \\\\tilde{\\\\alpha}_i. In [2], this requirement is described on page 6, Eq. 12 and 13. \\n\\nIn the regression setting, this optimization algorithm would require knowledge of the ground truth likelihood (\\\\mu, \\\\sigma) from which the datapoint, y, was sampled. This is because we cannot reduce evidence everywhere except our single point estimate as this space is infinite and unbounded. One alternative could be to leverage the estimated likelihood (which we can obtain from the NIG) to use instead. However, we found that doing so does not work in practice (validated in both our regression setting as well as the prior works classification setting). \\n\\nTherefore, to overcome this limitation, we believe one of our main contributions was a novel loss function for expressing uncertainty on mistakes that can be applied to the regression setting. We hope that this explanation as well as the added revision to Sec 3 (specifically 3.2.2) helps clarify our the novelty of our optimization method. \\n\\n[1] M. Sensoy, et al. \\\"Evidential deep learning to quantify classification uncertainty.\\\" NeurIPS. 2018.\\n[2] A. Malinin, et al. Predictive uncertainty estimation via prior networks. NeurIPS 2018.\"}",
"{\"title\": \"Further comments\", \"comment\": \"Thanks for the detailed response and the additional quantitative results on aleatoric uncertainty estimation.\\n\\nBut I do not quite follow the part when the authors argued that \\\"In regression problems, there unfortunately is no analog as labels directly represent the target with no likelihood association. Our work contributes a solution to handle such regression problems despite classical evidential optimization techniques not being applicable.\\\" Does this refer to the Gaussian likelihood of the regression target used in the manuscript? Why \\\"no likelihood association\\\"? The proposed work follows the Prior Network framework naturally with a different instantiation in the regression regime. Also, still not quite follow how the variational Bayesian comes into play in Section 3.1 and 3.2 in the revised version.\"}",
"{\"title\": \"Out-of-distribution test samples\", \"comment\": \"Thank you for your positive feedback! We\\u2019ve included a reference to your NeurIPS paper in our latest revision as relevant prior art.\\n\\nTo answer your question regarding OOD data, we present experiments evaluating epistemic uncertainty on unseen in-distribution and OOD data (Fig. 6) as well as on the extreme case of adversarially perturbed inputs (Fig. 7) \\u2014 all of which were not part of the training distribution either.\"}",
"{\"title\": \"Response to AnnonReviewer1\", \"comment\": \"We would like to thank the reviewer for their positive and detailed review of our paper as well as their suggestions on improving the work\\u2019s exposition. We have incorporated and built on many of your comments and suggestions in our latest revision and are currently working to incorporate the remainder in our next revision.\\n\\nSpecifically, we have clarified the definitions of evidence and provided additional experiments on how this evidence is used to evaluate aleatoric in addition to epistemic uncertainties. Regarding the KL uncertainty loss term, we are actively working to strengthen the text and describe how our formulation relates to the original ELBO formulation as they both aim to inflate uncertainty of the posterior. The reviewers suggestions on this are extremely helpful and will be included in the next revision. Our uncertainty inflation loss function also presents a key contribution of this work compared to related works in the classification domain. While in classification, there is a ground truth (and known) likelihood distribution for every sample, in regression the underlying likelihood distribution is not known a priori, thereby motivating our loss function.\", \"other_comments\": \"1. We have added additional clarification in the text. A higher order distribution is one which, when sampled from, yields a distribution over the data. For example, sampling from a Normal Inverse-Gamma distribution yields the parameters (mean, variance) of a Gaussian distribution over the data. Another interpretation stems from conjugate priors in Bayesian probability theory. For instance, the Dirichlet distribution is the conjugate prior of the categorical distribution, which is more in line with the interpretation taken in [1].\\n\\n2. Thank you, we\\u2019ve corrected this. \\n\\n3. The assumption that the posterior can be factorized over the model parameters follows directly from the mean-field approximation [2] used in mean-field variational Bayes which factorizes over the the model\\u2019s latent variables. In the context of evidential distributions, the latent variables are exactly the parameters defining the lower-order distribution (in this case a Gaussian with mean, \\\\mu, and variance, \\\\sigma^2). \\n\\n4. Excellent question. L_NLL approaches the optimization problem from the lens of an empirical Bayes formulation, where the objective is to *maximize* model evidence, p(y|m). On the other hand, L_SOS aims to *minimize* the sum of squared errors between the evidential prior and the data that would be sampled from the associated likelihood function. We have clarified these points in our updated manuscript.\\n\\n5 & 6. Thank you for this constructive comment, we are currently working on incorporating your advice as well as the similar advice of Reviewer 3 to improve the exposition of Sec. 3. \\n\\n7. We appreciate the reviewer for pointing out this confusion. We are actively working on incorporating an improved explanation and a step by step algorithm on computing these ROC curves in our next revision.\\n\\n8. Thank you, we have corrected these typos in the revised version.\\n\\n\\n[1] A. Malinin and M. Gales. Predictive uncertainty estimation via prior networks. NeurIPS 2018.\\n[2] G. Parisi. Statistical field theory. Addison-Wesley, 1988.\"}",
"{\"title\": \"Response to AnonReviewer 3\", \"comment\": \"We are grateful for the very detailed and thorough review of our paper, and thank the reviewer for their constructive feedback and prior work references. We would like to clarify several points on novelty and our contribution. While prior works in the classification domain require a ground truth likelihood function over the data, our work, to the best of our knowledge, represents the first demonstration of how epistemic uncertainty can be learned without this information. Thus, our approach enables application to the wider range of regression problems (e.g., depth estimation, forecasting, robotic control learning, etc) which directly map data to targets without a known likelihood function.\\n\\nSpecifically, while the use of a N.I.G. prior is an expected extension from the Dirichlet prior used in the classification regime, there are several challenges in inflating the model\\u2019s prior epistemic uncertainty that are specific to regression learning problems. Namely: \\n\\n1. To effectively model epistemic uncertainty a regularization loss is needed to minimize divergence to an \\u201cuncertain\\u201d distribution. In classification [1, 2], this is somewhat trivial and is done by minimizing the KL-divergence from the inferred posterior to a uniform Dirichlet. In the regression domain, a uniform prior is not well defined. A simple univariate example to demonstrate this is that the KL-divergence between any inferred Gaussian and a Gaussian with infinitely large variance is always infinite, regardless of the inferred Gaussian. Therefore, simply inflating uncertainty by minimizing a direct KL loss will not achieve the desired results in regression. \\n\\n2. Furthermore both [1, 2] require the inferred distribution to be redistributed and remove the non-misleading evidence. This requires a priori knowledge of the ground truth likelihood function of the data. In classification this is straightforward as the data labels are typically provided as one-hot encodings which directly represents the likelihood function. In regression problems, there unfortunately is no analog as labels directly represent the target with no likelihood association. Our work contributes a solution to handle such regression problems despite classical evidential optimization techniques not being applicable. Furthermore, we acknowledge that inflating the regression uncertainties still presents many open research questions, in addition to the solution presented in this paper.\\n\\nTherefore, the fundamental learning approaches from the classification domain [1, 2], even after adapting to the context of a N.I.G. prior, are not applicable to regression, thus necessitating the approach presented in this work. We are actively working to improve the exposition of these ideas as well as the introduction and motivation of the \\u201cI don\\u2019t know\\u201d loss term in our next revision of Sec. 3. \\n\\nWe would like to especially thank the reviewer for comments on lack of comparisons to aleatoric uncertainty estimation (in addition to epistemic). As a result, we have conducted numerous additional experiments on synthetic datasets (with known aleatoric uncertainty) as well as on depth dataset. We present our results for both of these experiments and compared to other methods (i.e. [3]) in Figures 5 and 8.\", \"other_comments\": \"1. \\\\pi = 3.1415. \\\\Gamma(.) = Gamma function [4]. \\n\\n2. 
We appreciate the reviewer for pointing out this point of confusion in our formulation. We have clarified the choice of p in the text (please refer to pg. 5)\\n\\n3. Thank you for raising this point. The output label in our context was the inverse depth (i.e. scaled disparity) which caused the relative differences in error. It is often preferred [5, 6, 7], from an optimization point of view, to predict disparity as opposed to depth as it is more numerically stable (far objects, such as the sky, have an extremely large depth but disparity of zero). \\n\\n4. We use a higher number of samples in the accuracy experiments presented in Table 1. On the other hand, Table 2 aimed to compare compute efficiency (not accuracy), so to give sampling techniques an added benefit, we report runtime and memory requirements on a smaller number of samples. We have also added experiments (Table 3) with greater number of samples to address the reviewer\\u2019s suggestion. \\n\\n[1] M. Sensoy, et al. \\\"Evidential deep learning to quantify classification uncertainty.\\\" NeurIPS. 2018.\\n[2] A. Malinin, et al. Predictive uncertainty estimation via prior networks. NeurIPS 2018.\\n[3] A. Kendall, et al. \\\"What uncertainties do we need in bayesian deep learning for computer vision?.\\\" NeurIPS. 2017.\\n[4] Gamma Function. Wolfram Alpha. http://mathworld.wolfram.com/GammaFunction.html\\n[5] C. Godard, et al. \\\"Unsupervised monocular depth estimation with left-right consistency.\\\" CVPR. 2017.\\n[6] A. Kendall, et al. \\\"Multi-task learning using uncertainty to weigh losses for scene geometry and semantics.\\\" CVPR. 2018.\\n[7] R. Atienza. \\\"Fast Disparity Estimation using Dense Networks.\\\" ICRA. 2018.\"}",
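The univariate example in point 1 of the response above can be verified directly from the closed-form KL between Gaussians; a quick sketch:

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    # KL( N(m1, s1^2) || N(m2, s2^2) ) in closed form.
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

# As the "uncertain" prior's variance grows, the divergence from ANY fixed
# inferred Gaussian blows up -- the log(s2/s1) term dominates.
for s2 in [1e1, 1e3, 1e6]:
    print(s2, kl_gauss(0.0, 1.0, 0.0, s2))
```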
"{\"title\": \"Response to AnonReviewer 2\", \"comment\": \"We would like to thank the reviewer for their positive feedback and constructive comments. Your comments and suggestions have helped us improve the exposition of our work in the latest revised submission. Additionally, we would like to address the weaknesses noted by the reviewer:\\n\\n1. We clarify that \\u201cmodel evidence\\u201d [1] (also known as marginalized likelihood) is a term from Bayesian inference that describes the distribution of the observed data marginalized over the model parameters. On the other hand, \\u201ctotal evidence\\u201d arises directly from the learned conjugate prior (evidential) distribution [2]. We have clarified both of these points in the text and provided a reference for virtual observations of conjugate prior distributions [2] in support of our total evidence definition. \\n\\n2. We appreciate the reviewer\\u2019s suggestion and are working to address this point in the text in our next revision as it closely relates to similar suggestions made by Reviewers 1 and 3.\\n\\n\\n[1] D. MacKay. \\\"Bayesian model comparison and backprop nets.\\\" Advances in neural information processing systems. 1992.\\n[2] M. Jordan. \\\"The exponential family: Conjugate priors.\\\" (2009).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper investigates the aleatoric uncertainty and epistemic uncertainty in machine learning. The evaluation was performed on benchmark regression tasks. The comparison with other state-of-the-art methods was provided. The evaluation of the robustness against out of distribution and adversarially perturbed test data was performed.\", \"strength\": \"1. Experiments were complete. Analyses were provided with useful information.\\n2. A model with smaller number of parameters was proposed.\\n3. Computation efficiency was improved.\", \"weakness\": \"1. Total evidence and model evidence were defined. The derivation of these evidences should be clarified.\\n2. Theoretical justification for related methods could be improved.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed deep evidential regression, a method for training neural networks to not only estimate the output but also the associated evidence in support of that output. The main idea follows the evidential deep learning work proposed in (Sensoy et al., 2018) extending it from the classification regime to the regression regime, by placing evidential priors over the Gaussian likelihood function and performing the type-II maximum likelihood estimation similar to the empirical Bayes method [1,2]. The authors demonstrated that the both the epistemic and aleatoric uncertainties could be estimated in one forward pass under the proposed framework without resorting to multiple passes and showed favorable uncertainty comparing to existing methods. Robustness against out of distribution and adversarially perturbed data is illustrated as well.\\n\\nOn the technical side, the novelty is incremental. The extension from the classification regime to the regression regime, from the conjugate Dirichet prior to the conjugate Normal-Inverse-Gamma prior, is quite straightforward. Besides, the presentation of the paper could be largely improved. It is not easy to follow the derivation in Section 3. The discussion of concepts and problem definitions look fragmented and incoherent. Even though the presentation largely follows (Sensoy et al., 2018) and uses terms from theory of evidence, the derivation actually is more aligned with the prior network [3] under the Bayesian framework which is missing from the references. It is really confusing that the authors talked about the variational inference when conjugate prior is used, and it is unclear how the variational distributions are used in Section 3.2 or how the \\\"I don't know\\\" loss term relates to the KL-divergence between the variational distribution and the prior in Section 3.3. This term was manually added as additional regularization to \\\"prefer the evidence to shrink to zero for a sample if it cannot be correctly classified\\\" in (Sensoy et al., 2018), and a different regularization was used to encourage distributional uncertainty in [3]. I hope that the authors could spend more efforts clarifying their ideas, especially the derivations in Section 3.2 and 3.3.\\n\\nOn the other hand, there is no referring to the input x in the entire derivation and problem formulation in Section 3. It took me a while to realize that the formulation in (4) actually defines the generation for a particular input, not for all the inputs. That is, the model is trying to model heteroscedastic uncertainty, not the homoscedastic counterpart. It could be better to call out the dependence on the input explicitly. \\n\\nOn the quantitative side, the baseline models considered in Section 4 are mainly concerned with epistemic uncertainty estimation. So it would be good to explicitly discuss which uncertainty estimation was compared with. 
This work estimates both aleatoric and epistemic uncertainties, so a better comparison is to models that estimate both quantities (Kendall & Gal, 2017)[4] which has been shown to give better output estimation comparing to epistemic uncertainty estimation only (Kendall & Gal, 2017).\", \"other_comments\": \"- What is the \\\\pi in equation (8)?\\n- The \\\"I don't know\\\" loss introduced in Section 3.3 used L-p norm. What is the originality of the L-p norm here? In practice, which p value should be used? In the experiments, which p value was used?\\n- The RMSE results of the depth estimation presented in Table 2 are orders of magnitude smaller than those from existing work, for example Table 2(b) in (Kendall & Gal, 2017). Was a different RMSE computation used in this work?\\n- From the caption in Table 2, it seems that only 5 samples were used in MC-dropout, which is considerably smaller than those used in existing work (Kendall & Gal, 2017).\\n\\n[1] D.J.C. MacKay. Hyperparameters: optimize, or integrate out? Maximum Entropy and Bayesian Methods, Springer 1996.\\n[2] B. Efron. Two modeling strategies for empirical Bayes estimation. Statistical Science, 2014.\\n[3] A. Malinin and M. Gales. Predictive uncertainty estimation via prior networks. NeurIPS 2018.\\n[4] Y. Kwon, J.-H. Won, B.J. Kim, and M.C. Paik. Uncertainty quantification using Bayesian neural networks in classification: application to ischemic stroke lesion segmentation. MIDL 2018.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel approach to estimate the confidence of predictions in a regression setting. The approach starts from the standard modelling assuming iid samples from a Gaussian distribution with unknown mean and variances and places evidential priors (relying on the Dempster-Shafer Theory of Evidence [1] /subjective logic [2]) on those quantities to model uncertainty in a deterministic fashion, i.e. without relying on sampling as most previous approaches. This opens the door to online applications with fully integrated uncertainty estimates.\\nThis is a very relevant topic in deep learning, as deep learning methods are increasingly deployed in safety-critical domains, and I think that this works deserves its place at ICLR.\", \"pros\": \"1.\\tNovel approach to regression (a similar work has been published at NeurIPS last year for classification [3]), but the extension of the work to regression is important.\\n2.\\tThe experimental results show consistent improvement in performance over a wide base of benchmarks, scales to large vision problems and behaves robustly against adversarial examples.\\n3.\\tThe presentation of the paper is overall nice, and the Figures are very useful to the general comprehension of the article.\", \"cons\": \"1.\\tThe theory of evidence, which is not widely known in the ML community, is not clearly introduced. \\nI think that the authors should consider adding a section similar to Section 3 of Sensoy et al. [3] should be considered. Currently, the only step explaining the evidential approach that I found was in section 3.1, in a very small paragraph (between \\u201cthe mean of [\\u2026] to \\\\lambda + 2\\\\alpha.\\u201d). I believe that the article would greatly benefit from a more thorough introduction of concepts linked to the theory of evidence.\\n2.\\tThe authors briefly mention that KL is not well defined between some NIG distributions (p.5) and propose a custom evidence regularizer, but there\\u2019s very little insight given on how this connects to/departs from the ELBO approach. \\n\\nOther comments/questions:\\n1.\\t(p.1) I\\u2019m not sure to fully understand what\\u2019s meant by higher-order/lower-order distributions, could you clarify?\\n2.\\t(p.3) In section 3.1, the term in the total evidence \\\\phi_j is not defined.\\n3.\\t(p.3) Could you comment on the implications of assuming that the estimated distribution can be factorized? \\n4.\\t(p.4) Could you comment on the difference that there is between NLL_ML and NLL_SOS from a modelling perspective?\\n5.\\t(p.4) The ELBO loss (6) is unclearly defined, and not connected to the direct context. I would suggest moving this to the section 3.3, where the prior p(\\\\theta) used in eq. (6) is actually defined.\\n6.\\t(p.4) In equation (6), p_m(y|\\\\theta) isn\\u2019t defined, and q(\\\\theta|y) is already parameterized on y if I understand that q(\\\\theta)=p(t\\\\heta|y1,\\u2026,yN). Making the conditioning explicit in equation (6) might make the connection to the ELBO clearer. \\n7.\\t(p.7) I\\u2019m not sure to understand how the calibration of the predictive uncertainty can be tested by the ROC curves if both the uncertainty and estimates error are normalized. 
Could you also define more clearly what you mean by an \\u201cerror at a given pixel\\u201d? \\n8.\\tSpelling & typos:\\n-\\t(p.4) There are several typos in equation (8), where tau should be replaced with 1/\\\\sigma^2. \\n-\\t(p.8) In the last sentence, there is \\u201cntwork\\u201d instead of network.\\n-\\t(p.9) There is a typo in the name of J\\u00f8sang in the references. \\n-\\t(p.10) In equation (13), due to the change of variable, there should be a \\n-(1/\\\\tau^2) added; \\n-\\t(p.10) In equation (14), the \\\\exp(-\\\\lambda*\\\\pi*(\\u2026)) should be replaced with \\\\exp(-\\\\lambda*\\\\tau*(\\u2026)). \\n\\n[1] Bahador Khaleghi, Alaa Khamis, Fakhreddine O Karray, and Saiedeh N Razavi. Multisensor data fusion: A review of the state-of-the-art. Information fusion, 14(1):28\\u201344, 2013. \\n[2] Audun J\\u00f8sang. Subjective Logic: A formalism for reasoning under uncertainty. Springer Publishing Company, Incorporated, 2018. \\n[3] Sensoy, Murat, Lance Kaplan, and Melih Kandemir. \\\"Evidential deep learning to quantify classification uncertainty.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"comment\": \"Hello!\\n\\nTurns out you've submitted something which is incredibly similar to a concept I developed in my PhD Thesis, called Regression Prior Networks. Would be great if you guys cite it :)\", \"link_to_thesis\": \"https://mi.eng.cam.ac.uk/~ mjfg/thesis_am969.pdf (Remove space after ~ )\\n\\nOverall, your work seems quite nice, I like how you visually represented the Normal-inverse-Gamma distribution. Spent a while thinking how to do that. I'm curious how well the proposed approach generalizes to OOD detection without seeing any OOD training data though.\\n\\nCheers,\\nAndrey Malinin\", \"title\": \"Some related work\"}",
"{\"comment\": \"Probably because I don't quite understand Dempster-Shafer theory, I don't get why same work can't be placed under Bayesian Framework( last I heard there were papers claiming Dempster-Shafer theory was not a generalization of Bayesian Theory). On the other hand- Noise Contrastive Priors for Functional Uncertainty- https://arxiv.org/abs/1807.09289, seemed to be able to do Variational Inference on the output space( except theirs is a Gaussian prior ).\\n\\nMost people don't understand Dempster-Shafer theory, it would be of great help to all if you could highlight the key difference between Dempster-Shafer theory and Bayesian Theory, and why it is of importance.\\nBayesian and Frequentist approaches have become de-facto in DL community for quantifying uncertainty, it would be great if you could make an easy to read papers for newbies. I believe it would also help make this paper far more impactful this way.\\n\\nReally liked your work, though. Thanks\", \"title\": \"Beautiful work, would be great if Authors could clarify some doubts\"}"
]
} |
H1gBsgBYwH | Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint | [
"Jimmy Ba",
"Murat Erdogdu",
"Taiji Suzuki",
"Denny Wu",
"Tianzong Zhang"
] | This paper investigates the generalization properties of two-layer neural networks in high-dimensions, i.e. when the number of samples $n$, features $d$, and neurons $h$ tend to infinity at the same rate. Specifically, we derive the exact population risk of the unregularized least squares regression problem with two-layer neural networks when either the first or the second layer is trained using a gradient flow under different initialization setups. When only the second layer coefficients are optimized, we recover the \textit{double descent} phenomenon: a cusp in the population risk appears at $h\approx n$ and further overparameterization decreases the risk. In contrast, when the first layer weights are optimized, we highlight how different scales of initialization lead to different inductive bias, and show that the resulting risk is \textit{independent} of overparameterization. Our theoretical and experimental results suggest that previously studied model setups that provably give rise to \textit{double descent} might not translate to optimizing two-layer neural networks. | [
"Neural Networks",
"Generalization",
"High-dimensional Statistics"
] | Accept (Spotlight) | https://openreview.net/pdf?id=H1gBsgBYwH | https://openreview.net/forum?id=H1gBsgBYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"e6Mdhsu7aF",
"Bklc6FAsoH",
"B1xu2HCssS",
"Hkee-HRoiH",
"rylsXynatr",
"ryxeE1cjYH",
"Bke6usYzKr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750680,
1573804482282,
1573803440215,
1573803255578,
1571827490913,
1571688231934,
1571097461417
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2506/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2506/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2506/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2506/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper focuses on studying the double descent phenomenon in a one layer neural network training in an asymptotic regime where various dimensions go to infinity together with fixed ratios. The authors provide precise asymptotic characterization of the risk and use it to study various phenomena. In particular they characterize the role of various scales of the initialization and their effects. The reviewers all agree that this is an interesting paper with nice contributions. I concur with this assessment. I think this is a solid paper with very precise and concise theory. I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for the comments and suggestions. The technical comments are addressed below:\", \"extending_result_to_other_target_functions\": \"We agree that the problem might be significantly more difficult for different target functions, and would like to make the following remarks:\\n1. Note that in our bias-variance decomposition, only the bias term depends on the target function. In other words, our result on the variance (including Theorem 4) would still be valid for other targets, such as two-layer neural network. One caveat is that for general target function, the output needs to be properly scaled since our current analysis in Section 5 relies on linearizing the network.\\n2. When the target function is a multiple-neuron neural network, deriving the bias term can be challenging. However, we note that under the same setup, the bias may be obtained when the teacher is a slightly more general single-index model, i.e. $y=\\\\psi(\\\\beta^\\\\top x)$ with Lipschitz link function $\\\\psi$, equivalent to a single-neuron network. For instance, the bias under vanishing initialization is the same as that of least squares regression on the input, which can be solved under isotropic prior on $\\\\beta$ via decomposing the activation function similar to Appendix C.5.\", \"parameter_count\": \"To clarify our statement in the discussion section, our current result requires $n,d,h$ to grow at the same rate, and thus $n = O(dh)$ is beyond the regime we consider. This is also true for previous works on double-descent in random feature model [Hastie et al. (2019)][Mei and Montanari (2019)]. When $h \\\\ll n$, it is not clear if the same analysis still applies (for instance approximating the network with a kernel model), and thus the instability of the inverse may not be the complete explanation of double-descent (if it appears). Characterizing the generalization in this regime would be an interesting direction.\", \"training_both_layers\": \"Thank you for the suggestion; we have included training both layers simultaneously as a future direction. We would like to briefly mention that under certain model parameterization and initialization, gradient flow on both layers may reduce to one of the three models we analyzed (see [Williams et al. (2019)]). More generally, our current result may be extended to cases where the dynamics of training both layers can be linearized (for instance initialization in the \\\"kernel regime\\\"), for which the learned model can be written down in closed-form.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you for the comments and suggestions. We agree that characterizing the generalization properties of neural network under different scalings is an important future direction.\", \"we_have_updated_the_manuscript_with_a_few_minor_modifications\": \"1) Figure on the population risk of sigmoid network (first layer optimized) in addition to SoftPlus; 2) additional remarks on the population risk of network in the kernel regime in Section 5.2; 3) corrected typos.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thank you for the comments and suggestions. As you pointed out, our current result in Section 5 does not apply to non-smooth activations -- understanding the generalization of ReLU networks would be interesting future work.\", \"we_have_updated_the_manuscript_with_a_few_minor_modifications\": \"1) Figure on the population risk of sigmoid network (first layer optimized) in addition to SoftPlus; 2) additional remarks on the population risk of network in the kernel regime in Section 5.2; 3) corrected typos.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Overview: This work is an interesting work to understand the generalization capabilities of a two layered neural network in a high dimensional setting (samples, features and neurons tend to infinity). It studies the conditions under which the \\\"double descent phenomenon\\\" may be observed.\", \"summary\": \"The work shows that in two layered neural networks with non-linearity\\n1) the double descent phenomenon of the bias-variance decomposition may be observed when the second layer weights are optimized assuming that the first layer weights are constant.\\n2) the bias-variance decomposition does not exhibit double descent when optimizing only the first layer with both vanishing and non-vanishing initialization of weights.\\n3) For vanishing initalization of weights for the first layer with non-linear activation , the gradient flow solution is asymptotically close to a two layered linear network. It is independent of overparametrization. However, the condition for this is smooth activation and the result does not hold for ReLU activation.\\n4) For non-vanishing initilization of the weights for the first layer with non-linear activation, the gradient flow solution is well approximated by a kernel model. However, the risk is independent of overparametrization.\\n\\nI believe this is an interesting work that needs to be accepted.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides exact bounds on the risk when training a two-layer neural network in an asymptotic regime. Namely, the paper considers training under the square-loss objective, a two-layer neural network with $h$ hidden units on inputs of dimension $d$ and training on $n$ samples. The asymptotic regime is considered by making all of $d$, $h$, $n$ go to $\\\\infty$, in a way that the ratio $d/n$ approaches $\\\\gamma_1$ and the ratio $h/n$ approaches $\\\\gamma_2$.\\n\\nThis paper considers the following scenarios of training described below, where the data is generated from a linear model on Gaussian inputs and with a zero-mean noise. The emphasis of the results is on understanding when a \\\"double descent\\\" type phenomenon occurs (\\\"Double descent\\\" is a recently coined phenomenon in literature where the risk, as a function of the \\\"complexity of the model\\\", initially has a classical U-shape behavior, but eventually decreases again once the complexity of the model exceeds the number of training points.)\\n\\n1. Training only the second layer: The risk is first decomposed into a bias and a variance term. An exact bound on the variance term of the risk is obtained. While the exact nature of the bound is rather complex to parse, the takeaway is that a double descent phenomenon is observed in terms of $\\\\gamma_2$, namely, the risk blows up when $h \\\\approx n$, but decreases as $h$ is increased beyond $n$.\\n\\n2. Training only the first layer: Two different regimes are considered here, depending on the scale of initialization, called \\\"vanishing\\\" and \\\"non-vanishing\\\" initializations. In both regimes, the risk is independent of $\\\\gamma_2$, that is, the risk does not depend on number of hidden units (although the risk bounds are different and there is an additional assumption in the case of non-vanishing initialization to ensure that the initialized network computes the zero function). In other words, a \\\"double descent\\\" phenomenon is not observed in this setting.\", \"recommendation\": \"I recommend \\\"weak acceptance\\\". The paper extends prior works that obtain asymptotic risk bounds on linear models to the setting of two-layer neural networks (where only one layer is trained). However, I am unable to assess the technical novelty of this work as it seems to heavily rely on prior work which in turn use techniques from random matrix theory.\", \"technical_comments\": [\"I felt that while it is valuable to have exact bounds on the risk, the form of the bounds are quite complex and hard to parse (especially in Thm 4, case of training only the second layer). Moreover, these bounds are just in the case where the teacher model is linear and while it is claimed that this could be relaxed to a more general class of functions, the specific bounds might change drastically. So any insights on the nature of these bounds will be valuable, especially with some comments on how these bounds change if the teacher model is itself realized as a 2-layer neural network.\", \"The parameter count of a 2-layer network with $h$ hidden units and input dimension $d$ is $O(dh)$. 
So perhaps it makes sense to study an asymptotic regime where $dh/n$ approaches $\\\\gamma$, instead of both d and h growing linearly in n. While this issue is hinted at in the discussion section, I don't understand the statement \\\"the mechanism that provably gives rise to double descent from previous works Hastie et al. (2019); Belkin et al. (2019) might not translate to optimizing two-layer neural networks.\\\"\", \"Another future direction that could be included in discussions is the setting where both layers are trained simultaneously.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors study the generalization error of two-layer neural nets, where an asymptotic point of view is taken. Their main results can be summarized as follows.\\n1. If only the second layer is optimized, they observe the double-descent phenomenon.\\n2. However, if only the first layer is optimized, the double-descent is not observed.\\nThis shows that recent results for certain linear models (e.g. Song, Montanari 2019) do not directly transfer to neural networks. As the authors point out, however, if a different scaling is used in the asymptotics, double descent might still be observed.\\n\\nI see the following strengths of the paper. \\n-This is a very well-written paper with a clear message.\\n-The result is important and gives new insights into the generalization properties of neural networks.\\n\\nIn my view, this is an interesting contribution, which should be accepted. \\n\\n---------\\n\\nThank you for your response. I will leave the rating unchanged.\"}"
]
} |
r1lEjlHKPH | Better Knowledge Retention through Metric Learning | [
"Ke Li*",
"Shichong Peng*",
"Kailas Vodrahalli*",
"Jitendra Malik"
] | In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories. While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes. This makes deep neural nets ill-suited to continual learning. In this paper, we propose a new model that can both leverage the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced. We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net. | [
"metric learning",
"continual learning",
"catastrophic forgetting"
] | Reject | https://openreview.net/pdf?id=r1lEjlHKPH | https://openreview.net/forum?id=r1lEjlHKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"U4wu2YQF4P",
"rylvJlj2sH",
"rkxA4junjS",
"B1x-F64hsS",
"BkglOaNhsH",
"B1lOLa42jr",
"r1eKEaEnoH",
"HkeBUpcTKr",
"HJlvCi32KS",
"HygFUMZKYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750651,
1573855198891,
1573845814227,
1573830009275,
1573829991890,
1573829968324,
1573829937063,
1571822925158,
1571765198550,
1571521104970
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2505/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2505/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2505/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2505/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2505/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Catastrophic forgetting in neural networks is a real problem, and this paper suggests a mechanism for avoiding this using a k-nearest neighbor mechanism in the final layer. The reason is that the layers below the last layer should not change significantly when very different data is introduced.\\n\\nWhile the idea is interesting none of the reviewers is entirely convinced about the execution and empirical tests, which had partially inconclusive. The reviewers had a number of questions, which were only partially satisfactorily answered. While some of the reviewers had less familiarity with the specific research topic, the seemingly most knowledgeable reviewer does not think the paper is ready for publication.\\n\\nOn balance, I think the paper cannot be accepted in its current state. The idea is interesting, but needs more work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the reply, here is the response:\", \"comment\": \"Q1: \\u201cYou state there is only one sample used per class to serve as anchor, but this is also the sample used for the nearest neighbor search at test time? During test time how many samples are being compared (is it one per class?) \\u201c\", \"a1\": \"Yes, those anchors are included during testing. At test time, we use a subset of the training set (e.g. for ImageNet, we used 50 samples per class with k=15, which were not used during training).\", \"q2\": \"\\u201crelationship to this (admittedly somewhat recent) paper https://arxiv.org/abs/1905.09447 \\u201d\", \"a2\": \"Though both methods adopt metric learning, they have some major differences:\\n\\t1. The referenced paper *trains* on stored training examples from the old tasks when training on a new task, whereas our method does not.\\n\\t2. Our kNN classifier generates the output by finding the most common label of the k nearest training samples, whereas the other method classifies using a softmax over distance in the embedding space. \\n\\t3. Our experiment setting (split MNIST/CIFAR/ImageNet) is more challenging and general compared to the setting in that paper (which they call the Split CIFAR10 \\u201cincremental class\\u201d setting), where the every subsequent task includes all classes that are in all previous tasks.\"}",
"{\"title\": \"A few quick questions\", \"comment\": \"Thanks for your response I will definitely take it into account and update my review later. I just had a few quick questions:\\n\\n-You state there is only one sample used per class to serve as anchor, but this is also the sample used for the nearest neighbor search at test time? During test time how many samples are being compared (is it one per class?) it seems like the seleciton of these samples would be kind of important. \\n\\n-I was also wondering if you have some thoughts on the relationship to this (admittedly somewhat recent) paper https://arxiv.org/abs/1905.09447\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thanks for your review. Below is our response:\", \"q1\": \"Interpretation of results in Table 1\", \"a1\": \"Our method achieves the second highest absolute performance and also the second smallest drop in performance on Set A after seeing Set B. The only method that achieved better performance on Set A after seeing Set B is LwF, which is because its performance on Set B is very poor. This indicates that our method achieves a good balance in learning well on new data and retaining performance on old data. It is also a lot less complex than methods that can achieve reasonable retention performance (DGR, DGR + distillation and RtF) - these methods require training a generative model (a VAE) to produce pseudo-training examples, whereas our method does not require training an external model. This also makes our method more broadly applicable, since it is more difficult to train high-performing generative models on some datasets, e.g.: CIFAR-10 (more on this below).\", \"q2\": \"Additional SGD + dropout baseline\", \"a3\": \"We have added results of other baselines on CIFAR-10 in Table 3. As shown, our method achieves the highest absolute performance on Set A after seeing Set B and also the smallest drop in performance on Set A after seeing Set B. Interestingly, on this more complex dataset, none of the baselines (except for LwF) are able to retain significant amounts of knowledge after training on Set B. As discussed above, methods that rely on generative models (DGR, DGR + distillation and RtF) no longer perform well because training high-performing generative models on CIFAR-10 is more difficult due to the increased complexity of the data.\", \"q3\": \"Other baselines for CIFAR-10\", \"q4\": \"Extension to the RL setting with continuous action space\", \"r4\": \"We plan to explore this in future work and consider replacing the k-nearest neighbour classifier with k-nearest neighbour regression. This requires changing the triplet loss to encourage samples within the neighbourhood that have similar outputs as the ground truth to be moved closer, and samples within the neighbourhood that have dissimilar outputs as the ground truth to be moved farther.\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"Thanks for your review. Below is our response:\", \"q1\": \"Experiments on more than two tasks\", \"a1\": \"We added new results on a five-task CIFAR-10 dataset (where each task is a consecutive pair of classes), along with results using the baselines, which are presented in Table 2. As shown, our method outperforms all baselines.\", \"q2\": \"Anchors and requirements for storage\", \"a2\": \"We only used one anchor per class, so only a minimal number of past data examples need to be stored (we\\u2019ve made this clearer in the manuscript). The anchor for a class can be chosen the first time an example from that class is encountered.\", \"q3\": \"Why normalization helps intuitively\", \"a3\": \"By normalizing vectors, we essentially project them onto a unit sphere. The benefit of this is that the sphere is a closed surface, unlike the original Euclidean space the vectors lie in. In Euclidean space, all points can easily be pushed very far away from each other, or brought very close together - neither of these scenarios help with classifying points correctly. On the other hand, pushing points away from a point on a unit sphere must make them closer to a point on the opposite side of the sphere - this property of the sphere helps us avoid either of the two scenarios above.\", \"q4\": \"Computational cost of the method\", \"a4\": \"Our method ~2 minutes on MNIST and ~12 minutes on CIFAR-10, which are comparable to the runtimes of the baselines.\"}",
"{\"title\": \"Response to Review 1 (Continued)\", \"comment\": \"\", \"q6\": \"\\u201cSimilarly the well known EWC is shown to simply not work at all for the very task it was designed for on the MNIST dataset. LwF and EWC simply not working to any degree seem to me like rather dramatic claims to make without any explanation.\\u201d\", \"a6\": \"EWC was again evaluated under a different setting than the setting considered in our paper. In the EWC paper, all the different datasets (a.k.a. tasks) are assumed to share the same classes (i.e.: the domain learning setting), and so the method only needs to discriminate among these classes. On the other hand, in our paper, the classes in each dataset are disjoint (i.e.: the task-agnostic learning setting), and so the method needs to discriminate among all the different classes across datasets. Moreover, EWC was evaluated on *permuted* MNIST in the original paper, whereas it was evaluated on *split* MNIST in our paper. In permuted MNIST, each task is a different random permutation of the pixels of images in MNIST, and all ten classes are represented in each task. In split MNIST, each task is a different subset of MNIST classes. The latter is more challenging because the method does not have access to all ten classes at any given time. See appendix B of https://arxiv.org/pdf/1805.09733.pdf for an explanation of why a method that works well on permuted MNIST may fail on split MNIST. As shown in prior literature (e.g.: https://arxiv.org/pdf/1904.07734.pdf), EWC achieves 94% average accuracy in the domain learning setting (which is consistent with the results in the original EWC paper), but only achieves 64% average accuracy when the dataset is changed to split MNIST. In the harder setting of class learning (which is still easier than the task-agnostic learning setting that we consider, but shares the same evaluation protocol as our setting), the average accuracy of EWC drops to 20% (this suggests 100% accuracy on the most recent task and close to 0% accuracy on four earlier tasks, which is consistent with the results reported in our paper).\", \"q7\": \"\\u201cIt is not clear if the baseline finetuning is done on only the top weights or the entire network. \\u201d\", \"a7\": \"The baseline finetuning was only performed on the top weights (all the convolutional layers are fixed). Finetuning on the entire network leads to even more forgetting.\", \"q8\": \"\\u201cAnother good baseline to consider is finetuning with cosine distance and only the top weights\\u201d\", \"a8\": \"As mentioned in Sect. 3.3, normalizing the output embedding vectors and minimizing the Euclidean distance (both of which we do already) is equivalent to maximizing cosine similarity. This is because || u - v ||^2 = <u - v, u - v> = ||u||^2 - 2 <u, v> + ||v||^2 = 2 - 2 <u, v>, and so cosine similarity is a monotonic transformation of Euclidean distance.\", \"q9\": \"\\u201cThe author state their method is agnostic to the task boundaries, its a bit unclear what this means in this context.\\u201d\", \"a9\": \"Regularization-based methods like EWC and SI need to estimate how important each parameter is to the previous tasks, which will be used to penalize changes to parameters when training on future tasks. If there weren\\u2019t task boundaries, i.e.: different tasks were interlaced, then it is unclear when this estimation should be done. Similarly, methods like LwF need to generate pseudo-labels on the data from the new task from the model trained on previous tasks. 
If different tasks were interlaced, it is unclear when the pseudo-labels should be generated. This is why methods like EWC, SI and LwF are not applicable to the setting without task boundaries.\\n\\nIn our case, the proposed method doesn\\u2019t need to do anything when one task ends and another one begins, and so the method can be naturally used in the setting without task boundaries.\"}",
"{\"title\": \"Response to Review 1\", \"comment\": \"Thanks for your review. Below is our response:\", \"q1\": \"\\u201cThe authors add a second term to the triplet loss that is essentially making the loss a combination of the triplet and siamese loss. It\\u2019s not really explained anywhere why they do this\\u201d\", \"a1\": \"This is in fact explained in the paper. As mentioned at the bottom of page 4, \\u201cIn practice, we found that training using L_{triplet} results in overlap between clusters for different classes, as shown in Figure 2. To discourage this, we add another term to the loss function to encourage tight clusters\\u201d.\", \"q2\": \"\\u201cIs the paper essentially pointing out this existing method (metric learning + nearest neighbor) is surprisingly effective for forgetting. If this is the case the authors should present it in this way I think.\\u201d\", \"a2\": \"As we stated at the end of the introduction section, \\u201cWe will show\\nthat this simple modification is surprisingly effective at reducing catastrophic forgetting\\u201d, which is already in line with this suggestion. \\n\\nThe simplicity of our method is an important advantage compared to other methods that add specialized regularizers, because (1) a simpler method is more broadly applicable because it can be used in settings when the signals required by the regularizers (e.g.: task identity or boundary - more on this later) are unavailable, (2) and is more flexible because it can be combined with other approaches. \\n\\nThe insight that metric learning can be used effectively for continual learning is novel and did not appear in prior work to our knowledge - this represents a new perspective for continual learning research that catastrophic forgetting may be partly caused by limitations in the model itself rather than problems with the training objective.\", \"q3\": \"\\u201cAlthough triplet loss can often yield reasonably performance on classification problems it tends to not perform as well as cross entropy loss, this is observed in other works as well as this one.\\u201d\", \"a3\": \"While this may be true if the goal is to maximize single-task performance, the point of this paper is to demonstrate that metric learning is quite effective if the goal is to minimize catastrophic forgetting. Future improvements to metric learning techniques could help narrow the gap between triplet loss and cross-entropy loss on single tasks, but are orthogonal to our method.\", \"q4\": \"\\u201cA major question of mine: it is not clear from the method nor experiments what samples are stored after task A for the kNN classifier. Is it all of the data samples from the previous task?\\u201d\", \"a4\": \"We only store one example from each class (to serve as anchors).\", \"q5\": \"\\u201cMNIST experimental comparisons are currently suspect. It is very surprising that LwF does so poorly, do the authors have some explanation for this. LwF is typically a reasonable baseline for these 2 task settings (e.g. https://arxiv.org/pdf/1704.01920.pdf).\\u201d\", \"a5\": \"LwF in the paper the reviewer referenced is evaluated under a different setting than the setting considered in our paper. In the referenced paper, the method is assumed to know which dataset (a.k.a. task) each test example belongs to (i.e.: the task learning setting). As a result, it only needs to discriminate among the classes within that dataset. 
In our paper, the method is not assumed to know which dataset each test example belongs to (i.e.: the task-agnostic learning setting), and so is required to discriminate among the classes across all datasets. As shown in various prior papers (e.g.: https://arxiv.org/pdf/1810.12488.pdf and https://arxiv.org/pdf/1904.07734.pdf), LwF only achieves an average accuracy in the 20% range under the evaluation setting we consider, whereas it achieves near-perfect accuracy in the easier evaluation setting (i.e.: the task learning setting) considered in the original LwF paper.\\n\\n(continued below...)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a possible way to mitigate catastrophic forgetting by using a k-nearest neighbor (kNN) classifier as the last layer of a neural network as opposed to a SoftMax classifier. I think this an interesting and possibly novel use of a kNN layer (I haven't seen similar uses although I'm not that familiar with the specific research area). At the same time it's not presenting a ground breaking new algorithm or anything like that.\\n\\nOverall the paper is fairly well written and not too hard to follow. I would say overall results in Table 1 are positive although the authors' approach has the lowest performance after just training on set A if that initial accuracy is important, and also doesn't have quite as high of an accuracy on test B compared to most of the other baselines. Additionally, if you add the accuracy on both set A and set B after training on set B the sum is slightly higher for Rtf. If you look at the minimum accuracy between set A and set B after training on set B, however, the authors' method has the highest value which might be what someone is looking to maximize. \\n\\nOne weakness of this is paper is that I think there are other baselines that should be compared against in Table 1 such as something as basic as SGD with dropout (some of the baselines that are compared against in Table 1 were compared against SGD with dropout in their citations). There are a number of additional approaches outlined in https://www.cs.uic.edu/~liub/lifelong-learning/continual-learning.pdf. Also maybe even something with self attention such as Serra at al. https://arxiv.org/pdf/1801.01423.pdf.\\n\\nAnother potential issue I have with this paper is that it only reports results for the authors' method and the vanilla baseline for more complex CIFAR-10 and ImageNet data sets in Table 2. Assuming there aren't restrictive assumptions for some of the methods that prevent them from being run on the other data sets (at least SI was previously evaluated on CIFAR-10), I would like to see how other baselines perform on these more complex datasets too.\\n\\nThe lack of some more baselines such as SGD with dropout, and not reporting the performance of the same baselines from Table 1 in Table 2, cause me to be very borderline on this paper. I do appreciate the sensitivity analysis and ablation study provided.\\n\\nAs alluded to in future work I'm curious how the authors' approach might be applied to reinforcement learning, and if there could be a way to deal with continuous action spaces in RL.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper applies metric learning to reduce catastrophic forgetting on neural networks. By improving the expressiveness of the final layer, the authors claim that lower layers do not change weights as much, leading to better results in continual learning. They provide large-scale experiments on different datasets.\\n\\nI like the idea that the authors propose and the intuition for why it works, and the paper is well-written. However, I have some concerns and questions. My main concern is that experiments are only performed in the two-task setting, which is highly restrictive.\\n\\nThe authors claim that they tackle the general 'continuous task-agnostic learning' setting. However, they only test on the two-task setting. There are various problems with considering only a two-task setting (see for example Farquhar and Gal, \\\"Towards Robust Evaluations of Continual Learning\\\"). It is too easy to optimise parameters and methods to work in the two-task setting that will not generalise to more than two tasks, which the authors seem to claim. I would need to see experiments on more than two tasks. Aside from this, the experiments seem detailed, with a reasonable baseline, large-scale experiments (on ImageNet), and with an ablation study. \\n\\nIt seems to me like the anchors need to be chosen before training. This means that this method requires memory / storage of past data examples. It is usually fine to do store a small subset of examples in continual learning, but should be made explicit, because it may not always be possible (eg if there are data privacy laws). \\n\\nI do not understand the reason why the output embeddings need to be normalised (Section 3.3)? I can see from Table 4 that it improves results, but do not see any intuition.\\n\\nI would also like to see the computational cost of this method, perhaps as a run-time compared to the baseline. There are many hyperparameters to tune on the validation set which may slow the method down. The sensitivity analysis did not consider changing 'd' or 'M', which seem like crucial hyperparameters to me.\\n\\n------------\", \"edit\": \"I will keep my score after the the discussion with authors. Although the paper has improved in my opinion, I still recommend Weak Reject. I very much appreciated the 5-task CIFAR-10 results. However, there are simple baselines in this setting that I believe need to be explored and reported. Namely, baselines but with samples, eg EWC+samples, akin to the RWalk paper that AnonReviewer1 mentions (https://arxiv.org/pdf/1801.10112.pdf). This is because the proposed method also uses samples. Going from the RWalk paper, this improves results for the baselines considerably, but this may depend on number of samples etc. I understand there was not much time during the rebuttal period to include this. I hope that the authors will consider doing so in the future.\\n\\nThe discussion/explanation regarding 'task-agnostic' (train and test time) and also regarding how the anchors are chosen needs to be made clearer.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers the use of a metric learning approach in a continual/lifelong classification settings. Experiments show in the case of two tasks forgetting can be minimized by using the approach.\\n\\nMethods\\nThe proposed method appears to be a standard triplet loss. The authors add a second term to the triplet loss that is essentially making the loss a combination of the triplet and siamese loss. It\\u2019s not really explained anywhere why they do this and whether its essential to the performance. \\n\\nIs there anything specific to continual learning done or is the paper essentially pointing out this existing method (metric learning + nearest neighbor) is surprisingly effective for forgetting. If this is the case the authors should present it in this way I think. \\n\\nAlthough triplet loss can often yield reasonably performance on classification problems it tends to not perform as well as cross entropy loss, this is observed in other works as well as this one.\", \"a_major_question_of_mine\": \"it is not clear from the method nor experiments what samples are stored after task A for the kNN classifier. Is it all of the data samples from the previous task?\\n\\nExperiments\\nThe experimental results consider a custom continual learning setup where there is two sets of categories. Overall the experiments seem lacking at the moment in rigorous comparisons. \\n\\nMNIST experimental comparisons are currently suspect. It is very surprising that LwF does so poorly, do the authors have some explanation for this. LwF is typically a reasonable baseline for these 2 task settings (e.g. https://arxiv.org/pdf/1704.01920.pdf). Similarly the well known EWC is shown to simply not work at all for the very task it was designed for on the MNIST dataset. LwF and EWC simply not working to any degree seem to me like rather dramatic claims to make without any explanation. \\nCryptically the fine-tuning baseline described in 4.2 is not shown here for MNIST? This seems a major oversight\\n\\nCIFAR10/Imagenet Experiments\\nIt is not clear if the baseline finetuning is done on only the top weights or the entire network. Both of these baselines should be considered. Another good baseline to consider is finetuning with cosine distance and only the top weights as in https://arxiv.org/pdf/1804.09458.pdf and other recent works should also be considered\\n\\nWhy do the authors not include any of the baselines from MNIST experiments here, for example LwF.\\n\\nAblations study the need for normalization and dynamic margin, it seems these are helpful for accuracy and forward transfer (and not as critical for minimizing forgetting).\\n\\n\\nThe author state their method is agnostic to the task boundaries, its a bit unclear what this means in this context. The procedure is not online and the labels of the samples are being used? If the authors are referring to the need to add additional outputs to the \\u201cvanilla\\u201d model this seems like it can be trivially addressed by simply saying outputs are added the first time a new class is seen thereby making it agnostic to the boundary in the same sense as this method. \\n\\nClarity \\nCan be problematic at times. 
Although all the elements of the approach are outlined, the motivations are overly wordy and repetitive, making them actually hard to follow. \\n\\n-(minor) first/2nd paragraph of 3.1 seems a bit redundant, making it hard to follow\\n\\nOverall I think the idea to consider metric learning and local adaptation for continual learning is interesting; however, the work is currently lacking in both experimental evidence (appropriate comparisons) and clear motivation/difference to existing work for its particular instantiation of this idea. \\n\\n++++Post Rebuttal++++\\n\\nThank you for your detailed responses.\\n\\nThe clarification about \\u201ctask-agnostic\\u201d for the experiments does make them look more relevant than I had previously assessed. I do want to note that the language used for this is inconsistent with the one used in other papers, which typically call this a \\u201cshared-head\\u201d setting (https://arxiv.org/pdf/1801.10112.pdf, https://arxiv.org/pdf/1805.09733.pdf, https://arxiv.org/pdf/1903.08671.pdf ). It is also somewhat inconsistent with the authors' own definition of \\u201ctask agnostic learning\\u201d given in the introduction of this paper, which implies it is something related to task boundaries at training time; in fact, this is something related to availability of the task id at test time. I suggest the authors make this clearer. Furthermore, the authors should highlight all this in the experiment text, e.g. noting EWC does poorly but this is because we use a different protocol than this and this paper, etc.\\n\\nRegarding the experiments, under this light they do look more reasonable. Indeed, it has been observed that EWC works poorly in the shared-head setting (https://arxiv.org/pdf/1801.10112.pdf).\\nRegarding the new 5-task CIFAR-10, the results are interesting; however, I will point the authors to the work above (Rwalk), which also reports results in this setting better than theirs (but not by too much). \\n\\nI do however still have issues regarding the memory usage of the method, specifically which data needs to be stored from previous tasks. It is still not completely clear, and I find it obfuscated, since just one sentence, not even fully answering the concern, was added to the manuscript despite myself and another reviewer asking about it. My understanding based on the (somewhat conflicting) responses of the authors is they store a substantial amount of prior task data, but most of this is only used at test time. For example, for ImageNet, as much as 1000 images/class are stored for test time. This begs the question: why not use this data for training as well, if it is allowed to be used by the model at testing time (and therefore preserved from the first task)? Why is the storage cost of this data not considered, and how do the authors justify this still being a lifelong learning setup? As an alternative, why can't one use a much bigger fully parametric model that uses the same amount of storage as the authors' model + stored images? It seems it is not fair to compare these to methods that can't utilize this large storage amount. \\n\\nFinally, it's not clear if this data is stored as raw images or somehow stored as embeddings. If it is stored as embeddings, this would require some discussion on how the authors avoid representation drift while the next task is training. If the authors store raw images, it means at evaluation time the entire raw dataset needs to be re-encoded, therefore the model can\\u2019t easily perform anytime inference. 
\\n\\nUnfortunately the discussion period ended, but I would have liked more clarification on this; on the other hand, these pieces of information should really have been in the manuscript in the first place. \\n\\nOverall, my impression of the paper is improved. But I do think it could use some further writing revisions to emphasize/clarify key points: a) the method is not new (it says, e.g., \\u201cnew model\\u201d in the abstract, which is misleading) but its application in CL is under-explored; b) the experiments show poor performance for existing methods because most of those are not designed for, nor work well in, the shared-head \\u201ctask agnostic\\u201d setting, while metric learning handles it gracefully; c) be explicit about what memory is being stored when moving on to the next task (this should be somewhere visible and explicit) and how this is justified\"}"
]
} |
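Several threads in the record above fit together: the one-anchor-per-class kNN readout with k = 15 described in the author replies, the triplet loss with an added cluster-tightening term questioned by Review #1, and the normalization identity ||u - v||^2 = 2 - 2<u, v> cited in the authors' A8. The PyTorch sketch below combines them under assumptions: the margin, the tightening weight, and the toy data are illustrative choices, not the authors' released implementation or reported hyperparameters.

```python
import torch
import torch.nn.functional as F

def triplet_plus_cluster_loss(anchor, positive, negative, margin=0.2, tight=1.0):
    # Embeddings are L2-normalized, so squared Euclidean distance is a monotone
    # transform of cosine similarity: ||u - v||^2 = 2 - 2<u, v>.
    a, p, n = (F.normalize(t, dim=-1) for t in (anchor, positive, negative))
    d_ap = (a - p).pow(2).sum(-1)
    d_an = (a - n).pow(2).sum(-1)
    # standard triplet hinge, plus an extra term that pulls clusters tight
    return (F.relu(d_ap - d_an + margin) + tight * d_ap).mean()

def knn_predict(queries, stored, stored_labels, k=15):
    # classify each query by majority vote among its k nearest stored samples
    q = F.normalize(queries, dim=-1)
    s = F.normalize(stored, dim=-1)
    idx = torch.cdist(q, s).topk(k, largest=False).indices
    return stored_labels[idx].mode(dim=-1).values

# toy usage: 5 classes, 3 stored samples each, 8-dimensional embeddings
stored = torch.randn(15, 8)
labels = torch.arange(5).repeat_interleave(3)
print(knn_predict(torch.randn(4, 8), stored, labels, k=3))  # 4 predicted labels
```

Because the embeddings are normalized, minimizing this Euclidean objective is equivalent to maximizing cosine similarity, which is the point the authors make in their A8 reply to the cosine-distance baseline suggestion.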
BJe4oxHYPB | Winning the Lottery with Continuous Sparsification | [
"Pedro Savarese",
"Hugo Silva",
"Michael Maire"
] | The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield better performance than their original counterparts. The proposed algorithm to search for such sub-networks (winning tickets), Iterative Magnitude Pruning (IMP), consistently finds sub-networks with 90-95% fewer parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.
In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. We show empirically that our method is capable of finding tickets that outperform the ones learned by Iterative Magnitude Pruning, while at the same time providing up to 5 times faster search, when measured in number of training epochs. | [
"continuous sparsification",
"tickets",
"lottery",
"faster",
"iterative magnitude pruning",
"lottery ticket hypothesis",
"frankle",
"carbin",
"neural networks",
"possible"
] | Reject | https://openreview.net/pdf?id=BJe4oxHYPB | https://openreview.net/forum?id=BJe4oxHYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"15XxZlBJ_",
"S1gZUJAdsH",
"B1xtv2p_oH",
"HkgZhjp_jH",
"rJgsviadjB",
"ryevJia_jS",
"BJl-_9auoH",
"Bklcxb3i5r",
"SJlzXqz19r",
"S1xheijnFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750621,
1573605192592,
1573604449435,
1573604264975,
1573604195077,
1573604062645,
1573603945176,
1572745458399,
1571920410132,
1571760883795
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2504/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2504/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2504/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2504/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2504/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new algorithm called Continuous Sparsification (CS) to search for winning tickets (in the context of the Lottery Ticket Hypothesis from Frankle & Carbin (2019)), as an alternative to the Iterative Magnitude Pruning (IMP) algorithm proposed therein. CS continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. The papers shows empirically that CS finds lottery tickets that outperforms the ones learned by ITS with up to 5 times faster search, when measured in number of training epochs.\\n\\nWhile this paper presents a novel contribution of pruning and of finding winning lottery tickets and is very well written, there are some concerns raised by the reviewers regarding the current evaluation. The paper presents no concrete data on the comparative costs of performing CS and IMP even though the core claim is that CS is more efficient. The paper does not disclose enough detail to compute these costs, and it seems like CS is more expensive than IMP for standard workflows. Moreover, the current presentation of the data through \\\"pareto curves\\\" is misleadingly favorable to CS. The reviewers suggest including more experiments on ImageNet and a more thorough evaluation as a pruning technique beyond the lottery ticket hypothesis. We recommend the authors to address the detailed reviewers' comments in an eventual ressubmission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Will Review Response Soon!\", \"comment\": \"Thank you to the authors for their detailed response. I'll try to take a look soon and offer more feedback.\"}",
"{\"title\": \"Paper revision\", \"comment\": [\"We thank all the reviewers for the valuable feedback. We have revised the paper, incorporating several changes suggested in the reviews. The major changes are:\", \"Experiments for Figure 3: added error bars computed from 3 different runs, added curves showing performance of tickets when re-initialized (dashed lines).\", \"Added Section 4.3, where we perform one-shot pruning on a VGG network, and compare against magnitude pruning and stochastic l0 regularization.\", \"Added an Appendix with empirical analysis on how each hyperparameter affects the performance and sparsity of tickets found by our method.\", \"We will make our code publicly available in the near future.\"]}",
"{\"title\": \"Response to reviewer 4 [1/2]\", \"comment\": \"Thank you for your extensive comments and detailed review. We address your points individually below \\u2014 please let us know if we can clarify or address any further concerns.\\n\\n\\n\\n- \\u201cthe proposed technique is inconsistent across runs\\u201d\\n\\nOur method is in fact consistent across runs. To clarify, Figure 3 shows both a pareto curve (green) and multiple runs of Continuous Sparsification, each with different hyperparameter settings. For a given set of hyperparameters, the behavior of our method is consistent. The different sparsity trajectories are attained by running our method with different hyperparameter settings (which directly, and intuitively, affect the final sparsity of the model).\\n\\nWe have re-ran our experiments with 3 different random seeds for each hyperparameter setting, and have added error bars to Figure 3 accordingly. Note that the variance of the tickets\\u2019 performance from the 2nd round onward (2nd and next markers of purple curves) is smaller than the variance of tickets found by IMP. We have also added an Appendix with hyperparameter analysis for our method, showing that changes in hyperparameter values have consistent impacts on the performance and sparsity of found tickets.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cI would be particularly interested in seeing how this technique performs on a large-scale network for ImageNet \\u201c\\n\\nWe have not performed these experiments due to the computational costs of fully training an ImageNet model for many iterations in a sequential fashion. In particular, Frankle et al. (\\u201cStabilizing the Lottery Ticket Hypothesis\\u201d) train a ResNet 50 for over 1300 epochs on ImageNet. Nonetheless, we are currently working on evaluating our method when training a ResNet 50 on ImageNet, and will add them to the camera-ready version of the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cDid the paper study Resnet-18 (a network designed for ImageNet with 11.2M parameters) or Resnet-20\\u201d\\n\\nThanks for pointing this out. We used a ResNet-20 for our experiments, and we have updated the paper to clarify this.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cAre the values of s0 for CS sampled from a distribution, or are they all the same, fixed value?\\u201d\\n\\nWe initialize all parameters of the soft mask with the same value. Note that the value used significantly affects the final sparsity of the model: in Figure 3, different runs of CS had all hyperparameters fixed except for s_0. We have added more details on how s_0 was chosen to generate the results in Figure 3, and the new section in the Appendix shows how s_0 affects the sparsity of the tickets.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cDo the extra parameters lead to longer wall-clock training times? If you are making an argument about a more efficient technique, this is an important consideration.\\u201d\", \"we_have_measured_the_wall_clock_training_time_of_iterative_magnitude_pruning_and_continuous_sparsification_on_a_1080_ti\": \"our method resulted in 15% extra wall-clock time per training epoch. 
We have added this information to the revised paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cFigure 2 includes the same graph twice. I believe a different graph should appear on the left, and I am eager to take a look at it in a revised version of the paper.\\u201d\\n\\nThanks for pointing this out. We have corrected this in the revised version of the paper.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cIt does not appear that multiple replicates of each experiment were run with different network initializations in Figure 3.\\u201d\\n\\nAs mentioned above, we have updated Figure 3 to have error bars computed from runs with 3 different random seeds.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cWhat is the performance of CS as a pruning technique?\\u201d\\n\\nWe have added new experiments showing that our method yields competitive results when pruning VGG trained on CIFAR, outperforming both magnitude pruning and stochastic l0 regularization (with straight-through gradient estimation). Results are presented in Section 4.3: our method successfully prunes 99.7% of the parameters while still maintaining over 90% test accuracy, while both magnitude pruning and stochastic sparsification suffer from severe performance degradation when over the pruning rate is over 98%, achieving less than 70% accuracy. We will add results for the stochastic l0-based method in Louizos et al., which uses the hard-concrete distribution, to the camera-ready version of our paper.\"}",
"{\"title\": \"Response to reviewer 4 [2/2]\", \"comment\": \"- \\u201cThe last paragraph of Section 4.1 is missing important details (...) for which hyperparameters, and how many hyperparameters had to be explored to find these values?\\u201d\\n\\nWe have added details on exactly what hyperparameters were used to achieve the results in Figure 3. In particular, we used a fixed final temperature of 250, a penalty lambda of 1e-10, and varied s_0 across 6 different values (-0.05, -0.03, -0.02, -0.01, -0.005, 0) to control the sparsity of the found ticket. We also added a section to the Appendix showing how each hyperparameter affects our method: in a nutshell, our results are robust to changes in both the final temperature and lambda (we used a fixed starting temperature of 1 across all runs), showing that our main hyperparameter is the mask initialization s_0.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cThe final sparsity appears to be a function of the values of s0, lambda, and beta\\u201d / \\u201cFor the same values of s, lambda, and beta, how widely do the final sparsities and accuracies vary\\u201d\", \"our_responses_above_also_address_these_points\": \"in particular, we have added error bars to Figure 3 to show how the sparsity varies across runs with the same hyperparameter settings; we also added a section to the Appendix studying how each hyperparameter affects our the sparsity of the tickets.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cIf I wanted to find a winning ticket with a particular sparsity using CS, what would the procedure look like for doing so? \\n\\nThis is an excellent question. As mentioned previously, in practice the sparsity of the final model is almost fully determined by the value of s_0. To achieve a desired sparsity, one can either perform runs in parallel with different values for s_0, or perform sequential binary search if the goal is to minimize the overall computational cost and not wall-clock time (new results in the Appendix show that sparsity is monotonically decreasing with s_0, making binary search possible). In practice, we observed that s_0 = 0 yields high accuracy and around 70% final sparsity for both ResNet-20 and VGG, and s_0 in {-0.01, -0.05, -0.1} is typically enough to achieve well-spaced sparsity values up to 95%.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cWhy didn't you also try it with weight rewinding?\\u201d\\n\\nWe observed empirically that rewinding offers an explicit trade-off between required training time and how good the mask is after many training iterations. More specifically, training with rewinding allows for our method to perform many iterations without the mentioned performance drop \\u2014 however, training without rewinding increases the performance in the first iterations, since the network does not need to be fully re-trained at each iteration. We will add an extra section to the Appendix with empirical results showing this phenomena to the camera-ready version.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cDo these winning tickets have different properties from [1, 2]? Can they be reinitialized? 
Can they be rewound earlier?\\u201d\\n\\nWe have added empirical results showing that tickets found by Continuous Sparsification cannot be re-initialized without significant performance degradation (dashed lines in Figure 3). We also confirmed that rewinding to epoch 2 is necessary to find winning tickets on ResNet 20: disabling it (rewinding to initialization) yields tickets that underperform the original, dense network.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cit would be useful to include random reinitialization or random pruning baselines as in [1, 2]\\u201d\\n\\nWe have added random initialization for tickets to the revised version of the paper (dashed curves in Figure 3). We will add random pruning baselines to the camera-ready version.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cwhat happens if you run IMP such that each iteration prunes to the same sparsity as achieved by each iteration of CS?\\u201d\\n\\nThis is an interesting avenue for exploration. In practice, we observe that when the final sparsity is large enough (e.g. over 80%) CS performs virtually all pruning in the first one or two iterations, hence replicating the pruning rate with IMP would be similar to running it for a single iteration with a large pruning rate. We have added results where we run IMP with pruning rates larger than 20% to the Appendix: in particular, there is visible performance degradation of the tickets found by IMP even with a pruning rate of 40%. We will add extra experiments to the camera-ready version, where we run IMP to mimic the pruning rate of CS in each iteration.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cWhat are the range of results achieved if IMP and CS are run \\\"one-shot\\\" (pruning after just one iteration; in the case of IMP, pruning directly to a desired sparsity)?\\u201d\\n\\nWe have addressed this point above: we added Section 4.3, where CS is compared against magnitude pruning and stochastic l0 regularization in the task of one-shot pruning on VGG.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"Thank you for the review and comments. We address your points individually below \\u2014 please let us know if we can clarify or address any further concerns.\\n\\n\\n\\n- \\u201clittle connection to the lottery ticket\\u201d\\n\\nOur method was designed in the scope of finding winning tickets in large networks. A notable difference between pruning and ticket search (as done with IMP) is that ticket search requires more iterations compared to pruning, resulting in computational costs that might be prohibitive (which is not typically the case for pruning). The core idea of Continuous Sparsification is to use a deterministic re-parameterization to learn the masks, hence avoiding having to minimize a stochastic objective or use gradient estimators which create additional variance. The main design goal of Continuous Sparsification is to be fast: a concern that is due to ticket search.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cit does not give the comparison between lottery ticket and random initialization based on new pruning method.\\u201d\\n\\nWe have added curves to Figure 3 (dashed lines) presenting the performance of sub-networks when they are randomly re-initialized instead. The performance is visibly inferior to sub-networks whose parameters have been rolled back to their original initialization, which agrees with the observations in Frankle & Carbin.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cAs a pruning method, it does not show the results on common models like VGG and DenseNet\\u201c\\n\\nWe have added new results showing that, when pruning a VGG trained on CIFAR, our method outperforms both magnitude pruning and stochastic l0 regularization by a wide margin. Results are presented in Section 4.3: our method successfully prunes 99.7% of the parameters while still maintaining over 90% test accuracy, while both magnitude pruning and stochastic sparsification suffer from severe performance degradation when over the pruning rate is over 98%, achieving less than 70% accuracy.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cgives results of ResNet-18 while the setting is not the best setting. Normally we need to train at least 120 epochs\\u201d\\n\\nWe have precisely followed the training protocol in Frankle et al., where a ResNet is trained for over 15 iterations, each consisting of 85 epochs. We have also used exactly the same hyperparameters, including learning rate, batch size, weight decay, and learning rate schedule. This was done for multiple reasons, including to have a fair comparison between the two methods, and to show readers that, as our results with Iterative Magnitude Pruning match the ones reported in Frankle et al., our implementation is consistent with theirs.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cIt also does not give the experiment on ImageNet\\u201d\\n\\nWe have not performed these experiments due to the computational costs of fully training an ImageNet model for many iterations in a sequential fashion. In particular, Frankle et al. (\\u201cStabilizing the Lottery Ticket Hypothesis\\u201d) train a ResNet 50 for over 1300 epochs on ImageNet. 
Nonetheless, we are currently working on evaluating our method when training a ResNet 50 on ImageNet, and will add them to the camera-ready version of the paper.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you for your comments. We address your points individually below \\u2014 please let us know if we can clarify or address any further concerns.\\n\\n\\n\\n- \\u201cit basically combines two ideas that are already in the literature\\u201d:\\n\\nOur method significantly differs from both Zhou et al. and Louizos et al. as we use a novel deterministic re-parameterization for the mask, which avoids the need of gradient estimators and makes training and thus ticket search significantly faster.\\n\\nThe baseline \\u2018Iterative Stochastic Sparsification\\u2019 (Algorithm 2) is what would be a naive combination of ticket search and the method in Zhou et al. that directly optimizes the mask. Our experiments show that our method, Continuous Sparsification, yields significantly better results in both learning a supermask (Figure 2) and ticket search (compare green and red curves in the left plot of Figure 3). In particular, the baseline Iterative Stochastic Sparsification underperforms Iterative Magnitude Pruning, showing that our deterministic re-parameterization was indeed necessary to push the state-of-the-art performance in ticket search.\\n\\nTo further support this, we have added new results showing that, when pruning a VGG network trained on CIFAR, our method outperforms both magnitude pruning and stochastic l0 regularization. Results are presented in Section 4.3: our method successfully prunes 99.7% of the parameters while still maintaining over 90% test accuracy, while both magnitude pruning and stochastic sparsification suffer from severe performance degradation when over the pruning rate is over 98%, achieving less than 70% accuracy. These results show that our deterministic re-parameterization is fundamentally different than the stochastic re-parameterizations proposed in previous works such as Zhou et al.: it provides superior performance in both ticket search and one-shot pruning, while at the same time being simpler by not requiring gradient estimators.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n- \\u201cIn particular, the fact that continuous sparsification can find winning tickets without any parameter rewinding is fascinating and deserves further investigation. Do the authors have any sense for why this works, when prior work suggests that rewinding is necessary for sufficiently complicated models and datasets?\\u201d\", \"we_hypothesize_that_rewinding_is_necessary_only_if_ticket_search_consists_of_many_training_epochs\": \"in this case, the parameters can \\u2018move\\u2019 far from the initialization values, at which point the mask might not be suitable for the parameters close initialization. More specifically, after T parameter updates, we have weights w_T and mask m_T, where m_T was computed from either w_T or w_{T-1} (using the magnitudes of w_T in IMP, or after a gradient update from {T-1} in CS). However, since the ticket is given by (w_k, m_T), for small k, if w_T differs too much from w_k (say, ||w_T - w_k|| is large), then m_T might be highly suboptimal for w_k, and the ticket can fail to be successfully re-trained. Since in our method the number of updates T is significantly smaller than in IMP, the need to rewind the weights back to w_k is diminished. 
This is briefly described in the last paragraph of Section 4.2.\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014-\\n\\n\\u201c\\\"Results are presented in Figure 2\\\" both instances of \\\"SP\\\" should be \\\"SS\\\" instead.\\u201d\\n\\nThanks for pointing out the typo \\u2014 we have fixed it in the revision.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"\", \"to_the_authors_of_paper_2504\": \"I have posted a private comment for the reviewer/AC discussion period based on your author responses and your revised paper. I want you to be able to see my full response, but I can't post additional public comments to the paper. As such, I'm editing my review with exactly what I sent to the AC.\\n\\nThank you to the authors for your thoughtful rebuttal and for updating the paper with both new text and (more impressively) new experiments. I read all three of your rebuttals and re-read the revised paper in detail. My comment to the AC is at the bottom of this message. I don't know if you will get an alert that the review was updated, but I hope you get a chance to take a look.\\n\\n===========================================\\n\\nEXECUTIVE SUMMARY OF REVIEW\", \"summary_of_paper\": \"This paper proposes a technique that simultaneously trains a neural network and learns a pruning mask. The goal of this technique is to make it faster to retroactively find sparse subnetworks that, from a point early in training, could have trained in isolation to the same performance as the full network (\\\"winning tickets\\\"). The best subnetworks found by this technique outperform those found by existing techniques [1] at every sparsity level.\", \"summary_of_review\": \"The technique introduces several new hyperparameters whose values are asserted without describing the extent of the search necessary to find them; it unclear whether the cost of the search cancels out the efficiency gains. In addition, the proposed technique is inconsistent across runs. The sparsity and accuracy of the subnetworks it produces vary greatly, and there are no hyperparameters to explicitly control these outcomes. It is also unclear whether results vary from run to run even with the same hyperparameters. As such, this technique is not clearly a cost reduction as compared to existing approaches in [1] when it comes to studying lottery tickets. Moreover, the paper implicitly proposes a new pruning technique, and it should be evaluated against other related techniques in the pruning literature. Finally, the evaluation needs more rigor.\", \"conclusion\": \"It is difficult to determine whether the technique is an improvement over existing methods since the true costs of using it in practical workflows are unclear. Weak reject.\", \"opportunity_to_improve_score\": \"Include a more detailed analysis of the overall costs of finding winning tickets at a target sparsity using the proposed technique, particularly including the costs of hyperparameter search necessary to do so. Include multiple replicates of experiments, experiments on more networks, and information about the variance of performance across runs with the same hyperparameters. Compare against other state-of-the-art pruning techniques.\\n\\nPROBLEM STATEMENT AND PROPOSED SOLUTION\", \"problem\": \"Winning tickets are currently expensive to find. The best known procedure [1, 2] is \\\"iterative magnitude pruning\\\" (IMP), which involves repeatedly training a network to completion, pruning by a fixed percentage, and \\\"rewinding\\\" weights to an early iteration of training until the network reaches the desired level of sparsity. 
To reach sufficient sparsity on standard networks, this procedure must be repeated 10 or more times.\", \"goal\": \"To propose a procedure that finds winning tickets more efficiently.\", \"significance\": \"A more efficient procedure would make it easier to study the lottery ticket phenomenon. (Whether studying that phenomenon is, itself, significant is debatable.) Personally, I have extensive experience using IMP to find winning tickets, so techniques to reduce the cost of finding winning tickets would be very valuable for my work.\", \"proposed_solution\": \"The authors propose \\\"continuous sparsification\\\" (CS), which makes it possible to learn which weights to prune simultaneously with the weights themselves. Accompanying each parameter w in the network is a second parameter s. The actual weight used in the network is w * sigmoid(s * beta), where beta is a temperature hyperparameter. If the learned value of s is such that sigmoid(s * beta) is approximately 0 at the end of training, then the parameter has been pruned. The value of beta increases exponentially throughout training, meaning the output of the sigmoid will be closer to a hard 0 or 1, producing a pruning mask. To ensure sparsification happens, a regularization term lambda |sigmoid(beta * s)| is added. When run over multiple iterations, values of s are reset to their original values for weights that are not pruned. In addition, weights are either rewound (as in IMP) or left at their final values (as in [3]).\", \"novelty\": [\"The paper is slightly novel. The technique is a variation of those proposed in [4] and [5], but the changes are meaningful. This is the first known use for finding winning tickets, but any pruning technique could hypothetically be used for this purpose. In effect, the paper proposes a new pruning technique that is primarily evaluated for its efficacy in finding winning tickets.\", \"TECHNICAL REVIEW\", \"This technique introduces several new hyperparameters: lambda, initial and final betas, and the initial values for s. The paper suggests good values for these hyperparameters for both networks considered. These values were presumably found through hyperparameter search of some kind. How extensive was this hyperparameter search, and what is the range of \\\"good\\\" combinations of values? This is not just a methodological footnote; this paper's stated goal is to improve the efficiency of finding winning lottery tickets, and - if good hyperparameters are hard to find - then that defeats the purpose of a more efficient technique. IMP, while far less efficient epoch for epoch as compared to CS, requires no hyperparameter search; global pruning 20% of parameters per iteration seems to work well in general [2]. In a revised version of the paper, I would be eager to learn more about this set of tradeoffs, since that is what matters in practice.\", \"The authors only study two networks: a toy convolutional network and a small Resnet. It is hard to draw broad conclusions from such a limited set of examples. I would be particularly interested in seeing how this technique performs on a large-scale network for ImageNet (e.g., Resnet-50), since these are the situations where IMP becomes particularly cost-prohibitive. If the technique works well in these settings, it would enable lottery ticket research at much larger scales than is currently possible. If the technique works as efficiently at this scale, then doing so could even be feasible during the rebuttal period. 
(I acknowledge getting experiments working on ImageNet is no small undertaking in terms of both engineering time and cost, but it would improve my confidence to see those results.)\", \"The authors did an admirably careful job replicating the networks in [1], which include a variety of nonstandard hyperparameters.\", \"Did the paper study Resnet-18 (a network designed for ImageNet with 11.2M parameters) or Resnet-20 (a network designed for CIFAR-10 with 272K parameters)? Frankle et al. [1, 2] describe Resnet-20 in their appendices but mistakenly refer to it as Resnet-18 throughout both papers, so I wanted to clarify. Based on the final test accuracy of the network, it appears to be Resnet-20; if so, I'd urge you to call it as such and note in a parenthetical or footnote that it's the same network as in [1, 2] but with Frankle et al.'s mistaken name corrected.\", \"Are the values of s0 for CS sampled from a distribution, or are they all the same, fixed value?\", \"Do the extra parameters lead to longer wall-clock training times? If you are making an argument about a more efficient technique, this is an important consideration.\", \"Figure 2 includes the same graph twice. I believe a different graph should appear on the left, and I am eager to take a look at it in a revised version of the paper.\", \"It does not appear that multiple replicates of each experiment were run with different network initializations in Figure 3. There don't appear to be any error bars on Figure 3, suggesting that this only represents a single initialization. Considering the wide variance in performance achieved by continuous sparsification across runs in Figure 3 (right), this graph needs to include multiple runs and error bars. (I assume the multiple runs shown in Figure 3 (right) are with different hyperparameters?)\", \"What is the performance of CS as a pruning technique? By finding winning tickets, CS is also implicitly pruning the network. Is this competitive with L0 Regularization [4]? Is it more efficient than iterative magnitude pruning as in [3]? If this is a more efficient technique for finding winning tickets, it is also liable to be a more efficient pruning method, which seems like even broader impact for the proposed technique. Alternatively, if this technique is less effective than comparable work (especially [4]) as a pruning method, then it is possible comparable work (especially [4]) might also produce better winning tickets, in which case the importance of this work is diminished. In an updated version of the paper, I would be interested in seeing an evaluation of CS as a pruning technique independent of the lottery ticket hypothesis (and compared to standard techniques in the pruning literature as such). I have a hard time seeing any reason why new techniques for finding lottery tickets are any different than new pruning techniques, and they should be evaluated as such in the context of the broader literature on pruning.\", \"The last paragraph of Section 4.1 is missing important details that are necessary to evaluate the utility of continuous sparsification in comparison to IMP. \\\"In only two iterations, CS finds a ticket with over 77% sparsity...\\\" - for which hyperparameters, and how many hyperparameters had to be explored to find these values? How was the pareto curve obtained? How many different runs were necessary to create it? How many total epochs of training did it take to find that curve?
If obtaining this pareto curve required many runs of CS, then it may not be any more efficient than running IMP in practice. The pareto curves appear to make CS look misleadingly effective, since they hide many of the actual costs involved (e.g., hyperparameter search, number of separate runs that were conducted to produce the curve, etc.). Greater transparency of this aspect of the paper would go a long way toward increasing my confidence that the findings are an improvement over IMP.\", \"The sparsity and accuracy of subnetworks found by CS appear to vary widely from run to run as shown in Figure 3 (right). The final sparsity appears to be a function of the values of s0, lambda, and beta, potentially along with luck from the optimization process. For the same values of s, lambda, and beta, how widely do the final sparsities and accuracies vary? If I wanted to find a winning ticket with a particular sparsity using CS, what would the procedure look like for doing so? Would I have to sweep across values of these hyperparameters, or is there a more straightforward way to do so? The practical usefulness of the procedure hinges on these questions.\", \"\\\"We associate the performance drop of highly sparse tickets found by our method from the second iteration onwards to the lack of weight rewinding.\\\" Why didn't you also try it with weight rewinding? That seems like an easy way to evaluate this hypothesis. For both networks, it would be interesting to see the performance of CS with and without rewinding (analogous to Appendix B in [1]).\", \"In the literature so far, the only winning tickets to be examined are those from IMP [1, 2]. CS is a different technique, and it likely finds different winning tickets. Do these winning tickets have different properties from [1, 2]? Can they be reinitialized? Can they be rewound earlier? These comparisons seem like an interesting scientific opportunity.\", \"There are a few additional comparisons that I think are vital to include in the paper to appropriately contextualize results. They're in the bullets below.\", \"First comparison: it would be useful to include random reinitialization or random pruning baselines as in [1, 2] simply to make it easier for the reader to contextualize the performance of other sparse subnetworks.\", \"Second comparison: what happens if you run IMP such that each iteration prunes to the same sparsity as achieved by each iteration of CS? Perhaps pruning by a fixed amount per iteration in IMP is wasteful, and one can prune more aggressively during earlier iterations as CS naturally appears to do. In other words, one way of explaining the advantage of CS would be that it prunes more aggressively. Is this indeed the case? I would be very curious to know.\", \"Third comparison: What is the range of results achieved if IMP and CS are run \\\"one-shot\\\" (pruning after just one iteration; in the case of IMP, pruning directly to a desired sparsity)? That is, how well can these techniques do with just a single iteration?\", \"WRITING\", \"The writing is excellent. The prose is clear, and I was able to fully understand a relatively sophisticated technique on the first read through the paper. Writing of this quality is rare, and the authors should be commended for it.\"], \"overall\": \"Weak Reject\\n\\nThe problem statement is that IMP is not efficient. The paper claims that CS is more efficient.
However, the paper does not present a convincing case that CS is, on the whole, more efficient when taking into account hyperparameter search to get CS to work, hyperparameter search to target a particular sparsity, potential variance across runs of CS, potential additional training costs of CS, and the possibility that IMP might be able to work comparably well given a more aggressive pruning schedule.\\n\\nIn addition, the evaluation needs more experiments, including multiple replicates for each experiment and more networks (ideally one on ImageNet).\\n\\nFinally, I am unclear on what distinguishes CS from any other pruning technique, and it should be evaluated in the context of the broader pruning literature.\\n\\nIf the authors clarify that the overall cost of CS (including all of the factors listed above) is lower than for IMP, address technical concerns about the evaluation, and evaluate CS as a pruning technique in an updated version of the paper, I will update my score accordingly.\\n\\n[1] Frankle & Carbin. \\\"The Lottery Ticket Hypothesis.\\\" ICLR 2019.\\n[2] Frankle et al. \\\"Stabilizing the Lottery Ticket Hypothesis.\\\" arXiv.\\n[3] Han et al. \\\"Learning both Weights and Connections for Efficient Neural Networks.\\\" NeurIPS 2015.\\n[4] Louizos et al. \\\"Learning Sparse Neural Networks through L0 Regularization.\\\" ICLR 2018.\\n[5] Zhou et al. \\\"Deconstructing Lottery Tickets: Signs, Zeros, and the SuperMask.\\\" NeurIPS 2019.\\n\\n=====================================\\n\\nCOMMENT TO THE AC AFTER READING REBUTTALS AND REVISED PAPER\\n\\nTLDR\\n\\nI believe the technique is novel and should be published, regardless of whether it is actually more efficient than IMP for practical workflows.\\n\\nHowever, I believe the current evaluation is inadequate, both in the experiments and in the way data is presented. Namely, it is impossible to actually compare the costs of CS and IMP in the scenarios the authors evaluate, despite the fact that efficiency is the authors' claimed contribution. From the data as presented, it is unclear whether CS is actually more efficient than IMP for scientific use-cases. I'm not particularly concerned if there isn't an efficiency advantage - it's an innovative contribution regardless. My concern is that it is impossible to compare the costs of these techniques given the current presentation of data. That speaks to flaws in the evaluation section.\", \"i_therefore_maintain_my_score\": \"weak reject. The technique deserves to be published, but the paper in its current form does not.\\n\\nINTRODUCTION\\n\\nAfter reading the original submission, I had the following questions:\\n\\n1) Right now, the only way to find winning lottery tickets is through training the network and pruning it. IMP can be instantiated with any pruning technique, and the authors are really proposing a new pruning technique for use in the IMP framework. How does continuous sparsification perform as a pruning technique for network compression (i.e., independent of the lottery ticket hypothesis)?\\n\\n2) How efficient is continuous sparsification (CS) in the scientific use cases where one would seek to find winning lottery tickets?
In my research experience, there are two such use cases: (a) producing a winning lottery ticket with a specific sparsity and (b) producing winning lottery tickets across the full range of different sparsities.\\n\\nIn their rebuttal, the authors addressed these underlying questions:\\n\\nAS A PRUNING TECHNIQUE\\n\\n\\\"We have added new experiments showing that our method yields competitive results when pruning VGG trained on CIFAR, outperforming both magnitude pruning and stochastic l0 regularization\\\"\", \"my_response\": \"The initial results that the authors present are quite impressive on a VGG-style network for CIFAR-10. However, I find it concerning that the authors study pruning on this specific model but no others, particularly because they focus on a different network (namely, Resnet-20) in all other experiments in the paper. Since the authors are, in essence, proposing a new pruning technique, I would like to see it comprehensively evaluated as such on a range of networks against other baselines (as the authors do in Figure 4 for one network - those baselines are fine to me).\\n\\nEFFICIENCY OF CONTINUOUS SPARSIFICATION\\n\\n- It increases the cost of each individual network training run slightly: \\\"Continuous Sparsification on a 1080 Ti: our method resulted in 15% extra wall-clock time per training epoch.\\\"\\n\\n- PRODUCING WINNING TICKETS ACROSS THE FULL RANGE OF SPARSITIES: Throughout the paper, the authors present \\\"pareto curves\\\" showing the highest accuracy achieved by CS subnetworks at various sparsities. I find these pareto curves misleading: they hide the fact that CS had to be run several times with different hyperparameters to produce these curves.\\n\\nFor example, in Figure 6 (right), the authors produce the pareto curves by training Resnet-20 (by my count) 22 times (each point along the purple lines). The corresponding IMP curve appears to have 14 points. In other words, to get winning lottery tickets across all sparsities, CS must be run for more iterations than IMP - it appears to be less efficient, contradicting the authors' core claim. The best argument in favor of CS is that one could perform each of these runs in parallel if sufficient GPUs were available, meaning less wall-clock time would be required.\", \"one_caveat_to_this_analysis\": [\"the authors state that they run CS \\\"without rewinding,\\\" meaning that the second iteration of CS requires less training time than fully training the network (as IMP requires). The authors do not state how long they train when they *aren't* rewinding, so it is impossible to compare the efficiency of CS with IMP. All they say is that it \\\"allow[s] for even faster ticket search.\\\" They also do not study IMP without rewinding, which would be a helpful baseline for comparing to CS without rewinding.\", \"***In short, if the authors are making an argument that one technique is more efficient than the other on an epoch-for-epoch basis, they need to actually plot the epochs required by each technique.***\", \"PRODUCING A WINNING TICKET AT A SPECIFIC SPARSITY: My concern about this use case is that there is a non-intuitive relationship between the initialization of the sparsity parameters and the final sparsity of the network.
As the authors state in the rebuttal: \\\"To achieve a desired sparsity, one can either perform runs in parallel with different values for s_0, or perform sequential binary search if the goal is to minimize the overall computational cost and not wall-clock time.\\\" In other words, there is no precise way to target a particular sparsity other than trying many hyperparameter configurations. The authors do not provide any concrete costs of using CS in this way in comparison to IMP, and - from a usability perspective - this is a challenging workflow.\", \"OTHER CONCERNS\", \"The authors only study CS on one toy network (the six-layer convolutional network, which - in my experience - is a particularly easy setting compared to deeper networks) and one \\\"real\\\" network (Resnet-20 on CIFAR-10). I would like to see results on other networks for CIFAR-10 (for example, the VGG network in Section 4.3).\", \"More importantly, I would like to see results on an ImageNet network. If CS makes finding winning lottery tickets more efficient as the authors claim, then finding winning tickets efficiently on an ImageNet network should be an excellent demonstration of their contribution. This scenario has stretched IMP to its breaking point, as the authors note.\", \"ARGUMENTS IN FAVOR OF ACCEPTING\", \"The authors propose a new pruning technique that appears to improve upon the increasingly popular L0-regularization technique.\", \"With the right hyperparameters, the proposed technique makes it possible to find winning lottery tickets more efficiently than existing methods (i.e., IMP with magnitude pruning).\", \"The winning tickets found by the proposed technique reach higher accuracy than IMP winning tickets and produce winning tickets at more extreme sparsities, improving upon our knowledge of the existence of winning lottery tickets.\", \"ARGUMENTS IN FAVOR OF REJECTING\", \"The authors perform only minimal evaluation of their method as a pruning technique. It is possible that this is a missed opportunity to show an additional contribution. It is also possible that other, existing pruning techniques outperform CS at both pruning and finding winning lottery tickets.\", \"The technique is hard to use for the two existing use cases for finding winning tickets. In particular, there is no way to search for winning tickets at a specific sparsity.\", \"It is unclear whether CS is actually more efficient than IMP on an epoch-for-epoch basis, even though this is the main claimed contribution. The authors do not disclose - let alone plot - the number of training epochs required to find (a) winning tickets at a specific sparsity and (b) winning tickets at a range of sparsities, so it is impossible to make these comparisons. Meanwhile, the pareto curves the authors present are misleading, since they are the amalgamation of many separate runs of CS.\", \"The authors study their technique on only one \\\"real\\\" network (Resnet-20) for finding winning tickets and a separate \\\"real\\\" network (VGG-19) for pruning. The authors do not show how CS performs in other challenging settings, especially ImageNet.\", \"This work is only valuable if we believe that \\\"lottery ticket hypothesis\\\" work is valuable. In other words, this is a narrow contribution to an already-narrow area of study. This is one reason why pitching CS as a pruning technique would make this a stronger paper.
I personally believe that \"lottery ticket\" work is a valuable area of study, but I understand that it may not be seen as such in the broader ICLR community.\", \"CONCLUSION\", \"I believe the technique deserves to be published regardless of whether it is actually more efficient than IMP. It is a novel contribution to both our knowledge of pruning and of finding winning lottery tickets. The paper is exceptionally well written, and - as such - I believe it will help to inspire other research in this area.\", \"However, I do not believe that the current paper - namely, the current evaluation - should be published. The paper presents no concrete data on the comparative costs of performing CS and IMP even though the core claim is that CS is more efficient. The paper does not disclose enough detail to compute these costs, and it seems like CS is more expensive than IMP for standard workflows. Moreover, the current presentation of the data through \\\"pareto curves\\\" is misleadingly favorable to CS.\", \"I also believe that the paper needs experiments on ImageNet and needs a more thorough evaluation as a pruning technique beyond the lottery ticket hypothesis.\", \"I therefore retain my current score of \\\"weak reject,\\\" though I am eager to hear the thoughts of other reviewers, and I am open to changing my score.\"]}",
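The gating mechanism summarised in the "proposed_solution" field of the review above (effective weight w * sigmoid(beta * s), an annealed temperature beta, and an L1-style penalty on the soft mask) can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation; the annealing schedule, s0, and lambda values below are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cs_effective_weights(w, s, beta):
    # Continuous sparsification: gate each weight with sigmoid(beta * s).
    # As beta grows, the gate approaches a hard 0/1 pruning mask.
    return w * sigmoid(beta * s)

def cs_penalty(s, beta, lam):
    # Regulariser lambda * sum |sigmoid(beta * s)| pushing gates toward 0.
    return lam * np.sum(np.abs(sigmoid(beta * s)))

w = np.random.randn(256)
s = np.full(256, 0.05)                 # mask logits, initialised at a shared s0
for epoch in range(100):
    beta = 100.0 ** (epoch / 99.0)     # anneal beta from 1 to 100 (placeholder)
    w_eff = cs_effective_weights(w, s, beta)
    # ... gradient steps on task loss + cs_penalty(s, beta, lam=1e-8) go here ...
mask = (s > 0).astype(np.float32)      # final hard mask once beta is large
```

The reviewer's efficiency concern is visible in this sketch: lambda, s0, and the beta schedule all interact to determine the final sparsity, and none of them targets a sparsity level directly.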
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work propose a new iterative pruning methods named Continuous Sparsification. It will continuously prune the current weight until it reaches the target ratio instead of iterative prune the weight to specific ratio. The author gives a good analysis but the experiment is not yet convincing enough.\\n\\n1) This work actually presents compression algorithm with little connection with lottery ticket. As a lottery ticket discussion, it does not give the comparison between lottery ticket and random initialization based on new pruning method. As a pruning method, it does not show the results on common models like VGG and DenseNet with different depth. Figure. 3 only gives results of ResNet-18 while the setting is not the best setting. Normally we need to train at least 120 epochs. It also does not give the experiment on ImageNet. Thus making the conclusion less meaningful\\n\\n2) The author should also compare the continuous sparsification with one-shot pruning methods (non-iterative) to see the advantage of continuous sprsification.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel objective function that can be used to jointly optimize a classification objective while at the same time encourage sparsification in a network. The lottery ticket hypothesis and associated work shows that the iterative pruning of a network can lead to a sparse network that performs with high accuracy. On the other hand, the work of Zhou et al. shows that sparse masks (dubbed \\\"supermasks\\\") may be learned without training the parameters of the network. In a sense, this paper tries to combine these ideas by simultaneously training a network while also optimizing the mask.\\n\\nI think this paper serves as a reasonable contribution to the ever-growing \\\"lottery ticket hypothesis\\\" body of work. The paper is mostly clear, and the idea for joint optimization is very reasonable. It's not tremendously original (in that it basically combines two ideas that are already in the literature), but in spite of that, I still think this paper warrants being accepted to ICLR.\\n\\nFor me, the most interesting scientific point is about the issue of rewinding. In particular, the fact that continuous sparsification can find winning tickets without any parameter rewinding is fascinating and deserves further investigation. Do the authors have any sense for why this works, when prior work suggests that rewinding is necessary for sufficiently complicated models and datasets?\\n\\nA minor point, I think there's a typo on page 6 in that the paragraph beginning \\\"Results are presented in Figure 2\\\" both instances of \\\"SP\\\" should be \\\"SS\\\" instead.\"}"
]
} |
rylmoxrFDH | Critical initialisation in continuous approximations of binary neural networks | [
"George Stamatescu",
"Federica Gerace",
"Carlo Lucibello",
"Ian Fuss",
"Langford White"
] | The training of stochastic neural network models with binary ($\pm1$) weights and activations via continuous surrogate networks is investigated. We derive new surrogates using a novel derivation based on writing the stochastic neural network as a Markov chain. This derivation also encompasses existing variants of the surrogates presented in the literature. Following this, we theoretically study the surrogates at initialisation. We derive, using mean field theory, a set of scalar equations describing how input signals propagate through the randomly initialised networks. The equations reveal whether so-called critical initialisations exist for each surrogate network, where the network can be trained to arbitrary depth. Moreover, we predict theoretically, and confirm numerically, that common weight initialisation schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance. This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to $\pm 1$ for deeper networks to be trainable. | [
"continuous approximations",
"surrogates",
"stochastic binary weights",
"critical initialisation",
"binary neural networks",
"training",
"binary",
"weights"
] | Accept (Poster) | https://openreview.net/pdf?id=rylmoxrFDH | https://openreview.net/forum?id=rylmoxrFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"V5p4pQNXef",
"BJxzjHA9or",
"ryeDYCQKjr",
"B1lIUoQKoH",
"SylTKmmYjB",
"r1xtybOgir",
"SkxNKaup9B",
"HkeufHm1cB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750593,
1573737882290,
1573629566749,
1573628749561,
1573626756837,
1573056736564,
1572863356031,
1571923215568
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2503/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2503/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2503/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2503/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2503/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors study neural networks with binary weights or activations, and the so-called \\\"differentiable surrogates\\\" used to train them.\\nThey present an analysis that unifies previously proposed surrogates and they study critical initialization of weights to facilitate trainability.\\n\\nThe reviewers agree that the main topic of the paper is important (in particular initialization heuristics of neural networks), however they found the presentation of the content lacking in clarity as well as in clearly emphasizing the main contributions. \\nThe authors imporved the readability of the manuscript in the rebuttal.\\n\\nThis paper seems to be at acceptance threshold and 2 of 3 reviewers indicated low confidence.\\nNot being familiar with this line of work, I recommend acceptance following the average review score.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for revisiting the paper. While it has indeed improved, I believe my original rating still applies and I've decided to retain it.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your comments, and specific advice.\\n\\nWe agree with the emphasis on the language of mathematical papers, and have followed your advice, including in the revision Definitions, Assumptions, Claims and Proofs.\\n\\nWe hope this reorganisation of sections 2 and 3 under this advice have made the paper more readable. \\n\\nWe also hope you find that 'value added' to be very clear from the revised introduction (at a big picture) and that throughout the paper the the original contributions to be more clear.\\n\\nAs a final remark, we thank the reviewer for their concrete advice. This prompted us to reconsider several contributions that were originally downplayed in the first submission. Briefly these are:\\n- novel derivations of both surrogates based on Markov chain representation\\n- new reparameterisation trick surrogate for stochastic binary weights *and* neurons\\n- correct backpropagation for the deterministic surrogate (not truncated as in Soudry et al.)\\n- derivation of critical initialisation for continuous neuron-stochastic binary weight networks\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your comments and specific issues raised.\", \"with_regards_to_the_following_comments\": \"\\\"The authors provide concrete advice: the mean of the stochastic weights should be close to +/-1 at initialisation. Being able to give such advice to ML practitioners is of great value, but since this advice feels counter intuitive (naively, the mean of a binary +/-1 activation is typically the output of a tanh, and initialising this close to +/-1 means that gradient are going to be roughly 0, and can easily be exactly 0 in low precision computes).\\\"\\n\\n- You are correct that the advice is to initialise the *means* of the stochastic *weights* close to +/-1. Your intuition on activations, such as tanh, saturating and thus producing zero gradients is also right.\\n However, we do *not* initialise activations!\\nThe theory developed by Poole et al. and Schoenholz et al. actually addresses the saturation of activations, in a more general framework - either looking at forward propagation (we stay (at initialisation) on the linear parts of the activation), or the equivalet of controlling the Jacobian's average (squared) singular values - if their average =1 (= chi_1), this means there can be no saturation or zero gradient.\\nCritical initialisations means precisely this setting (chi_1 =1) .\\n\\n\\\"Right now, the justification of this initialisation is hard to find in the paper. This point would deserve a dedicated section, giving a summary of the argument, with references to more technical parts of the paper\\\"\\n\\n-Thank you for raising this, it is an important point, and a good suggestion. \\n- In the opening of section 3 we have described this idea, pointing to Claims 1 and 3, which establish the +/-1 initialisation of the weights' means. \\n- We also emphasise this advice in the introduction and discussion. The paper also has an outline in the introduction which points to these sections.\\n- More intuitive explanations (as discussed in our response to your question) are provided in Poole and Schoenholz, and we expect readers would follow this up. Unfortunately we find we are limited for space, otherwise we would include it. We rate these explanations quite highly, and agree with your concern.\\n\\n\\n***Response to specific issues:***\\n\\n''Missing letters, repeated words or even missing figures (Fig 5 and 7 in the appendix).\\\"\\n- we agree, and have tidied up the paper in this regard.\\n\\n''Authors need to explain what is Edge of Chaos and give some reference\\\"\\n- We have removed the phrase ''edge of chaos\\\", since it is equivalent to the set of (sigma_m, sigma_b) which critically initialise the network. This is clear from Definition 1, and claims 1,2,3, in section 3 of the paper. \\nFor further information, the term 'edge of chaos' refers to the marginal stability of the c*=1 fixed point of the correlation map c(c^{l-1}) -figure 1.c). If chi=1 at c*=1, the fixed point is marginally stable. This perspective is discussed in Poole et al, Schoenholz et al. and more recently in Hayou et al. We include this discussion in the appendix.\\n\\n\\n''Please, better explain figures, for example Figure 1: need to explain better what this plot is. Why are there both dotted and solid lines? What are the shaded regions: is that some confidence interval? 
But which one, and how many experiments were used for those plots?\\\"\\n\\n- The main text now describes the dotted lines (empirical means) and the solid line (theory), and the shaded region corresponding to the simulations falling within one standard deviation.\"}",
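The chi_1 = 1 criterion discussed in this exchange can be checked numerically with a generic mean-field recursion. The sketch below uses a standard tanh network in the style of Poole et al. and Schoenholz et al. (Gaussian averages by Monte Carlo); the paper derives the analogous maps for the binary-weight surrogates, so this is an illustration of the diagnostic, not the paper's equations.

```python
import numpy as np

def chi1_tanh(sigma_w, sigma_b, n_iter=200, n_mc=200_000, seed=0):
    # Iterate the length map  q <- sigma_w^2 E[tanh(sqrt(q) z)^2] + sigma_b^2
    # to its fixed point q*, then return chi_1 = sigma_w^2 E[tanh'(sqrt(q*) z)^2].
    # chi_1 = 1 marks the critical initialisation ("edge of chaos").
    z = np.random.default_rng(seed).standard_normal(n_mc)
    q = 1.0
    for _ in range(n_iter):
        q = sigma_w ** 2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b ** 2
    dphi = 1.0 - np.tanh(np.sqrt(q) * z) ** 2     # tanh'(h) = 1 - tanh(h)^2
    return sigma_w ** 2 * np.mean(dphi ** 2)

print(chi1_tanh(1.3, 0.2))   # > 1: chaotic; < 1: ordered; = 1: critical
```

Sweeping (sigma_w, sigma_b) for the root of chi_1 - 1 traces out the critical line that the paper's figures plot for its surrogates.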
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your comments and specific questions.\\n\\nWe have revised the paper considerably, in particular the Introduction and Sections 2 and 3. We have also included updated experiments. \\n\\nAll assumptions are clearly stated, and justified directly. For clarity, we have also reduced the number of equations.\\n\\nIn terms of contributions, we highlight these in the new introduction:\\n\\n- we have reconsidered the novelty of our Markov chain based derivation of *both* surrogates, since it is considerably simpler than previous works, and since it also allows for local-reparameteristion-trick (LRT) to be used for networks with stochastic weights *and* neurons. This is a new algorithm.\\n- Also, we make clear the limitations of previous works, eg. Soudry et al. did not backpropagate correctly, ignoring all 'variance terms' (the denominators in Eq 11).\\n\\n***Response to specific questions:***\\n''It might be worth double checking the equation ...\\\"\\n- thank you for picking this up, this is now Equation (4) in the updated paper. There is no S^0, the matrices S^l range from $l =1,2,3..,L$.\\n\\n''In eq. (7) (8) why use the definition symbol := ?\\\"\\n-we have removed this notation.\\n\\n''At the beginning of section 3.1, please indicate what \\u201cmatcal(M)\\u201d precisely refers to\\\"\\n- We agree the previous notation was poor. This is now in section 3.2, and we believe we have made this much clearer.\\n\\n''Just after eq. (9), please explain what Xi_{c*} means. \\\"\\n- This is now after Equation 10. We have cleared defined this notation, as well as using it in Definition 1.\\n\\n''Small typo:...\\\"\\n- thank you.\\n\\n''In section 5.2, why reducing the training set size to 25% of MNIST?\\\"\\n- Since we study trainability, we wish only for the neural network to fit a training set - we are unconcerned with overfitting. Our new experiments run on MNIST 50%, but our computational resources are limited.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"The paper provides an in-depth exploration of stochastic binary networks, continuous surrogates, and their training dynamics with some potentially actionable insights on how to initialize weights for best performance. This topic is relevant and the paper would have more impact if its structure, presentation and formalism could be improved. Overall it lacks clarity in the presentation of the results, the assumptions made are not always clearly stated and the split between previous work and original derivations should be improved.\\n\\nIn particular in section 2.1, the author should state what exactly the mean-field approximation is and at which step it is required (e.g. independence is assumed to apply the CLT but this is not clearly stated). Section 3 should also clearly state the assumptions made. That section just follows the \\u201cbackground\\u201d section where different works treating different cases are mentioned and it is important to restate here which cases this paper specifically considers. Aside from making assumptions clearer, it would be helpful to highlight the specific contributions of the paper so we can easily see the distinctions between straightforward adaptations of previous work and new contributions.\", \"specific_questions\": \"It might be worth double checking the equation between eq. (2) and eq. (3) , the boundary case (l=0) does not make sense to me, in particular what is S^0 ?.\\n\\nWhat does the term hat{p}(x^l) mean in the left hand side of eq.(3)? \\n\\nIn eq. (7) (8) why use the definition symbol := ?\\n\\nAt the beginning of section 3.1, please indicate what \\u201cmatcal(M)\\u201d precisely refers to. Using the term P(mathcal(M) = M_ij) does not make much sense if the intent is to use a continuous distribution for the means. \\n\\nJust after eq. (9), please explain what Xi_{c*} means.\", \"small_typo\": \"Eq. (10) is introduced as \\u201ccan be read from the vector equation 31\\u201d, what is eq. (31)?\\n\\nIn section 5.2, why reducing the training set size to 25% of MNIST?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper, the authors investigate the training dynamics of binary neural networks when using continuous surrogates for training. In particular, they study what properties the network should have at initialisation to best train. They do so via mean field approximations in the limit of very wide networks.\", \"the_authors_provide_concrete_advice\": \"the mean of the stochastic weights should be close to +/-1 at initialisation. Being able to give such advice to ML practitioners is of great value, but since this advice feels counter intuitive (naively, the mean of a binary +/-1 activation is typically the output of a tanh, and initialising this close to +/-1 means that gradient are going to be roughly 0, and can easily be exactly 0 in low precision computes). Right now, the justification of this initialisation is hard to find in the paper. This point would deserve a dedicated section, giving a summary of the argument, with references to more technical parts of the paper.\", \"the_presentation_should_also_be_improved_as_we_can_find_the_following_issues\": [\"Missing letters, repeated words or even missing figures (Fig 5 and 7 in the appendix).\", \"Authors need to explain what is Edge of Chaos and give some reference (for example C. G. Langton. Computation at the edge of chaos. Physica D, 42, 1990). This will make the paper more accessible to researchers less familiar with the theory but interested in its practical applications.\", \"Please, better explain figures, for example Figure 1: need to explain better what this plot is. Why are there both dotted and solid lines? What are the shaded regions: is that some confidence interval? But which one, and how many experiments were used for those plots?\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper addresses a very important and relevant topic of initialisation of weights of neural networks. It builds up on highly celebrated results of Poole, Schoenholz and others, using the language of mean field theory and an approach rooted in Dynamical Systems.\\nWhat the authors propose is an extension of the approach to other settings. The paper is very scientific and math-heavy. A good practice in such cases is to adhere to a format of a scientific mathematical paper and organise the material using Theorema, Lemmata, Propositions and Corollaries , Definitions and Proofs. Such language and framework exists for a reason - to structure the material and make the paper readable. The paper as is a stream of equations and discussion making it very unclear what the point is.\", \"in_order_for_this_paper_to_be_suitable_for_publication_the_reviewer_would_like_to_strongly_suggest\": [\"Organise the material in a way that would make it clear what is claimed, what is proven etc.\", \"Make more specific what the added value of the paper is.\", \"Also - for the contribution of the paper to be less incremental it would be valuable to add more formality to the original results, for example - Gaussian approximation is claimed without any sort of verification of assumptions of any version of CLT or Law of Large Numbers.\"]}"
]
} |
H1lMogrKDH | LEARNING DIFFICULT PERCEPTUAL TASKS WITH HODGKIN-HUXLEY NETWORKS | [
"Alan Lockett",
"Ankit Patel",
"Paul Pfaffinger"
] | This paper demonstrates that a computational neural network model using ion channel-based conductances to transmit information can solve standard computer vision datasets at near state-of-the-art performance. Although not fully biologically accurate, this model incorporates fundamental biophysical principles underlying the control of membrane potential and the processing of information by Ohmic ion channels. The key computational step employs Conductance-Weighted Averaging (CWA) in place of the traditional affine transformation, representing a fundamentally different computational principle.
Importantly, CWA-based networks are self-normalizing and range-limited. We also demonstrate for the first time that a network with excitatory and inhibitory neurons and nonnegative synapse strengths can successfully solve computer vision problems. Although CWA models do not yet surpass the current state-of-the-art in deep learning, the results are competitive on CIFAR-10. There remain avenues for improving these networks, e.g., by more closely modeling ion channel function and connectivity patterns of excitatory and inhibitory neurons found in the brain. | [
"conductance-weighted averaging",
"neural modeling",
"normalization methods"
] | Reject | https://openreview.net/pdf?id=H1lMogrKDH | https://openreview.net/forum?id=H1lMogrKDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KXxJ4hX_lo",
"HJePAd92jH",
"Syx2FO9nor",
"SkxFHOq3iB",
"HJgyLhgI9r",
"BkgeFBvRYS",
"BJl5fg_6FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750563,
1573853391275,
1573853315832,
1573853249102,
1572371527114,
1571874167764,
1571811346142
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2501/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2501/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2501/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2501/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2501/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2501/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper studies non-spiking Hudgkin-Huxley models and shows that under few simplifying assumptions the model can be trained using conventional backpropagation to yield accuracies almost comparable to state-of-the-art neural networks. Overall, the reviewers found the paper well-written, and the idea somewhat interesting, but criticized the experimental evaluation and potential low impact and interest to the community. While the method itself is sound, the overall assessment of the paper is somewhat below what's expected from papers accepted to ICLR, and I\\u2019m thus recommending rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"(c) Presented results appear to be important from the point of view of someone who wants to transfer insights from biology into the field of deep learning. But there might be an extent to what is achievable given a simple goal of optimizing a supervised accuracy of an artificial neural network trained using gradient descent (especially considering limitations imposed by hardware). I am optimistic about the prospect of knowledge transfer between these disciplines, but it is my feeling that the study of temporal dynamics, emergent spatiotemporal encodings, \\\"training\\\" process of a biological neural system, etc. have potentially much more to offer to the field of machine learning. These questions do appear to be incredibly complex though and the steady-state analysis is definitely a prerequisite.\", \"response\": \"Yes, the last point seems to be key. A better understanding of the brain should lead to better machine learning, but the path to get there may not be direct. At present deep learning seems to outstrip neuroscience in terms of generating intelligent behavior, and the question is why? Answering this question requires putting neuroscience into a setting where it can be compared directly with machine learning, and that is why we have pursued the question of performance on benchmarks: it should raise questions for neuroscientists that help to address the gaps that exist at present. The identification and understanding of these gaps should lead to gains in the field of neuroscience, which could then benefit machine learning as well. The present study is a somewhat humble step in this direction, but it does offer some novel components, including as far as we are aware the first network with partitioning among excitatory and inhibitory neurons that performs acceptably on benchmark tasks. We are working on temporal dynamics, but there the modeling and computation become quite a bit more complex, which is why this present research is a necessary first step.\"}",
"{\"title\": \"Responses to Reviewer 2\", \"comment\": \"They compare nothing with other SNN type of model on other truly difficult perceptual tasks.\", \"response\": \"As a first point, CWA is not a competitor to SNN models; at present, we are not aware of results in spiking neural networks that incorporate conductance averaging, which does happen in biological neurons, other than highly complex simulations in specialized programs such as NEURON and GENESIS, which have not been easy to optimize or to use to solve high-dimensional perceptual tasks. If fact, we do see a clear path by which CWA could be combined with spiking networks, or used in neuromorphic chip design in future work. CWA as formulated in this paper is static and not temporal, as can be seen from the implementing code we have added to the supplementary materials. However, the paths to add temporal features to the network is clear from the Hodgkin and Huxley model and will be a focus of our future work.\\n \\nWe did not compare with SNNs because there is no simple, controlled comparison there. Biological neurons operate on spike trains, and they also perform conductance averaging. There simply is no conflict, and no comparison would be fair without also including spiking networks that incorporate conductance averaging, which we have not yet explored. \\n \\nThere are certainly many opportunities to include spiking with this CWA work as future work, but the point here is to merely demonstrate that conductance averaging works competitively.\"}",
"{\"title\": \"Responses to Reviewer 1\", \"comment\": \"Essentially, the CWA changes the definition of a layer in a neural net. Do authors see a path from \\u201cCWA works\\u201d to \\u201cCWA works better than affine?\\u201d.\", \"response\": \"Although an interesting question, this has not been our main focus and we see no such path at this time, though there may be one. One possibility is to consider more diverse patterns of connectivity, using the brain as a guide. Inhibitory and excitatory connections are not equally treated in the brain, but appear in specific patterns that we have not replicated in this work. Furthermore, networks in the brain generally have more connections, which may make the CWA normalization less sensitive to random variations in the input. Also, if neuroscientists are going to create more realistic hybrid biological neuronal models that do useful things they will have to incorporate CWA principles to be functioning like real neurons; thus CWA is intended as an example for use by neuroscientists. CWA may also prove useful for physical implementation in neuromorphic chips as well, since it can be reduced to a basic circuit without a need for an ALU.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I wanted to first thank the authors for their work on connecting biologically inspired CWA to deep learning. The paper overall reads nicely.\\n\\nThe authors make the assumption that reviewers are fully aware of biological terminology related to CWA, however, this may not be the case. It took a few hours of searching to be able to find definitions that could explain the paper better. I suggest the authors add these definitions explicitly to their paper (perhaps in supplementary), also an overview figure would go a long way. I enjoyed reading the paper, and before making a final decision (hence currently weak reject), I would like to know the answer to the following questions:\\n\\n1. Have the authors considered the computational burden of Equation 3? In short, it seems that there are two summations (one for building the probability space over measure h) and one right before e_j. This is somewhat important, if this type of neural network is presented as a competitor to affine mapped activations. \\n\\n2. It would be nice to have some proof regarding universal approximation capabilities of CWA. In my opinion it is, but a proof would be nice (however redundant or trivial - simply use supplementary). \\n\\n3. I was a bit confused to see CWA+BN in the Table 1. In introduction, authors write \\u201cBut CWA networks are by definition normalized and range-limited. Therefore, one can conjecture that CWA plays a normalization role in biological neural networks.\\u201d Therefore, I was expecting CWA+BN to work similarly as CWA for CIFAR10. Please elaborate further on this note. \\n\\n4. Essentially, the CWA changes the definition of a layer in a neural net. Do authors see a path from \\u201cCWA works\\u201d to \\u201cCWA works better than affine?\\u201d. If so, please elaborate. Specifically, I am asking this question \\u201cWhy should/must we stop using affine maps in favor of CWA?\\u201d. Now this may or may not be the claim of the paper. It\\u2019s ok if it is not; still showing competitive performance is somewhat acceptable, but certainly further insight would make the paper stronger.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper focuses on non-spiking Hudghkin-Huxley model, which is different from existing works on spiking neural network-based Hudghkin-Huxley model.\\n\\nThere are many ways of using neuron firing model as unit to construct neural networks. They choose a specific way (mentioned above). I think the most interesting part would be the CWA method which achieves the normalization. \\n\\nThey have a fair list of literature in spiking neural networks. But I find the way they illustrate the difference between their model and other models is insufficient. They should focus on the model-wise difference, instead of focusing on whether it\\u2019s applied to MNIST or not or what\\u2019s the accuracy. \\n\\nThey don\\u2019t include any other SNN model in the paper for experimental comparison. They also mention a few SNN works that work well on MNIST in the related work section which actually have better accuracies than their model. So it is inappropriate to say this proposed method is a state-of-art neuro-inspired method, Because others perform well on MNIST as well, and their limited experiments only investigate MNIST and CIFAR-10, which are less interesting generally. \\n\\nCWA cannot outperform Affine+BN. \\n\\nOverall, the idea is somehow interesting, but the experiments are weak. Applying the method to MNIST and CIFAR-10 is far from being called either \\u201cinteresting computer vision applications\\u201d or \\u201cdifficult perceptual tasks\\u201d. They only use perceptual in the title, but the applications are MNIST and CIFAR-10. It feels like they want to learn something big, but they only focus on benchmark datasets. \\n\\nThey compare nothing with other SNN type of model on other truly difficult perceptual tasks.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a novel neural network architecture inspired by the analysis of a steady-state solution of the Hodgkin-Huxley model. Using a few simplifying assumptions, the authors use conventional backpropagation to train DNN- and CNN-based models and demonstrate that their accuracies are not much lower than the state-of-the-art results.\\n\\nThe paper is well-written, sufficiently detailed and understandable. Derived self-normalizing Conductance-Weighted Averaging (CWA) mechanism is interesting in itself, especially contrasting CWA results with those obtained for the non-Batch-Normalized networks. It is also inspiring to see that this model can be derived based on a relatively accurate biological neuron model.\\n\\nMy main question is actually related to the potential impact of this study. I am curious about the implications and the ways in which these results can inspire other researchers.\\n\\nAfter reading the paper, I got an impression that:\\n\\n(a) From the point of view of a machine learning practitioner, these results may not be particularly impressive. They do hint at the importance of self-normalization though, which could potentially be interesting to explore further.\\n\\n(b) From the point of view of a neuroscientist, the proposed model might be too simplistic. It is my understanding, that neural systems (even at \\\"rest\\\") are inherently non-equilibrium (and I assume the presence of simple feedback loops could also dramatically change the stead-state of the system). Is it possible that something similar to this \\\"steady-state inference\\\" mode could actually take place in real biological neural systems?\\n\\n(c) Presented results appear to be important from the point of view of someone who wants to transfer insights from biology into the field of deep learning. But there might be an extent to what is achievable given a simple goal of optimizing a supervised accuracy of an artificial neural network trained using gradient descent (especially considering limitations imposed by hardware). I am optimistic about the prospect of knowledge transfer between these disciplines, but it is my feeling that the study of temporal dynamics, emergent spatiotemporal encodings, \\\"training\\\" process of a biological neural system, etc. have potentially much more to offer to the field of machine learning. These questions do appear to be incredibly complex though and the steady-state analysis is definitely a prerequisite.\"}"
]
} |
SkxMjxHYPS | Filter redistribution templates for iteration-less convolutional model reduction | [
"Ramon Izquierdo Cordova",
"Walterio Mayol Cuevas"
] | Automatic neural network discovery methods face an enormous challenge caused by the size of the search space. A common practice is to split this space at different levels and to explore only a part of it. Neural architecture search methods look for how to combine a subset of layers, which are the most promising, to create an architecture while keeping a predefined number of filters in each layer. On the other hand, pruning techniques take a well-known architecture and look for the appropriate number of filters per layer. In both cases the exploration is made iteratively, training models several times during the search. Inspired by the advantages of the two previous approaches, we propose a fast option to find models with improved characteristics. We apply a small set of templates, which are considered promising, to redistribute the number of filters in an already existing neural network. When compared to the initial base models, we found that the resulting architectures, trained from scratch, surpass the original accuracy even after being reduced to fit the same amount of resources. | [
"Model reduction",
"Pruning",
"filter distribution"
] | Reject | https://openreview.net/pdf?id=SkxMjxHYPS | https://openreview.net/forum?id=SkxMjxHYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"LO6qrJtyCZ",
"B1gK8nS3iH",
"rkxUzpNnjr",
"BJxvypEhjH",
"SkeCnhVhsr",
"B1gb9nE3sr",
"B1x3Sh43sS",
"SJxf1Z_85B",
"BklbdaJLqH",
"BkxV6MbAYH",
"SJeRVvEU_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750535,
1573833809433,
1573829902310,
1573829854891,
1573829814314,
1573829768859,
1573829699879,
1572401370291,
1572367721498,
1571848892020,
1570289462431
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2500/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2500/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2500/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2500/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2500/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2500/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper examines how different distributions of the layer-wise number of CNN filters, as partitioned into a set of fixed templates, impacts the performance of various baseline deep architectures. Testing is conducting from the viewpoint of balancing accuracy with various resource metrics such as number of parameters, memory footprint, etc.\\n\\nIn the end, reviewer scores were partitioned as two accepts and two rejects. However, the actual comments indicate that both nominal accept reviewers expressed borderline opinions regarding this work (e.g., one preferred a score of 4 or 5 if available, while the other explicitly stated that the paper was borderline acceptance-worthy). Consequently in aggregate there was no strong support for acceptance and non-dismissable sentiment towards rejection.\\n\\nFor example, consistent with reviewer comments, a primary concern with this paper is that the novelty and technical contribution is rather limited, and hence, to warrant acceptance the empirical component should be especially compelling. However, all the experiments are limited to cifar10/cifar100 data, with the exception of a couple extra tests on tiny ImageNet added after the rebuttal. But these latter experiments are not so convincing since the base architecture has the best accuracy on VGG, and only on a single MobileNet test do we actually see clear-cut improvement. Moreover, these new results appear to be based on just a single trial per data set (this important detail is unclear), and judging from Figure 2 of the revision, MobileNet results on cifar data can have very high variance blurring the distinction between methods. It is therefore hard to draw firm conclusions at this point, and these two additional tiny ImageNet tests notwithstanding, we don't really know how to differentiate phenomena that are intrinsic to cifar data from other potentially relevant factors.\\n\\nOverall then, my view is that far more testing with different data types is warranted to strengthen the conclusions of this paper and compensate for the modest technical contribution. Note also that training with all of these different filter templates is likely no less computationally expensive than some state-of-the-art pruning or related compression methods, and therefore it would be worth comparing head-to-head with such approaches. This is especially true given that in many scenarios, test-time computational resources are more critical than marginal differences in training time, etc.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Issues Addressed\", \"comment\": \"References to prior work and architectures now give a better motivation and background to this work. Having results averaged over multiple runs now gives me more confidence in the results, but these should be included in the tables, not just the plots. For the plots, shaded error envelopes would make determining the overlap in performance easier to distinguish (as compared to error bars) - particularly with MobileNet. The inclusion of TinyImagenet also improves the empirical results - extending results to include the other configurations should be done for the final version though. As the authors have largely addressed my concerns I will update to accept.\"}",
"{\"title\": \"Re: All reviewers\", \"comment\": \"We thank our reviewers for their valuable comments. We have thoroughly improved the document based on your guidance. We believe we have addressed all the issues raised in our response below, including the addition of further evaluation to demonstrate the applicability and generalisation of our approach. We address each issue individually.\"}",
"{\"title\": \"Re: Official Blind Review #4\", \"comment\": \"We provide additional comparative results in Tables 1 and 2 for tiny-imagenet dataset (thank you for the suggestion) which is a 200 classes subset from Imagenet. The results confirm some models benefit by redistributing their filters with our templates.\\n\\nOur aim is to provide an insight about the benefits that a redistribution of filters in a model could produce but further exploration should be done. The architecture-task pair is likely to be unique with respect to the best template that works for it. While this may sound intractable, the small number of templates here proposed already offer a fast and iteration-less alternative to architecture search or optimisation which can deliver better performance straight away.\\n\\nIn sections 1 and 3 we mention two of the reasons to choose some of our templates. But to clarify, the first if the original Neocognitron architecture (uniform template) and the second is the patterns observed from some papers, particularly MorphNet where at least three behaviours are present in different blocks from ResNet101 model: 1) filters increase in deeper layers, 2) filters agglomerate in the centre and 3) filters are reduced in the centre of the block. Smallify shows also pattern number 2 for a vgg model. \\nWe already clarified this motivation and inspiration from these sources.\\n\\nRegarding the number of filters within Inception modules, we changed the distribution of channels by keeping a constant number of filters for each filter size, except Reverse-Base template which have the original distribution inside each module but assigned inversely, that is the first module has the distribution of the last one and vice versa. This was an unexpected behavior as we are disturbing the model design in a key aspect, as the reviewer mentioned, and still the model performance remains similar in some datasets.\\nWe already clarified the existing description of how we distributed channel sizes in section 3.\", \"we_present_our_work_as_both_insight_and_as_a_methodology_in_the_sense_that_an_improvement_in_the_performance_of_an_existing_architecture_can_be_made_by_following_a_set_of_straight_forward_steps\": \"take a base mode, apply the template and reduce the model proportionally according to a budget restriction before training the model again. Our main aim is to highlight the importance of filter distributions, provide an additional design methodology and motivate further exploration in this space.\\n\\n\\nWe have changed the color palette and provide labels for models and datasets in each plot.\"}",
"{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"We are aware that, while our templates are heuristically chosen, they open avenues for insight for designing models and further work on finding what other templates to be used. Our aim is to show the benefits that a one-shot redistribution of filters in a model could produce. The architecture-task pair is likely to be unique with respect to the best template that works for it. While this may sound intractable, the small number of templates here proposed already offer a fast and iteration-less alternative to architecture search or optimisation which can deliver better performance straight away.\\n\\nIn sections 1 and 3 we expanded on the reasons to choose some of our templates. But to clarify, the first is the original Neocognitron architecture (uniform template) and the second is the patterns observed from some papers, particularly MorphNet where at least three behaviours are present in different blocks from ResNet101 model: 1) filters increase in deeper layers, 2) filters agglomerate in the centre and 3) filters are reduced in the centre of the block. Smallify shows also pattern number 2 for a vgg model. \\nWe already clarified this motivation and inspiration from these sources.\\n\\nWe have now included error bars, changed the color palette and provide labels for models and datasets in each plot.\"}",
"{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"In section \\u201cTemplate effect with similar resources\\u201d we provide several experiments consisting in changing proportionally the width of every model after applying a template, in order to increase and decrease its parameters/FLOPs.\\n\\nOur results are consistent with the idea that a model with more parameters (and/or more FLOPs) within the same architecture family gives better performance but what we also show is that using the same parameters with two different templates in the same model delivers different accuracy.\\n\\nAs per R4 suggestion we include results for TinyImagenet that strengthen the approach\\u2019s applicability.\\n\\nWe have thoroughly changed description and wording.\"}",
"{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"As R1 points out, exploring over all possible templates is intractable, even when many methods (\\u201cNeural Architecture Search: A Survey\\u201d) explore a reduced space, the amount of computational resources is highly costly. Our assumption is that there exists a small set of filter distributions that we called templates, that are distinct and easy to apply and able to promote improvements to existing deep network models. We have extended this explanation about the justification of using templates in section 3.\\n\\nIn figure 1, we show the distribution of filters for the four architectures tested in the experiments. All of them follow the same pattern of increasing filters in deeper layers. We highlight that there is no universal distribution of filters that works well for every model-task pair. By using the proposed templates we offer a fast and iteration-less alternative to architecture search which can deliver better performance straight away.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper presents a simple methodological study on the effect of the distribution of convolutional filters on the accuracy of deep convolutional networks on the CIFAR 10 and CIFAR 100 data sets. There are five different kind of distributions studied: constant number of filters, monotonically increasing and decreasing number of filters and convex/concave with a local extremum at the layer in the middle. For these distributions, the total number of filters is varied to study the trade-off between running-time vs. accuracy, memory vs. accuracy and parameter count vs. accuracy.\", \"Although the paper is purely experimental without any particular theoretical considerations, it presents a few surprising observations defying conventional wisdom:\", \"The standard method of increasing the number of filters as the number of convolutional nodes is increasing is not the most optimal strategy in most cases.\", \"The optimal distribution of channels is highly dependent on the network architecture.\", \"Some network architectures are highly stable with respect to the distribution of channels, while others are very sensitive.\", \"Given that this paper is easy to read and presents interesting insights for the design of convolutional network architectures and challenges mainstream views, I would consider it to be a generally valuable contribution, at least I enjoyed reading it.\", \"Despite the intriguing nature of this paper, there are several weaknesses which make me less enthusiastic about the quality of the paper:\", \"The experiments are done only on CIFAR-10 and CIFAR-100. These benchmarks are somewhat special. It would be useful to see whether the results also hold for more realistic vision benchmarks. Even if running all the experiments would be costly, I think that at least a small selection should be reproduced on OpenImages or MS-Coco or other more realistic benchmarks to validate the findings of this paper.\", \"It would be interesting to see whether starting from the best channel distributions, applying MorphNet would end up with different distributions. In general: whether MorphNet would end up with similar distributions automatically.\", \"The paper does not clarify how the channel sizes for Inception were distributed, since proper balancing of the 1x1 and more spread out convolutions is a key part of that architecture. This is not clarified in this paper.\", \"The grammar of the paper is poor, even the abstract is hard to read and interpret.\", \"The paper presents itself as a methodology for automatically generating the optimal number of channels, while it is more of a one-off experiment and observation than a general purpose method.\"], \"another_small_technical_detail_regarding_the_choice_of_colors_in_the_diagrams\": \"the baseline distribution and constant distribution are very hard to distinguish. This is especially critical because these are the two best distributions on average. Also the diagrams could benefit from more detailed captions.\\n\\nThe paper presents interesting, valuable experimental findings, but it is not extremely exciting theoretically. Also its practical execution is somewhat lacking. 
If it contained at least partial results on more realistic data sets, I would vote for strong accept, but in its current form, I find it borderline acceptance-worthy.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper investigates the impact of several predefined filter templates, including uniform template, reverse template, quadratic template, and negative quadratic template, to the performance of different neural networks such as VGG-19, ResNet-50, Inception Network, and MobileNet.\\n\\nThis paper uses some templates. However, it is impossible to enumerate all possible templates and hence some good templates may not be included and studied in this paper, which makes the empirical studies in this paper less useful.\\n\\nIn experiments, authors need to compare with the results of neural architecture search. Based on such comparison, we can see whether the templates used in this paper are reasonable.\\n\\nIn experiments, it seems that different templates may have their own characteristics and their usefulness also depends on the neural network used. So it is not easy to make general conclusion.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper changes the distribution of number of filters (called \\u201cfilter distribution template\\u201d, or \\\"template\\\") at each layer in modern deep Conv models (e.g., VGG, Inception, ResNet) and discover that the model with unconventional (e.g., reverse base, quadratic) template sometimes outperform conventional one (when f_{l+1} = 2 f_l).\\n\\nOne big issue of this paper is that it didn't mention any theoretical reason why the total number of filters is the decisive factor for the test performance. It is not justified at all and the empirical result is also mixed (See Table 1). This brings about the question mark of the motivation of this paper from the first place. In contrast, empirically people have observed that a model with more parameters (and/or more FLOPs) within the same architecture family gives better performance. This is not directly related to the total number of filters, which the main topic in the paper. \\n\\nAs a result, it is not clear whether a gain of the performance is simply due to the change of #parameters/FLOPs or due to the fact that different distribution templates are used. As shown in Table. 2, there is huge variation in terms of #parameters and FLOPs between different versions of the same network, making the comparison fairly difficult and inconclusive. I would strongly suggest the authors to compare the performance between different templates when keeping #parameters and/or FLOPs fixed. This should be easy to do by computing how many filters are needed per layer to reach the desired #parameters/FLOPs, while keeping the desired distribution. \\n\\nAlso, to make a strong conclusion, they paper should also report ImageNet results trained with different templates. \\n\\nOverall, the paper, in its current form, is not ready for publication and I would vote for rejection. \\n\\n=========Post Rebuttal=======\\nI reread the paper after authors revision and rebuttal. Thanks authors for the hard work. \\n\\nIndeed the authors have compared different templates when the number of parameters remain approximately the same (in both the original version and the new revision). I overlooked it and apologize. \\n\\nHowever, after rereading the paper, the conclusion is still not that clear and I didn't see a clear take home message about which filter template is better than the other. In the section \\\"Template Effects with similar resources\\\", it seems that uniform template patterns is the best for many models, which somehow is negative results given the motivation of this paper. \\n\\nIn addition, when comparing Fig. 3 in the original version versus that (Fig. 3) in the revision, some curves have changed their shape drastically (e.g., MobileNet on CIFAR10 and CIFAR100) and uniform template shows stronger dominance. This worries me a bit that the experiments might still be preliminary and the paper is yet not ready for publication. \\n\\n I will keep the score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The search/design space of neural network architectures is vast, so researchers tend to use simple heuristics to guide their designs; alternatively neural architecture search methods may minimise heuristics in order to remove bias within the search. The authors propose applying a few simple heuristics in the form of \\\"templates\\\" for the number of convolutional filters across the different layers within an architecture. Apart from reversing the filter distribution, which starts a network with a large amount of filters and then reduces them (given how the filter number is generally increased with depth and reduction in spatial resolution), the authors also propose using the same number of filters per layer (\\\"uniform\\\"), as well as \\\"quadratic\\\" and \\\"negative quadratic\\\" distributions. There does not seem to be any particular motivation for these patterns, but this is at least better than poor justifications.\\n\\nThe results include 4 common CNN architectures and experiments on CIFAR-10 and CIFAR-100, which is acceptable, but only single results are shown, which makes it hard to judge the significance of the performance of the redistributed networks, especially given that there seems to be no particular trend in the performance of any of the templates. The accuracy vs. parameter count results are interesting, as the default architecture does worse a good amount of the time. Unfortunately memory footprint results are mixed, and the default architectures tend to perform much better with respect to inference time. Most of the figures are hard to read and not labelled completely (e.g. relying on the caption to dictate which column corresponds to which architecture), so these should be reworked.\\n\\nUltimately, the results are largely empirical, and the work would benefit from a better exploration for the consequences of using these templates, and any sort of rule or correlation that links these to successful architectures - at the moment it is only a somewhat interesting observation. Combined with the lack of multiple runs to provide more solid evidence for the empirical findings, I would reject this work.\"}"
]
} |
HkxZigSYwS | Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence | [
"Howard Heaton",
"Xiaohan Chen",
"Zhangyang Wang",
"Wotao Yin"
] | Many applications require quickly and repeatedly solving a certain type of optimization problem, each time with new (but similar) data. However, state of the art general-purpose optimization methods may converge too slowly for real-time use. This shortcoming is addressed by “learning to optimize” (L2O) schemes, which construct neural networks from parameterized forms of the update operations of general-purpose methods. Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data. This results in task-specific algorithms (e.g., LISTA, ALISTA, and D-LADMM) that can converge order(s) of magnitude faster than general-purpose counterparts. We provide the first general L2O convergence theory by wrapping all L2O schemes for convex optimization within a single framework. Existing L2O schemes form special cases, and we give a practical guide for applying our L2O framework to other problems. Using safeguarding, our theory proves, as the number of network layers increases, the distance between inferences and the solution set goes to zero, i.e., each cluster point is a solution. Our numerical examples demonstrate the efficacy of our approach for both existing and new L2O methods. | [
"L2O",
"learn to optimize",
"fixed point",
"machine learning",
"neural network",
"ADMM",
"LADMM",
"ALISTA",
"D-LADMM"
] | Reject | https://openreview.net/pdf?id=HkxZigSYwS | https://openreview.net/forum?id=HkxZigSYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AbMzpa7823",
"rygYsus3jB",
"BJenzHsnoB",
"SJxz1go2iB",
"HJldWinqsB",
"rJeTAw1ciS",
"Skg73b6OoB",
"S1e_EmxLjH",
"B1l7OLzF5B",
"SylIK-A0FS",
"H1xdZV-RKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750506,
1573857440934,
1573856532392,
1573855194223,
1573731071746,
1573677013444,
1573601706991,
1573417775828,
1572574826642,
1571901821753,
1571849215576
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2499/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2499/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2499/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2499/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2499/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2499/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper gave a general L2O convergence theory called Learned Safeguarded KM (LSKM). The reviewers found flaws both in theory and in experiments. While all the reviewers have read the authors' rebuttal and gave detailed replies, they all agree to reject this paper. I agree also.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 2 Rebuttal Feedback\", \"comment\": \"Thanks again for your feedback. We address each comment in turn.\\n\\n1) We acknowledge your point and will update the plots to use mean relative error, as suggested. \\n\\n2) Please see our recently posted response to Reviewer 2. In short, \\\"This is a THEORY paper whose contributions justify themselves mathematically.\\\"\\n\\n3) We believe the proposed framework is of practical relevance, particularly for its strong guarantee that extreme outliers will not occur (again see our response to Reviewer 2). It is NOT our aim to improve the performance of an L2O method in the average case. Rather, we seek to allow the L2O method to perform in its desired manner, except for situations where it diverges (e.g., as illustrated in Figure 1b).\\n\\n4) Indeed, our main contribution is algorithmic and our experiments do support our theoretical intuitions. Your comment \\\" However, it seems that you have already implemented everything needed to expand the experimental results...\\\" should be considered praise rather than a shortcoming. Indeed, it is our aim to make these algorithmic results easily accessible and able to be applied by practitioners.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your constructive feedback and insightful comments concerning the connection between our work and other existing papers. We believe our responses below address your concerns and hope, upon reading these, you will increase your score.\", \"a_main_point_to_start_with\": \"it is a theory paper, and perhaps the first set of convergence/robustness theories ever offered for the important field of learning-to-optimize. It is NOT \\u201cyet another\\u201d application-driven work that turns algorithms into deep architectures (which we are also very familiar with).\\n\\n1) To be brief: \\n\\n1.a) \\u201cThe idea of reimplementing an iterative algorithm in a deep architecture is not new\\u201d - we agree; that is not the point of our paper either.\\n\\n1.b) The key point of this paper is NOT just safeguarding; but rather, safeguarding applied to learning-to-optimize for the first time, and a unified convergence theory for (convex) learning-to-optimize, also for the first time. Neither of the mentioned safeguarding papers, nor any other literature we\\u2019re aware of for general-purpose safeguarding, is related to learned optimizers.\\n\\nSummarizing 1.a and 1.b, we respectfully disagree that the above comments shall be counted as weakness against our work.\", \"more_detailed_explanation\": \"We agree that providing experiments on real-world data (rather than synthetic) would provide a wonderful illustration. However, we disagree with the statement that synthetic results are insufficient to obtain our goal. To clarify, our work set out to identify a framework in which many L2O schemes may be incorporated into to provide theoretical guarantees of their behavior. We did not present any special L2O scheme to compete with state-of-the-art learn to optimize works. Indeed, our first two examples use the L2O schemes from existing works (one of which was published in ICLR) to illustrate how they can be incorporated and what their resulting behavioral differences are.\\n\\nIn situations like MRI (as your reference [3]), being able to ensure that the network does not diverge drastically would be quite important in a clinical setting (which appears to be possible given the literature on fooling neural networks). It would also be bad to have a patient\\u2019s MRI come out with artifacts from the network that were unfamiliar to a doctor and subsequently misinterpreted as something malignant. However, in [3] no theoretical guarantees are provided, which implies that such situations are possible. In contrast, if the Deep ADMM-Net were used within our framework, then the outlier cases could potentially be prevented or, at the least, identified by having a flag output from the network if too many safeguarding activations occur. \\n\\nWe respectfully argue that this paper\\u2019s main value is not discounted, even though there were few \\u201creal data\\u201d experiments.\\n\\n3) We did make some typos. Those will be updated appropriately.\"}",
"{\"title\": \"Rebuttal Feedback Response\", \"comment\": \"Thank you for your timely reply. We again believe the main discrepancies arise from fundamental misinterpretations our paper. We respond to each of your 5 points in the number they were given.\\n\\n1. We disagree. First of all, just to clarify, an L2O operator does not own $\\\\mu_k$. The sequence $\\\\{mu_k\\\\}$ is a part of our Algorithm 2, LSKM, which \\u201cwraps around\\u201d a given L2O operator. Secondly, we reassure that Theorem 3.1 itself is technically correct. \\n\\nNow, to address your question, Theorem 3.1 proves convergence under Assumptions 1-3, not under the assumption $\\\\mu_k\\\\rightarrow 0$. Theorem 3.1 covers the case: $\\\\mu_k$ does NOT converge to 0. To see this, notice that, by the \\u201cif \\u2026 then ... else ...\\u201d on Line 6 of Algorithm 1 and Assumption 3, the case that \\u201cmu_k does NOT converge to 0\\u201d can occur ONLY when the L2O operator is applied finitely many times, hence the classic operator on Line 9 of Algorithm 1 (which comes with convergence guarantees) is applied infinitely many times and makes $\\\\{x^k\\\\}$ converge.\\n\\nFor example, consider the identity operator as a dummy L2O operator. This is a bad L2O operator. It will fail the \\u201cif\\u201d condition Line 6, thus causing Line 9 (the classic operator) to run infinitely many times and converge. \\n\\nTherefore, Theorem 3.1 covers the case in question.\\n\\n2. To your questions, no claim can be made here in general. The number of times the safeguarding would be activated depends on the parameters chosen for $\\\\mu_k$. In particular, even if the L2O algorithm converges by itself, the safeguarding condition still fails either finitely many or infinitely many times, since the condition asks for sufficient progress (a point realized by the choices in Table 1). After all, convergence is an asymptotic concept, and a convergence rate can be arbitrarily slow. \\n \\n3) Safeguarding is used to guarantee convergence with any input L2O algorithm. It is a general convergence theory of the safeguarding procedures. This may be used for any L2O algorithm, including those created solely through heuristics.\\n\\n4) Please see Figure 1b. With an appropriate safeguarding choice, LSKM is equally good or better than an unsafeguarded L2O.\\n\\n5) \\u201cNo guard\\u201d is used to mean that $\\\\mu_k = \\\\infty$ is used the in the LSKM algorithm, i.e., the original $T_{L2O}$ update operation is always used at each step.\"}",
"{\"title\": \"Rebuttal feedback\", \"comment\": \"Thank you for your point-to-point answer. I have some comments and questions as follow:\", \"re\": \"The corresponding L2O algorithm may not necessarily have any theoretical guarantees of convergence, let alone a convergence rate.\\n\\n4. In Section~5.2, dose the L2O scheme perform better than the LSKM method?\\n\\n5. There is a 'No guard' method in Table~3, which is the LSKM method without safeguarding. What is the difference between the 'No guard' method and the L2O scheme?\"}",
"{\"title\": \"Rebuttal feedback\", \"comment\": [\"Thanks for the update folks. I can see some improvements and you've addressed some of my concerns. Some comments:\", \"As I mentioned in my review, the best would be to use the mean relative error: E_{d~D} [(f_d(x)-f_d^*) / f_d^*], rather than just the numerator (absolute error) as you've done in the revised paper. Using the mean relative error allows you to fix the y-axis range so that one can interpret performance on seen vs unseen. The mean relative error is easy to interpret in seen vs unseen, whereas mean absolute error is not, as you've correctly explained: \\\"because the underlying distributions in 1a and 1b are different, the resulting average objective function values will be different\\\". The reader should be able to understand how much worse the algorithm is on unseen data compared to what it was trained on.\", \"I agree with Reviewer #2 on the breadth of the experiments: I recommend comparing against some other methods and on more datasets. I don't see a reply to Reviewer #2's review.\"], \"both_of_these_comments_are_trying_to_get_an_answer_to_the_following_question\": \"is the proposed framework of practical relevance? Or is it a comprehensive theoretical framework of little use?\\n\\nOne could argue that the paper's main contribution is theoretical/algorithmic and the experiments serve only to confirm the intuition behind the theory and so they need not be super extensive. However, it seems that you have already implemented everything needed to expand the experimental results to other learning-based algorithms and datasets. That being said, I don't expect to modify my Rating at the moment.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your careful review and comments. To our best understanding, a few comments seem to arise from misinterpreting our paper\\u2019s content: we apologize if our manuscript has not been more clear and might have caused those confusions. Respectfully, we feel we have to disagree with the statement that \\u201cthis paper should be rejected because it does not properly answer the problem it is trying to address\\u201d.\\n\\nWe reply to your remarks below in the order that they were given. We sincerely hope they will clarify your concerns and convince you to increase the score.\", \"your_comment\": \"\\u201c(2) Theorem~3.1 is only related to the safeguarding procedure and the convergence of T. If we replace T_{L2O} by other operators, Theorem~3.1 still holds. In my view, this work provides a practical technique to guarantee the convergence of L2O algorithms rather than a general L2O convergence theory.\\u201d\", \"answer\": \"True, but in our opinion, it is a blessing, not a curse. To see this, it is important to notice that, while all L2O methods aim for fast optimization, their operators T_{L2O} are very diverse, having many different forms and properties. A safeguard, therefore, must be robust. It is precisely our intention to make Theorem 3.1 to fit any operator. Thus, the dependence of Theorem 3.1 only upon T and \\\\mu_k reveals its generality rather than its limitations.\\n\\nTheorem 3.1 is especially suitable for L2O operators, whose iterates are often non-monotonic. Therefore, our safeguard was especially designed to be robust to this behavior. When L2O works well on a problem, the safeguard will not intervene unnecessarily.\\n\\nWe agree with you that this work provides a practical technique/framework for guaranteeing convergence of L2O algorithms. And, an equally general result about general L2O convergence (without safeguarding) may not possibly exist since any such result would be highly dependent upon the distribution of data and be necessarily prescribed in a probabilistic manner (thus unable to well-handle outliers, which may be of utmost importance in medical applications). Thus, we wish to reemphasize the point that safeguarding gives a certain guarantee so one can safely apply L2O to data that are possibly unseen.\", \"below_is_an_itemized_response_to_your_other_comments\": [\"Noted. We have removed all material related to firm nonexpansiveness as the essential property we use later in the paper is averagedness, which is more general.\", \"The corresponding L2O algorithm may not necessarily have any theoretical guarantees of convergence, let alone a convergence rate.\", \"-This is a fair point. However, we believe our synthetic data gives greater control over the \\u201cseen\\u201d and \\u201cunseen\\u201d distributions. Please see 2) in our response to Reviewer 2 below (that will be posted soon).\"]}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We appreciate your thoughtful and thorough remarks, and for appreciating our work\\u2019s merits. Each concern you listed is clarified in our revised submission and point-by-point responses are provided below. We believe our responses below address your concerns and hope, upon reading these, you will increase your score.\\n\\nFirst we address the experimental evaluation remarks.\\n\\n1) We have updated the plots to instead use the difference in expected error, i.e., \\\\E[ f_d(x^K) - f*_d ]. In the references you listed, we found various values used for the y-axis (including loss value in several cases). In addition to resolving the identified issue, this should be easier to interpret than a relative error (since no extra explanation is needed and it is clear 0 is the optimal plot value). You were correct that we blundered by inaccurately using relative error, and we are happy to fix this. Thanks for pointing that out.\\n\\n2) You are correct that the \\u201crelative error\\u201d that we obtained was large, particularly in Figure 3. This is simply because several thousand iterations are required to obtain convergence by the KM method and we limited the example layers to the accuracy of about one thousand iterations of the KM method.\\n\\n3) The reason for the different number of iterations shown in the plots for \\u201cseen\\u201d versus \\u201cunseen\\u201d distributions is that the emphasis of the plots is quite different. In the case of the seen distribution, the goal is to illustrate how the LSKM method compares to the reference KM method (point (i) at the top of page 7). However, in the case of the unseen distribution, the goal is to show the different behaviors of the safeguarded and unsafeguarded methods (point (ii) at the top of page 7), as they will clearly start to diverge in the very early iterations, To humor curious readers about overall convergence in the unseen case, we have updated the x-axis of plots to be log-scale and increased the number of iterations shown.\\n\\n4) Please note that it would not be comparing apples to apples if one were to fix the y-axis scale and compare between plots. The objective functions are defined in terms of the data d. Thus, even for the same experiment (e.g., ALISTA in Figure 1), because the underlying distributions in 1a and 1b are different, the resulting average objective function values will be different. We believe it is only meaningful to compare curves within the same plot. The second paragraph in Section 5 has been revised accordingly.\", \"methodology\": \"Our method does NOT \\u201crequire\\u201d learning per-iteration parameters. In our experiments we chose to use layer-dependent weights since we believe this yields superior performance for a fixed number of iterations in our experiments. To clarify this matter, we have revised the wording in Section 4 to include a set C specifying the network structure and added Remark 4.2 on page 6 where we note one could use layer-independent weights. Such a situation would still be covered by Theorem 3.1 since the theorem\\u2019s claim is dependent only on T and \\\\mu_k, not T_{L2O}.\", \"related_work\": \"Thank you for pointing out these related works. We initially focused our references on those who learn solve convex optimization problems. We agree that we should have included a more comprehensive discussion of the general learn-to-optimize literature, and appreciate the list of papers that you kindly point out. 
Following your suggestion, we have added further discussion in the \\u201cRelated Works\\u201d section at the beginning of the paper. We plan to add more detailed discussions of those related work after we revise the paper further to save more space.\", \"clarifying_questions\": \"We mean well-defined in the usual sense where a function f(x) is well-defined if to each input x there is a unique identifiable output f(x). So, taking f(x) = T_{L2O}(x, \\\\zeta), we assume f(x) is well-defined. Also, each averaged operator is nonexpansive, and so this was indirectly addressed in the same sentence of our submission as where your question is drawn from. However, your comment reveals our lack of clarity, which we have now resolved by replacing \\u201cnonexpansive\\u201d with \\u201caveraged\\u201d on line 5 of page 4. We have also removed each instance of \\u201cfirmly nonexpansive\\u201d from the paper and used \\u201caveraged\\u201d where appropriate to further make things clear.\", \"minor\": [\"Fixed.\", \"Noted. This requested change has been made.\"], \"additional_revisions\": [\"We updated Section 5.3 since there were errors in (24) and the following paragraph of our initial submission.\", \"Table 1 was revised to add 3 methods.\", \"The first paragraph in Section 6 has a few minor revisions to reduce length (so that our paper complies with the space limit).\", \"Figure 3 has been moved to the Appendix to comply with the space limit.\", \"The word \\u201cBecause\\u201d was removed from the first paragraph in the proof of Theorem 3.1.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper is trying to provide a general learning-to-optimize(L2O) convergence theory. It proposes a general framework, the Learned Safeguarded KM(LSKM) method, and proves the convergence of the algorithms generated by this method under certain conditions. Both the theoretical results and the experimental findings have been presented.\\n\\nThis paper should be rejected because it does not properly answer the problem it is trying to address. (1) The LSKM method with any \\\\mu_k is the universal method and it encompasses all L2O algorithms when the safeguarding condition \\\\|S(y^k)\\\\|<= (1-\\\\delta) \\\\mu_k always holds. However Assumption~3 cannot cover the cases that the safeguarding condition always holds. Thus Theorem 3.1 gives the convergence of some algorithms generated by the LSKM method rather than the convergence of L2O schemes. (2) Theorem~3.1 is only related to the safeguarding procedure and the convergence of T. If we replace T_{L2O} by other operators, Theorem~3.1 still holds. In my view, this work provides a practical technique to guarantee the convergence of L2O algorithms rather than a general L2O convergence theory.\", \"also_i_have_some_comments_as_follow\": \"1. Section~2 provides an overview of the fixed point method. However only a few definitions and notations in this section is helpful to understand the proposed method. Please shorten this part.\\n2. Dose the safeguarding procedure guarantee the convergence of the LSKM method and decrease the convergence rate comparing to the corresponding L2O algorithm? Please explain more about the role of the safeguarding procedure.\\n3. It would be better to have a real data example in Section~5.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a unified framework for parametrizing provably convergent algorithms and learning the parameters for a training dataset of problem instances of interest. The learned algorithm can then be used on unseen problems. One key idea to this algorithm is that it is safeguarded, meaning it will perform some standard, non-learned iterations, if the predicted iterate is not good enough under some condition.\", \"there_are_three_main_features_of_the_proposed_approach\": \"1- It unifies various previous approaches such as LISTA, ADMM, non-negative least squares, etc. By defining some operators and safeguarding rules, the same learning approach can be leveraged for these different optimization problems.\\n2- It is shown that the learned algorithms are provably convergent under some mild assumptions.\\n3- Empirically, it is shown that the learned algorithms converge faster than the non-learned counterpart on sparse coding, ADMM and non-negative least squares; they use safeguarding sparingly, particularly when used to solve test instances from the same distribution as the training instances.\\n\\nAdditionally, the paper is very well-written. I did not verify the proofs in detail but they seemed OK at a high-level; however, I am not an expert in convex optimization so I hope other reviewers will be able to comment on this aspect.\\n\\nI do have some deep concerns about the evaluation metrics used to report the results that I will discuss next; these are the main reason for my current score, but I am willing to adjust it if the authors address them convincingly. I also have some comments about related work.\", \"experimental_evaluation\": [\"The error metric (15) is not suitable for evaluating the performance of an optimization algorithm. You should compute the expectation of the relative error, i.e. E_{d~D} [(f_d(x)-f_d^*) / f_d^*]. This is similar to the average approximation ratio used in the learning to optimize papers for discrete problems (see refs. below). (15) is just the ratio of the expected absolute error to the expected optimal value; I don't think that is equivalent to what I suggested.\", \"The relative error values are massive in some cases, e.g. Fig. 3. What's going on there? Are all methods performing that horribly? Am I misinterpreting the metric?\", \"Why do the plots for the seen distribution extend over thousands of iterations but only for tens of iterations for the unseen distribution?\", \"Please use the same scale for the y-axes in Figs. 1-3.\"], \"methodology\": [\"Your method requires learning per-iteration parameters. The other L2O methods for gradient descent (see refs. below) use shared parameters instead. This allows them to run for many iterations, possibly beyond what they were trained for. Your method does not allow for that. On the other hand, such models are recurrent and thus possibly more difficult to train than your unrolled feedforward model. Is the fixed number of iterations a limitation of your method? Please discuss this.\"], \"related_work\": \"- Learning for gradient descent: I am surprised these papers are not mentioned although they are quite relevant. 
They are rather recurrent networks with shared parameters across iterations, but you should also compare against them both conceptually and experimentally:\\n\\n\\\"Learning to optimize.\\\" arXiv preprint arXiv:1606.01885 (2016).\\n\\\"Learning to learn by gradient descent by gradient descent.\\\" Advances in neural information processing systems. 2016.\\n\\n- Learning to optimize in the discrete setting: there is lots of recent work on this that you should at least point to in passing, e.g.:\\n\\n\\\"Learning combinatorial optimization algorithms over graphs.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\\"Combinatorial optimization with graph convolutional networks and guided tree search.\\\" Advances in Neural Information Processing Systems. 2018.\\n(Survey) \\\"Machine Learning for Combinatorial Optimization: a Methodological Tour d'Horizon.\\\" arXiv preprint arXiv:1811.06128 (2018).\\n\\n- Theory for learning to optimize: Since you have a theoretical basis for your framework, you should discuss connections to other recent frameworks such as the one below by Balcan et al. It is geared towards the discrete setting and sample complexity rather than convergence, but you should nevertheless discuss it.\\nBalcan, Maria-Florina, et al. \\\"How much data is sufficient to learn high-performing algorithms?.\\\" arXiv preprint arXiv:1908.02894 (2019).\", \"clarification_questions\": [\"\\\"The choice of parameter \\u03b6 k in Line 3 may be any value that results in a well-de\\ufb01ned operator T L2O\\\": what is \\\"well-defined\\\" here? that T_{L20} is averaged?\"], \"minor\": [\"Page 3: \\\"A classic theorem states sequences\\\" -> \\\"A classic theorem states that sequences\\\"\", \"Appendix proofs: please organize into sections and restate the statements before the proofs.\"]}",
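A toy numerical check of the metric point raised in this review, with made-up values: the mean relative error and the ratio of means (the reviewer's reading of Eq. (15)) can differ substantially on the same data.

```python
import numpy as np

f_val = np.array([2.0, 110.0])   # achieved objectives f_d(x), made-up values
f_opt = np.array([1.0, 100.0])   # optimal values f_d*

# E_{d~D}[(f_d(x) - f_d*) / f_d*]: average of per-instance relative errors.
mean_relative_error = np.mean((f_val - f_opt) / f_opt)    # (1.0 + 0.1)/2 = 0.55

# Ratio of expected absolute error to expected optimal value, as in (15).
ratio_of_means = np.mean(f_val - f_opt) / np.mean(f_opt)  # 5.5 / 50.5 ~ 0.109

print(mean_relative_error, ratio_of_means)  # 0.55 vs ~0.109 -- not equivalent
```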
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a framework to unfold the safeguarded Krasnosel\\u2019ski\\u02d8\\u0131-Mann (SKM) method for the learn to optimization (L2O) schemes. First, SKM is proposed in Algorithm 1 with convergence guarantee established in Theorem 3.1 and Corollary 3.1. Then, SKM is unfolded and executed with a neural network summarized in Algorithm 2. Experiments on the Lasso and nonnegative least squares show the efficiency of the proposed method as well as the effectiveness of safeguarding compared to traditional L2O methods.\", \"advantages\": \"1. A general framework that encompasses all L2O algorithms for use by practitioners on any convex optimization problem.\\n2. It seems that the convergence analysis of Krasnosel\\u2019ski\\u02d8\\u0131-Mann equipped with safegarding is established for the first time.\", \"weakness\": \"The idea of reimplementing an iterative algorithm in a deep architecture is not new, and the combination of safegarding with KM has already been analyzed [1,2]. Moreover, the experiments are not convincing. \\n1. Safegarding is the key point of this paper, but the authors did not review related works on safegarding. Please show the relationships of SKM with prior works and comment on the novelty of the analysis in this paper. \\n[1] Themelis, Andreas, and Panagiotis Patrinos. \\\"SuperMann: a superlinearly convergent algorithm for finding fixed points of nonexpansive operators.\\\" IEEE Transactions on Automatic Control (2019).\\n[2] Sopasakis, Pantelis, et al. \\\"A primal-dual line search method and applications in image processing.\\\" 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017.\\n\\n2. All the 3 experiments are conducted on synthetic datasets which is not convincing enough to show the efficiency and effectiveness of LSKM. It is suggested to carry out experiments on real-world datasets like [3,4] with state-of-the-art methods. \\n[3] Sun, Jian, Huibin Li, and Zongben Xu. \\\"Deep ADMM-Net for compressive sensing MRI.\\\" Advances in neural information processing systems. 2016.\\n[4] Metzler, Chris, Ali Mousavi, and Richard Baraniuk. \\\"Learned D-AMP: Principled neural network based compressive image recovery.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n3. The are too many errors in references, for examples:\\n(3.1) What is \\\"In S. Bengio, H.Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31\\\"? This error appears multiple times. \\n(3.2) Show complete information of reference \\\"Liu et al. (2019a)\\\".\"}"
]
} |
Bye-sxHFwB | A Gradient-Based Approach to Neural Networks Structure Learning | [
"Amir Ali Moinfar",
"Amirkeivan Mohtashami",
"Mahdieh Soleymani",
"Ali Sharifi-Zarchi"
] | Designing the architecture of deep neural networks (DNNs) requires human expertise and is a cumbersome task. One approach to automatize this task has been considering DNN architecture parameters such as the number of layers, the number of neurons per layer, or the activation function of each layer as hyper-parameters, and using an external method for optimizing it. Here we propose a novel neural network model, called Farfalle Neural Network, in which important architecture features such as the number of neurons in each layer and the wiring among the neurons are automatically learned during the training process. We show that the proposed model can replace a stack of dense layers, which is used as a part of many DNN architectures. It can achieve higher accuracy using significantly fewer parameters. | [
"number",
"neurons",
"layer",
"neural networks structure",
"architecture",
"deep neural networks",
"dnns",
"human expertise",
"cumbersome task",
"task"
] | Reject | https://openreview.net/pdf?id=Bye-sxHFwB | https://openreview.net/forum?id=Bye-sxHFwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"j4U28uFNXr",
"rJx_NPdojB",
"BJlJTjp9sH",
"SJev98McsB",
"rylhQa4CKB",
"HklEgucTtH",
"rJgEXQv6tS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750477,
1573779248357,
1573735350924,
1573688974927,
1571863844205,
1571821548479,
1571808028370
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2498/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2498/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2498/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2498/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a neural network architecture that represents each neuron with input and output embeddings. Experiments on CIFAR show that the proposed method outperforms baseline models with a fully connected layer.\\n\\nI like the main idea of the paper. However, I agree with R1 and R2 that experiments presented in the paper are not enough to convince readers of the benefit of the proposed method. In particular, I would like to see a more comprehensive set of results across a suite of datasets. It would be even better, although not necessary, if the authors apply this method on top of different base architectures in multiple domains. At the very least, the authors should run an experiment to compare the proposed approach with a feed forward network on a simple/toy classification dataset. I understand that these experiments require a lot of computational resources. The authors do not need to reach SotA, but they do need to provide more empirical evidence that the method is useful in practice.\\n\\nI also would like to see more discussions with regards to the computational cost of the proposed method. How much slower/faster is training/inference compared to a fully connected network?\\n\\nThe writing of the paper can also be improved. There are many a few typos throughout the paper, even in the abstract. \\n\\nI recommend rejecting this paper for ICLR, but would encourage the authors to polish it and run a few more suggested experiments to strengthen the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you very much for reviewing our paper and proposing valuable comments.\\n\\nWe had called our approach structure learning to emphasize the fact that our model does not need a hand-crafted structure. Still, to avoid any misunderstanding we have updated the title in the new revision of our paper. \\n\\nWe have addressed your comments below.\\n\\n1. Your point about the representational power of constructed FNN in Theorem 1 is correct (except for a minor issue that the number of parameters in the multi-layer version is O(Nd) not O(Nd/l)).\\n\\t(I) However, note that this is merely an upper-bound. The main purpose of this theorem is to prove the ability of FNNs to replace the multi-layer version.\\n\\t(II) Additionally, we show that it is possible to use FNNs in practice with a reasonable embedding space dimension.\\n\\t(III) It is worth mentioning that the benefit of using the recurrent version is not a reduction in the number of parameters. Instead, by using the recurrent version, the dimension of (discrete) search space for the number of nodes in different layers reduces from l to 1. This reduction facilitates self-configuration since searching in the continuous space of parameters is much easier than search in the discrete space of such hyper-parameters.\\n\\t(IV) Regarding your proposal, we would like to first thank you for sharing this idea. Unfortunately, we are not aware of any task where a shallow-and-wide network would perform better than a deep network. Hence we could not find two datasets that satisfy similar conditions as you described. We would appreciate any suggestions you might have in this regard. Furthermore, we would like to clarify that we do not argue that our model can learn a specific structure, such as a deep network, but that it is able to find an internal structure for each task. \\n\\n2. \\n\\t(I) In the new revision of our paper, we replaced our previous baseline with the top-performing model found when searching amongst fully connected networks with up to 4 layers using HyperOpt. We also presented results for a smaller baseline model with approximately the same number of parameters as our FNN, as you suggested. \\n\\t(II) Note that testing several embedding sizes and tuning the number of iterations does not come computationally even close to a thorough search of possible fully connected architectures (that includes not only optimizing the number of layers but also the number of hidden neurons in each layer). It is worth mentioning that we did not perform task-specific tuning of FNNs to obtain the reported results. So to obtain the latest results, we have spent more computational resources to tune the baseline than we did for our model.\\n\\t(III) To choose the embedding size, we tested a limited set of embedding sizes (128, 256, 512) on a validation subset sampled from training data of MNIST. The best embedding size (256) was chosen for all the tasks without separate tuning for each task. We reported this value in the revised version of our paper accordingly. \\nThe number of iterations in the first task (comparing with FC networks) was the same as the number of hidden layers in the fully connected network being compared with our model. When integrating FNNs with VGG we also tested an FNN with 3 iterations and chose it over the FNN with 1 iteration. No additional tuning, such as testing other values for number of iterations, was done.\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you very much for reviewing our paper and proposing valuable comments.\\n\\nWe have responded to your comments below.\\n\\n1. \\n\\t(I) To alleviate your concern regarding the baseline, we used HyperOpt to search fully connected architectures with up to 4 hidden layers. In the new revision of our paper, we replaced our baseline with the top-performing model found in the search. We also provided results for a smaller baseline with the same number of parameters as in our models.\\n\\t(II) The rationale behind matching networks based on the number of neurons was converting a fully connected network to an FNN in two steps. Initially, the fully connected networks are converted to a multi-layer floating network. The reduction in the number of parameters occurs in this step. The multi-layer floating network is then replaced with an FNN to allow self-configuration. However, we understand why the initial results would not be convincing. In the new revision, we added the performance results of a multi-layer floating network with the same structure as the baseline. Additionally, we reduced our claim of parameter reduction according to the new results since we now understand that our previous comparison might have been unfair.\\n\\n2. As you suggested, we have added an ablation study in Appendix A, showing how each parameter affects the model's performance. Specifically, the results show that all of the three aspects mentioned are important to obtain peak performance.\\n\\n3. The constructed FNN in Theorem 1 is provided merely to establish an upper-bound, and its only purpose is to prove the ability of FNNs to replace the multi-layer version. This ability is why we referred to them as more general. The FNNs are superior because they do not require an assignment of neurons to layers. As you mentioned, the superiority stems from the additional parameter sharing. Also, even though we do not theoretically prove that it is possible to do this efficiently, we show that it is possible to use FNNs in practice with a reasonable embedding space dimension.\\n\\n4. We apologize for the mentioned misprints. We have corrected such mistakes in the new revision.\\n\\n5. You are correct that part of our reason for choosing VGG is because it uses a larger FC layer at the end. We have added the results for using FNN instead of the FC layer at the tail of ResNet18 on CIFAR100 in Appendix B. However, in ResNet, most of the work is being done with CNNs, and the FC layer is minimal. As a result, the improvement when using our model is limited. We agree that the improvement might be increased when running on datasets with an even larger number of classes, such as ImageNet. However, training a model on ImageNet in such limited time requires a lot of computational resources, which, unfortunately, we do not have. \\nWe would like to emphasize that our goal in this paper is to propose a self-configuring structure of neurons which can replace FC networks. It is not our intention to improve existing models in a specific field, such as computer vision. We used CIFAR and MNIST merely because they are well-known. Furthermore, we experiment with CNNs only to show the integrability of our model with existing layers.\"}",
"{\"title\": \"Response to review\", \"comment\": \"Thank you very much for reviewing our paper and proposing valuable comments.\\n\\nWe are delighted that you are interested in our main idea and see the possibility of follow-up works. \\n\\nHowever, we think that one of the main contributions of our work is missing from your summary and review. As you pointed out correctly, in Section 3.2, we proposed \\u201cfloating neural networks\\u201d, which assigns embedding vectors to neurons and employs the attention mechanism to replace FC layers. However, please note that in Section 3.3, we proposed \\u201cFarfalle neural networks\\u201d, which has a recurrent structure. In the recurrent version, data passes through all hidden floating neurons several times, facilitating self-configuration. Furthermore, in Theorem 1, we stated that this recurrent structure can model any FC layer without the need for a search in the vast space of FC architectures.\\n\\nWe have responded to your comments below.\\n\\n1. We think there is a misunderstanding regarding the FNN proposed in section 3.3. Section 3.3 is dedicated to describing a recurrent version in contrast with the flat version described in Section 3.2. In the recurrent version, data passes through all neurons several times, whereas, in the flat version, data is passed through each neuron just once. That is why \\u201citeration\\u201d is used rather than \\u201clayer\\u201d. It is worth mentioning that the flat version of our proposed method (introduced in Section 3.2) needs manual assignment of neurons to layers, while the proposed model in Section 3.3 allows self-configuration. \\n\\n2. Regarding datasets and empirical performance.\\n(I) The goal of this paper is to propose a self-configuring structure of neurons which can replace FC networks. It is not our intention to improve existing models in a specific field, such as computer vision. We used CIFAR and MNIST merely because they are well-known. Furthermore, we experiment with CNNs only to show the integrability of our model with existing layers.\\n(II) We appreciate your suggestion for experiments with Transformers. Unfortunately, training on a large scale task such as WMT translation in a limited time requires a lot of computational resources that we do not have. We would like to note that FFNs in Transformers are small and simple nets with no major impact on the main idea of these networks.\\n\\nQuestion 1. We apologize for not reporting the embedding size, d, in the initial version. We have fixed this issue in the revised version.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new architecture based on attention model to replace the fully-connected layers. In this architecture, each neuron is associated with an embedding vector, based on which the attention scores (between two consecutive layers) are calculated, and the computational flow through the layers are derived based on these attention scores. The experiments on MNIST and CIFAR demonstrate some degree of superiority over plan FC layers.\", \"pros\": \"1. The idea is indeed interesting and AFAIK, there is no prior works trying to derive embedding for each neuron. The embedding based connection might encourage other follow-up works.\", \"cons\": \"1. The writing sometimes seems unnecessarily complicated. For example, the \\u201citeration\\u201d in section 3.3 is actually \\u201clayer\\u201d, right? I furthermore see no motivation of listing the four items in this section, even the whole section 3.3: they are just re-stating the feedforward process of FNN. \\n2. I donot believe FC is essential in modern computer vision (CV) tasks, so the better performance over a plain FFN on CV tasks are not that convincing (especially the two datasets are typically regarded as debugging dataset nowadays). I suggest the authors conduct more experiments on Transformer based tasks (e.g., machine translation), since in Transformer, the FFN is quite important. If the replace of FFN using the proposed FNN is successful for Transformer on some large scale task (e.g., WMT14 En-De Translation), this work will be much stronger in terms of empirical performance.\", \"question\": \"1. What is the embedding size d in the experiments? If d is large, the complexity comparison in the last paragraph of section 3.2 will not make too much sense.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Overall the paper is easy to read and I welcome that.\\n\\nI like the idea of using node-level embedding instead of pairwise weights to learn a low-rank weight representation. However, I am more skeptical about using this in a recurrent architecture and claiming that this is structure learning. The empirical results do not provide sufficient evidence that this performs structure learning.\\n\\n1. Theorem 1 seems rather straightforward because the FNN has much more representational power in the sense that its number of parameters is O(Nld) whereas the multi-layer version has O(Nd/l) parameters (in the uniform-width case). \\n\\n -- A more interesting question when it comes to structure learning is this: Suppose the best architecture for task A is shallow-and-wide while for task B is deep-and-narrow, each requiring roughly the same number of parameters. Can I use the proposed FNN with a similar number of parameters to learn the corresponding architecture for A and B respectively, without the need to figure out which is which? There is no evidence, analytical nor empirical, in this work, that suggests that this is the case.\\n\\n2. Section 4. It would be interesting to try baselines that have roughly the same number of parameters as the proposed FNN. Also, the choice of d (embedding size) and the number of iterations can be viewed as making architectural decisions. How were they chosen? Assuming that the same amount of computational resource is spent on searching through baseline architectures as well, could the results have been different from those in Table 1?\\n\\nThere are interesting ideas in this work but in its present form I cannot yet recommend acceptance.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduces a new neural network architecture, in which all neurons (called \\\"floating neurons\\\") are essentially endowed with \\\"input\\\" and \\\"output\\\" embedding vectors, the product of which defines the weight of the connection between any two neurons. The authors discuss two network architectures employing floating neurons: (a) multi-layer floating neural networks and (b) farfalle neural network (FNN), in which there is one hidden layer, but additional recurrent connections are introduced between the hidden neurons.\\n\\nAs mentioned by the authors, the proposed architecture is similar to architectures employing low-rank weight matrix factorization. In my opinion, the main novelty lies in: (a) \\\"floating neuron\\\" interpretation, (b) additional weight matrix normalization, and (c) FNN architecture similar to that of a \\\"floating neuron\\\" RNN network with additional restrictions.\\n\\nI find the proposed idea to be promising and quite intriguing, but I think that the paper has some room for improvement and provided empirical evidence might be insufficient (including for understanding the importance of individual model components), which in turn makes the claims of potential practical attractiveness less justified. I will be happy to update the final score provided with more compelling arguments or empirical analysis of the proposed architecture.\", \"addressing_the_following_issues_might_greatly_improve_the_quality_of_the_paper\": \"1. In Section 4.1, the authors compare FNN and DNN on MNIST and CIFAR10 datasets. My concern is that the authors pick a seemingly arbitrary DNN architecture (just a single one) and restrict comparison to it. One issue is that ~50% accuracy on CIFAR10 can be easily demonstrated by a variety of 5-layer DNN architectures including those much smaller, with just ~600k parameters (!) and possibly even lower. This makes the 90% parameter reduction claim not particularly meaningful. And why were models matched based on the total number of neurons, but not, say, the total number of parameters, or other measures? I believe that these questions require additional discussion and empirical evidence. Just as an example, if it was possible to sample (potentially randomly) different DNN architectures (with a reasonable parameter prior) and compare them with FNNs on a 2D accuracy-parameters plot (or using other important metrics), it would provide much more information to the reader.\\n\\n2. Another important point that I would like to make is that there is much more that can be done to explore the hyper-parameter space of FNN to isolate which particular factors play a decisive role in its superior accuracy. The authors present us with a specific choice of the normalization function, and values of k and d, but it would be very informative to study how results change when different choices are considered. FNNs differ from DNNs in at least three aspects: usage of low-rank factorization, weight normalization and recurrent structure. How important are these individual aspects? Are some of them redundant, or almost redundant, or do FNNs require all of these components to achieve their peak performance? 
In other words, I believe that a careful ablation study would greatly improve this publication.\\n\\n3. As a minor note, I think that the statement that FNNs \\\"are more general\\\" than floating neural networks is only partially correct. If I am not mistaken, FNN can also be \\\"unrolled\\\" and represented as a multi-layer floating neural network with additional parameter sharing. Also, the computational complexity of the constructed FNN (in Theorem 1) appears to be significantly higher than that of the floating neural network (especially for high l). This would imply that FNNs do not necessarily supersede multi-layer floating neural networks, at least when the computational complexity is of importance.\\n\\n4. There are a few minor misprints throughout the text. For example, in \\\"0<j<=j\\\" in the proof of Theorem 1, or in \\\"R output floating neurons for the final deduction from hidden neuron\\\" (output should be S). Also, I could not find information about the value of d used in the described experiments (which I estimated to be 256; is this correct?).\\n\\n5. In Section 4.3, the authors propose to use FNNs for the final layers of conventional CNN architectures. The issue is that the VGG16 network chosen for experiments was probably picked because it uses several large fully-connected (FC) layers in its tail whereas all more recent and efficient CNN architectures actually gravitate towards a smaller single FC layer. It is possible that FNNs could still be used in FC layers of these modern networks as well (especially with a large number of classes). But additional empirical results for these architectures would, in my opinion, be much more convincing.\", \"updated\": \"The authors updated the text and addressed many of my questions. In my opinion, this improved the paper and made some of its claims much better justified. I change the rating to \\\"Weak Accept\\\".\"}"
]
} |
ByeWogStDS | Sub-policy Adaptation for Hierarchical Reinforcement Learning | [
"Alexander Li",
"Carlos Florensa",
"Ignasi Clavera",
"Pieter Abbeel"
] | Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and videos are available at sites.google.com/view/hippo-rl. | [
"Hierarchical Reinforcement Learning",
"Transfer",
"Skill Discovery"
] | Accept (Poster) | https://openreview.net/pdf?id=ByeWogStDS | https://openreview.net/forum?id=ByeWogStDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"tdYDQN9mJ",
"HyeJo7g3oH",
"ryggqqioir",
"Ske2AqmssH",
"SkejVVXjor",
"BylQQdyTYr",
"SkllWnFOKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798750449,
1573811095109,
1573792392428,
1573759700062,
1573758003371,
1571776538703,
1571490807747
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2497/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2497/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2497/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2497/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers hierarchical reinforcement learning, and specifically the case where the learning and use of lower-level skills should not be decoupled. To this end the paper proposes Hierarchical Proximal Policy Optimization (HiPPO) to jointly learn the different layers of the hierarchy. This is compared against other hierarchical RL schemes on several Mujoco domains.\\n\\nThe reviewers raised three main issues with this paper. The first concerns an excluded baseline, which was included in the rebuttal. The other issues involve the motivation for the paper (in that there exist other methods that try and learn different levels of hierarchy together) and justification for some design choices. These were addressed to some extent in the rebuttal, but I believe this to still be an interesting contribution to the literature, and should be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Further clarification of time-commitment\", \"comment\": \"1. We are glad the reviewer agrees with our statement that many of the recent end-to-end hierarchical methods (FuN, HIRO, etc.) are limited to goal-reaching problems. We agree with the reviewer that Option-Critic does not fall into this category, and we hope it's clear in our updated paper.\\n\\n2. HiPPO actually does have mechanisms to prevent skill collapse to atomic actions. First, with the random time-commitment, HiPPO does NOT train a termination function, so each skill must be coherent and useful for about 15 steps, which makes learning long-horizon strategies easier. HiPPO also learns at two timescales: a fine-grained timescale for training the skills, and a coarser timescale for training the manager. In contrast, Option-Critic learns both the q-function and the termination function at the fine-grained timescale, which means that it might not be easier to learn high-level decision-making. Another novel factor that helps HiPPO optimize at two different timescales is our introduction of different baselines for the manager and the skills. Furthermore, HiPPO\\u2019s use of PPO likelihood clipping prevents drastic changes in policy behavior. \\n\\nWe also agree that in the worst-case scenario Option-Critic becomes simple Actor-Critic. However, Actor-Critic (and flat policies in general) lack the benefits that a temporal hierarchy can confer in terms of effective and temporally correlated exploration. Poor results for PPO and Option-Critic on our environments demonstrate this failure case. We agree that skill-collapse is not always fatal: we have observed Option-Critic learning quite well on simpler environments, such as Point-Mass Navigation, Cartpole Balance, and Cartpole Swingup.\\n\\n3. We thank the reviewer for the additional details about their concern, and yes, we fully agree that the temporal hierarchy that we study is a specific one, not the most general case. We have corrected the wording in the paper to make this explicit. \\nNevertheless, we argue that a fixed time-commitment is quite common in the literature (HIRO, FuN, SNN4HRL, DADS, etc.). The impact of this design decision depends on the environment we use it in. In tasks where rapid, short-timescale control is important, our fixed or random time-commitment might be noticeably suboptimal. However, for the long-horizon problems we tackle, our adaptation strategy confers several benefits, as detailed in the second topic above. \\n\\nEmpirically, we see that our time-commitment strategy does better than allowing arbitrary switching between skills. Results on all four environments show that HiPPO outperforms Option-Critic in both learning from scratch and fine-tuning skills. Again, hyperparameter sensitivity plots in the Appendix show that HiPPO achieves high performance for a wide range of time-commitments. We do have a relevant ablation in Fig. 3: HiPPO p=1, which means that the manager chooses an active skill at every timestep. On the Block environments, where high-level decision making is not as important, HiPPO p=1 does on par with our algorithm. In the Gather environment, where it\\u2019s difficult to choose the optimal route for collecting the most apples over a horizon of 5000 - 8000, HiPPO p=1 performs worse. Finally, as discussed in the paper, the fact that the time-commitment strategy is not learned simplifies the gradient (since there\\u2019s no termination function) and acts as a regularizer for the robustness of the final skills. 
\\n\\nWe believe our proofs and strong empirical findings are of high interest to the research community. We find the reviewer\\u2019s responses extremely helpful, and hope that we have addressed all of the remaining concerns. Please let us know otherwise.\"}",
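As a concrete illustration of the random time-commitment described in this response, here is a minimal rollout sketch. It is our simplification: `env`, `manager`, and `skills` are hypothetical interfaces, and `env.step` is assumed to return `(next_state, reward, done)`:

```python
import random

def hierarchical_rollout(env, manager, skills, horizon, p_min=5, p_max=15):
    """Manager picks a skill z, which runs for p ~ Uniform{p_min..p_max} steps;
    no termination function is learned, matching the response above."""
    traj, s, t = [], env.reset(), 0
    while t < horizon:
        z = manager.sample(s)                  # coarse-timescale decision
        p = random.randint(p_min, p_max)       # randomized time-commitment
        for _ in range(min(p, horizon - t)):
            a = skills[z].sample(s)            # fine-timescale action
            s_next, r, done = env.step(a)
            traj.append((s, z, a, r))          # z is logged so latent-dependent
            s, t = s_next, t + 1               # baselines can be fit per level
            if done:
                return traj
    return traj
```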
"{\"title\": \"Response to the author's clarification\", \"comment\": \"1. The authors wrote in the response that \\\"These approaches (option-critic, FuN, HIRO) might hinder the learning performance in tasks that cannot be described as goal-reaching.\\\" I guess the \\\"goal-reaching\\\" here means reaching a particular state or group of states. I agree with the authors that there are problems that are not goal-reaching. And I agree that FuN and HIRO are limited to goal-reaching problems (they are in fact, even more restricted as they assume the euclidian distance between states can be defined and would be good to use to define pseudo rewards). However, I think option-critic method and its related works are not limited to the goal-reaching case.\\n\\n2. In the response, the authors pointed out the option-critic suffers from collapsing skill problem and thus performs worse than the proposed HiPPO, as shown in figure 5, 6. However, I am doubtful about this point. 1) From both theoretical and empirical parts of the paper, I don't see any reason the proposed HiPPO is immune to the collapsing skill problem. In fact, I would guess it also suffers from the same problem as there is no mechanism in the algorithm preventing this from happening. 2) The collapsing skills problem doesn't necessarily lead to bad performance. With one skill, the option-critic degenerates to the actor-critic, still a decent RL algorithm. \\n\\n3. I notice that the authors made a change in section 4.1: \\\"In a temporal hierarchy, a hierarchical policy with a manager \\u03c0\\u03b8h (zt|st) selects every p time-steps one of n sub-policies to execute.\\\" I appreciate the authors to make this change, but unfortunately, this is not what I suggested. That \\\"a hierarchical policy with a manager \\u03c0\\u03b8h (zt|st) selects every p time-steps one of n sub-policies to execute\\\" is a specific temporal hierarchy. It is not the only possible temporal hierarchy even for \\\"long horizon locomotion+navigation tasks\\\" tackled in this paper. And this particular temporal hierarchy is less general than some other temporal hierarchies like the option framework. So why should we follow this particular way? For example, is that because that makes the algorithm design easier? What would be the consequence of this simplification? It would be great for me and other readers to understand if the authors could provide reasons for this key design choice, or refers to other works which make the same design choice and give reasonable justifications.\"}",
"{\"title\": \"Additional details and new benchmark (MLSH)\", \"comment\": [\"We thank the reviewer for their positive review, and for providing remarks and questions that have helped us further improve our work. Here is a summary of the modifications:\", \"We have updated the caption for Fig. 2. There are separate lidar signatures for bombs and apples, so agents can indeed distinguish between the two. This is now better explained in the text, and in Appendix B.\", \"We have updated Fig. 4 and 6 [previously Fig. 5]. The undesired behavior was due to uncomplete runs of that agent. Please let us know if any confusion remains.\", \"The website contains a link (\\u201cSee implementation here\\u201d) to an anonymized Github repo that contains code for our experiments.\", \"We have added comparisons to MLSH in Fig. 4 and Fig. 6. Our results show that the MLSH training scheme does not help it learn better from scratch or when fine-tuning pre-trained skills in these tasks.\", \"HiPPO with p=10 learns slightly better on Ant than HiPPO with random period. However, Table 1 shows that the percent change in performance is better for randomized period, leading HiPPO to outperform its fixed p=10 counterpart in 6/8 overall scenarios.\", \"We hope we have addressed all of your concerns, and if any remain please let us know.\"]}",
"{\"title\": \"Clarifying motivation, impact, and design choices.\", \"comment\": \"We thank the reviewer #2 for their detailed comments - they have helped improve our exposition of the motivations, design choices, and further impact of our work.\\n\\n*Response to concern 1*:\\nWe agree with Reviewer #2 that there exists some recent work in HRL that allows for jointly training both levels of the hierarchy. Nevertheless, this is still not the norm, and many recent HRL works have a separate procedure to train skills and don\\u2019t show any fine-tuning: DADS (Sharma et al. 2019), DIAYN (Eysenbach et al. 2018), SNN4HRL (Florensa et al. 2017), etc. Our method can enhance any such two-step method, as shown in our Fig. 6. [originally Fig.5], introducing a principled way to adapt given skills learned in the first step. Any work using these techniques will benefit from our study.\\n\\nFurthermore, the end-to-end HRL methods cited by the reviewer, and all prior work known to us, suffer from either collapsing skills (e.g. Option-Critic, Bacon et al. 2017, and other option-based approaches), or from setting different reward functions for the lower level and higher-level parts of the hierarchy (HIRO, Nachum et al. 2018; FuN, Vezhnevetz et al. 2017, etc.). These approaches might hinder the learning performance in tasks that cannot be described as goal-reaching. We can see the limitations on learning from scratch of other \\u201cend-to-end\\u201d methods in Fig. 5 [originally Fig. 6 in the Appendix], where we show that both Option-Critic and HIRO (which has been shown to outperform FuN) greatly struggle in the more challenging tasks where our HiPPO algorithm shines. We have moved the figure to the main text, and have updated it with an additional comparison with MLSH (Frans et al. 2018) as Reviewer #564 suggested. \\n\\nIn Fig. 6, we compare the fine-tuning performance of HiPPO to prior hierarchical approaches, and we again find that HiPPO has better sample efficiency and asymptotic performance than Option-Critic, MLSH, and HIRO. We observe quick skill collapse for Option-Critic, and HIRO\\u2019s goal-reaching formulation fails to learn well on all four tasks. \\n\\n*Response to concern 2*:\\nWe agree with the reviewer that HRL encompasses a wide variety of approaches. We have clarified that our work focuses on improving the training of temporal hierarchies for problems that have been shown to benefit from it, like the long horizon locomotion+navigation tasks we tackle in our work. Let us know if we should make any more clarifications.\", \"in_terms_of_the_two_design_choices_made_in_our_paper\": [\"We propose the random length to avoid learning skills that overfit to a fixed time-switch, which makes the skill more brittle when deployed in an environment with different dynamics. The benefit of this design choice in zero-shot transfer is empirically demonstrated in our Table 1. We have also included hyperparameter sensitivity plots in the Appendix, which show that our method performs well for a large range of sensible time commitments - hence requiring little prior knowledge. Any temporal hierarchy can theoretically be suboptimal; however, our experiments show that despite the potential sub-optimality, HiPPO learns better policies than other methods without the random time-commitment.\", \"The assumption of our lemma is actually the following: conditioned on the state, the policy action is given high probability by only one skill. 
The skills don\\u2019t partition the action-space into constant sets for all states; rather, different skills fully utilize the same action space to compose sequences of actions into very different primitives. Our assumption is actually quite mild, as we show empirically in Table 2. Even HiPPO trained on random skills satisfies it!\", \"We hope we have addressed all of your concerns, and if any remain please let us know.\"]}",
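The single-dominant-skill assumption in the second bullet above can be stated compactly. The following is our restatement in standard latent-variable notation, reusing the manager notation \pi_{\theta_h}(z|s) quoted elsewhere in this thread; the low-level notation \pi_{\theta_l} is our shorthand:

```latex
% Marginal action likelihood of a latent temporal hierarchy:
\pi(a_t \mid s_t) = \sum_{z} \pi_{\theta_h}(z \mid s_t)\, \pi_{\theta_l}(a_t \mid s_t, z).
% The lemma's assumption: for each (s_t, a_t), one skill z^* dominates the sum, so
\pi(a_t \mid s_t) \approx \pi_{\theta_h}(z^* \mid s_t)\, \pi_{\theta_l}(a_t \mid s_t, z^*).
```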
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper is under the topic of hierarchical reinforcement learning. The motivation of this paper is \\\"most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task.\\\" The paper proposes a method to learn higher-level skill selection and lower-level skill improvement jointly.\", \"what_i_like_in_this_paper\": \"1. The paper, in general, is well-written so that I can understand it well.\\n 2. Experiments are question-driven and provide interesting results.\\n 3. Theories are closely related to the algorithm.\", \"key_reasons_for_my_rejection\": \"1. My biggest concern is the motivation of this paper. \\n The joint learning of higher-level policy and lower-level skill discovery is not rare in modern literature. Some works are even cited in this paper, for example, option-critic, feudal network, etc. These methods fix their skills in the new task, not because they are inherently not able to do so, but because they want to demonstrate that the learned skills can be reused in new tasks, even if there is no further adaptation. I agree with the author that the agent needs to adapt its skills when faced with new tasks. But I don't think most works are limited in this aspect, as claimed by the paper in the abstract.\\n\\n 2. I think the author didn't justify his key design choices well. \\n This paper is under the research area of \\\"hierarchical reinforcement learning.\\\" However, just like temporal abstraction, the HRL is a general idea instead of an existing problem formulation or a particular algorithm. It seems that the author is not aware of this point as the paper claims a particular way of achieving HRL is the HRL itself (in section 4.1 \\\"In the context of HRL, a hierarchical policy with a manager \\u03c0\\u03b8h(zt|st) selects every p time-steps one of n sub-policies to execute.\\\"). I would like to see the paper takes the responsibility to justify the reason it follows this particular way. There are two more key decisions the paper proposed but not fully justified and analyzed.\\n 1. Why is random length a valid choice? The paper doesn't tell readers the consequence of this design choice. For example, what about the optimality of the solution? Since bounding the random length needs prior knowledge, how difficult is it to come up with the prior knowledge. Is the algorithm sensitive to prior knowledge?\\n 2. Why is it fine to assume \\\"for each action, there is just one sub-policy that gives it high probability\\\"? What would be the consequence of this assumption? Well, the extreme case is each action is only being chosen by one sub-policy. Therefore, executing the sub-policy becomes executing a repeated sequence of the same action. Obviously, this is a kind of temporal abstraction but is a very limited one.\", \"other_small_issues\": \"\", \"section_2\": \"the horizon T in the definition in \\\\eta should be H.\", \"section_4\": \"the advantage function is not defined.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #564\", \"review\": \"The main problem they try to tackle is to train agents for unseen tasks and environmental changes. They show that their method has a better performance and is more robust against sensor errors and physical parameter alterations.\\n\\nThe authors clearly position their work in the HRL paradigm and explain current limitations/challenges within that field. Alike other HRL agents, their method has two types of policies (manager and subpolicies), but different from other works they do not keep parameters fixed in post training for new tasks. In addition to the parameters, they do not fix the time length. \\n\\nThe paper is very well written, clearly stated the contributions.\", \"remarks\": [\"Fig 2 has no caption. How are the colors of balls obtained, since they only explain how the sensors (lidar) measure distances to balls (bombs/apples).\", \"Fig 4/5a, some agents (blue) seems to have undesired behaviour (until half of the iterations). This behaviour is not described anywhere.\", \"The URL of the website with code and videos does not have any code.\"], \"questions_to_the_authors\": [\"Closest work is Frans et al. (2018). The experiments do not show Frans et al. as a benchmark method. Why?\", \"HiPPO shows to have a higher robustness. Why are the results of different methods (p=10/random) in Table 1 for different environments (Snake/Ant) different?\"]}"
]
} |
rkeeoeHYvr | AdvCodec: Towards A Unified Framework for Adversarial Text Generation | [
"Boxin Wang",
"Hengzhi Pei",
"Han Liu",
"Bo Li"
] | Machine learning (ML), especially deep neural networks (DNNs), has been widely applied to real-world applications. However, recent studies show that DNNs are vulnerable to carefully crafted \emph{adversarial examples} which only deviate from the original data by a small magnitude of perturbation.
While there has been great interest in generating imperceptible adversarial examples in continuous data domains (e.g. image and audio) to explore model vulnerabilities, generating \emph{adversarial text} in the discrete domain is still challenging.
The main contribution of this paper is to propose a general targeted attack framework \advcodec for adversarial text generation which addresses the challenge of the discrete input space and can be easily adapted to general natural language processing (NLP) tasks.
In particular, we propose a tree-based autoencoder to encode discrete text data into a continuous vector space, upon which we optimize the adversarial perturbation. With the tree-based decoder, it is possible to ensure the grammatical correctness of the generated text; and the tree-based encoder enables the flexibility of making manipulations on different levels of text, such as the sentence (\advcodecsent) and word (\advcodecword) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary \emph{targeted attacks}. To demonstrate the effectiveness of the proposed method, we consider two of the most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results show that \advcodec has successfully attacked both tasks. In particular, our attack causes the accuracy of a BERT-based sentiment classifier to drop from $0.703$ to $0.006$, and a BERT-based QA model's F1 score to drop from $88.62$ to $33.21$ (with the best targeted attack F1 score being $46.54$). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models. | [
"adversarial text generation",
"tree-autoencoder",
"human evaluation"
] | Reject | https://openreview.net/pdf?id=rkeeoeHYvr | https://openreview.net/forum?id=rkeeoeHYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"aO_H8El6XE",
"rygOacSKjH",
"HJeWfvBFjH",
"BJeL0LHtiB",
"Bkg4HIBtsH",
"HJglbUStoB",
"Skla6rHKsB",
"Byg5OXlgcr",
"HyeN5T9AtS",
"BJlwnxYpYr",
"S1lpcqm6YS",
"BJgacDp3FB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_review",
"comment"
],
"note_created": [
1576798750421,
1573636799917,
1573635849212,
1573635789665,
1573635643658,
1573635576244,
1573635525292,
1571976050417,
1571888524304,
1571815598576,
1571793557362,
1571768213426
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2496/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2496/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2496/AnonReviewer2"
],
[
"~Andrey_Zharkov1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for generating text examples that are adversarial against a known text model, based on modifying the internal representations of a tree-structured autoencoder.\\n\\nI side with the two more confident reviewers, and argue that this paper doesn't offer sufficient evidence that this method is useful in the proposed setting. I'm particularly swayed by R1, who raises some fairly basic concerns about the value of adversarial example work of this kind, where the generated examples look unnatural in most cases, and where label preservation is not guaranteed. I'm also concerned by the fact, which came up repeatedly in the reviews, that the authors claimed that using a tree-structured decoder encourages the model to generate grammatical sentences\\u2014I see no reason why this should be the case in the setting described here, and the paper doesn't seem to offer evidence to back this up.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Response\", \"comment\": \"General Responses\\nWe thank the reviewers for their valuable comments and suggestions. Based on the review comments, we have revised Section 3 and Section 4 to make the presentation clearer. We also added 3 sections in the appendix and conducted additional experiments following the reviews\\u2019 suggestions.\\n\\nSpecifically, we made the following revisions:\\n1. We updated Section 1 to clarify our technical innovation and contributions.\\n2. We added more explanation on AdvCodec(Word) in Section 3.\\n3. We moved the scatter attack results from the appendix to Section 4.\\n4. We added a section to discuss the AdvCodec training details in Appendix A, including how to select a good autoencoder, how to train our tree autoencoder, how the attack is performed. We also added additional experiments on how to select the good initial seed for QA, and showed the untargeted scatter attack results for QA.\\n5. We added a section in Appendix B to discuss how classification models and QA models are trained along with their hyperparameter settings for the baseline attack methods.\\n6. We added a section in Appendix C to show more adversarial examples generated by our AdvCodec framework.\\n7. We fixed the typos and minor errors pointed out by the reviewers.\\n\\nPlease don\\u2019t hesitate to let us know if you have any additional comments.\"}",
"{\"title\": \"Response to Reviewer #2 (Part 1)\", \"comment\": \"Q1: \\u201cThe paper achieves good success rate based on its experimental results but doesn't convince me that 2) (the generated texts are reasonable (e.g. syntactically correct) and are not contradictory to the original texts) is also guaranteed.\\u201d\", \"a1\": \"We totally agree with reviewer #2 on the challenges of generating good adversarial texts. So we evaluate our adversarial sentences based on two metrics: \\n1) the linguistic quality;\\n2) human accuracy comparison based on benign and adversarial texts, as illustrated in Section 5.\\nFor 1) we calculate the ratio of the generated adversarial texts that can be recognized as \\u201cnatural\\u201d by human to evaluate the linguistic quality.\\nFor 2) we record the accuracy of human performance on tasks (e.g. classification and QA) based on both benign and adversarial texts as shown in Table 10 and 11.\\nSo far the above metrics are what we can come up with and they are also standard to validate the adversarial examples for NLP domains, which have also been used in other state-of-the-art adversarial text generation work [2][3][4].\\n\\nEmpirically, we first confirm AdvCodec(Sent) under the tree constraints are syntactically correct, which is demonstrated by the human study in Section 5.1. Then we verify that \\u201cour adversarial sentences do not contradict the original texts\\u201d via human evaluation in Section 5.2 and show that our adversarial datasets do not significantly affect human judgment. We can also evaluate the generated adversarial text quality by looking at the samples in table 1 and the updated Appendix C. \\n\\u2014\", \"q2\": \"\\u201cThe paper mentioned that human can ignore irrelevant tokens added by the proposed scatter attack method but it is an extra assumption added to the grammatical correctness.\\u201d\", \"a2\": \"Thank you for pointing it out and we are sorry to make the confusion. This is actually our another interesting discovery based on human evaluation: we find that by adding scatter words the human performance on these generated texts will not be largely affected. We admit that the scatter attack cannot ensure grammatical correctness since it does not consider the global syntactic constraints and only manipulates on the word level, and it is just another discovery and we have made this clear in the revision. \\nTo ensure better grammatical correctness, we suggest using AdvCodec(Sent) whose language quality is confirmed by human readers.\\n\\u2014\", \"q3\": \"\\u201cIs scatter attack not effective to attack QA task?\\u201d\", \"a3\": \"Thank you for the interesting question. Based on the suggestion, we conducted additional experiments by performing the scatter attack on QA. Indeed, we find that the targeted attack success rate is not satisfactory. It turns out QA systems highly rely on the relationship between questions and contextual clues, which is hard to break when setting an arbitrary token to a target answer. This is also why we use some heuristics to creating a similar fake context when initializing QA appended sentence. We have made it clear in Section 4.2.\\n\\nWe also performed the untargeted scatter attack on QA. The results are shown in Table 13 in Appendix A.3. We insert 30 random tokens (but no more than 1/3 the total words of the paragraph) over the paragraph, and optimize the adversarial tokens to mislead the model. 
We observe that the untargeted scatter attack can achieve a higher untargeted attack success rate (adversarial F1 of 49.7) than Jia & Liang (adversarial F1 of 52.6) [4]\\n\\u2014\", \"q4\": \"\\u201cthe paper reports the human evaluation on adversarial texts which shows accuracy degradation and low votes. Ideally, the human accuracy on adversarial texts should also be compared to justify 2). More examples can be added to reduce \\\"noise\\\" mentioned in the paper.\\u201d\", \"a4\": \"Thanks for the suggestions. \\nWe compare the human accuracy on both benign and adversarial texts for both tasks (QA and classification) in the revision section 5.2 with more samples in Appendix C.\\nThe human performance drops a bit on adversarial texts.\\nIn particular, it drops around 10% for both QA and classification tasks based on AdvCodec as shown in Tables 10 and 11. We believe this performance drop is tolerable and the stoa generic based QA attack algorithm experienced around 14% performance drop for human performance [4].\\nIn addition, we also discuss other reasons for the human performance drop in the appendix B.4. Possible factors include the (majority vote) aggregation noise, length of paragraph and sampling randomness.\\n\\u2014\"}",
"{\"title\": \"Response to Reviewer #2 (Part 2)\", \"comment\": \"Q5: \\u201cin figure 1, will you encode the original text along with the appended sentence into one vector? then, how do you guarantee that the perturbation only applies to the appended sentence but not the original text for the ADVCodec(sent)? or the original text will be reproduced due to the autoencoder?\\u201d\", \"a5\": \"Thank you for the interesting question and sorry for the confusion. We have made it clear in the revision that we will not encode the original text. Original text will not be perturbed or modified under any circumstances: we only add perturbation to the appended sentence for the concat attack; and we only manipulate on the scattered words for scatter attack while keeping original tokens unperturbed by masking out the perturbation on them. We have added more details in Section 3.3.\\n\\u2014\", \"q6\": \"\\u201cit will be helpful to add more details on training and optimization. For example, is the autoencoder trained by the authors or is from the existing model? what does the confidence score in (5) means empirically and how to choose its value?\\u201d\", \"a6\": \"Thanks for the suggestions. We have added more details on training and optimization in Appendix A.1 and A.2. The tree autoencoder is trained by us because the tree autoencoder is based on the novel tree decoder proposed by us. The confidence score is chosen via binary search to search for the optimal tradeoff-constant between the target perturbation magnitude and the attack confidence, which follows the optimization-based attack [1].\\n\\n[1] Carlini, Nicholas and David A. Wagner. \\u201cTowards Evaluating the Robustness of Neural Networks.\\u201d 2017 IEEE Symposium on Security and Privacy (SP) (2016): 39-57.\\n[2] Iyyer, Mohit, John Wieting, Kevin Gimpel and Luke S. Zettlemoyer. \\u201cAdversarial Example Generation with Syntactically Controlled Paraphrase Networks.\\u201d NAACL-HLT (2018).\\n[3] Jin, Di, Zhijing Jin, Joey Tianyi Zhou and Peter Szolovits. \\u201cIs BERT Really Robust? Natural Language Attack on Text Classification and Entailment.\\u201d ArXiv abs/1907.11932 (2019): n. pag.\\n[4] Jia, Robin and Percy Liang. \\u201cAdversarial Examples for Evaluating Reading Comprehension Systems.\\u201d EMNLP (2017).\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for recognizing the novelty and contribution of our paper.\\nQ 1.1: \\u201cit is not clear to me why the proposed method would not change the ground truth answer for QA.\\u201d \\nA 1.1: \\n1) Thanks for the interesting question. In fact, we only append an adversarial sentence/ scattering adv tokens into the original text without editing any original words. When searching for the optimal adversarial sentence, we keep the optimization steps until the adversarial sentence and context sentence are disjoint. So ideally the adversarial dataset has the same answers with the original dataset. And our human evaluation in Section 5.2 also confirms that human readers can still find the correct answers (ground truth) even with adversarial sentences appended.\\n\\nQ 1.2: \\u201cthe authors claim to achieve this by carefully choosing the initial sentence as the initial point of optimization, which seems a bit heuristic.\\u201d\\nA 1.2:\\nWe conducted additional experiments by using different initial sentences based on the suggestion and added more discussion on how we select the initial seed to attack QA in Appendix A.4. The conclusion is we observe using different initialization sentences will greatly affect the attack success rates. Therefore, the initial sentence selection is indeed important to help reduce the number of optimization iterations and guarantee to converge to the optimal $z^*$ efficiently. \\n\\nWe also would like to emphasize this heuristic step is the very first step of our framework followed by a series of optimization steps to ensure the ground truth is not changed. In this paper, we ensure our appended adversarial sentences are not contradictory to the ground truth by a) choosing an initial sentence as the initial seed of optimization, b) adding perturbation to the sentence, c) searching for the optimal adversarial sentence, d) ensuring that the adversarial sentence and context sentence are disjoint, otherwise keep the iteration steps. If the maximum steps are reached, the optimization is regarded as a failure. \\n\\nQ 1.3: \\u201cmore experimental results to justify this claim.\\u201d\\nA 1.3: Thank you for the suggestion, and we have added more experiments in Appendix A.4 to discuss the initial seed selection. To support that our appended adversarial sentences/ scattered tokens are not contradictory to the ground truth, we conduct the human evaluation in Section 5.2, which verifies our adversarial dataset is compatible with the original answers and barely affects human judgments.\"}",
"{\"title\": \"Response to Reviewer #1 (Part 2)\", \"comment\": \"Q5: \\u201cit is unclear that the majority answers on the adversarial text will, respectively, match the majority answers on the original text\\u2026 whether the proposed framework can generate legitimate adversarial text to human readers or not.\\u201d\", \"a5\": \"Thanks for pointing this out and we have added the corresponding discussion in revision Section 5.2.\\nThe human performance drops around 10% for both QA and classification tasks in Table 10 and 11. We believe this performance drop is tolerable and the stoa generic based QA attack algorithm experienced around 14% performance drop for human performance [4].\\nIn addition, we also discuss other reasons for the human performance drop in the appendix B.4: Possible factors include the (majority vote) aggregation noise, length of paragraph and sampling randomness.\\n\\u2014\", \"q6\": \"\\u201cput examples in the appendix.\\u201d\", \"a6\": \"We have put more generated adversarial examples in Appendix C, and thank you for the helpful suggestions.\\n\\u2014\", \"q7\": \"\\u201cMissing training details: It is unclear how the model architectures are chosen, and learning rate, optimizer, training epochs etc. are also missing. All these training details should be included in the appendix.\\u201d\", \"a7\": \"We have added model settings and training details in Appendix B and thanks for the suggestion.\\n\\u2014\", \"q8\": \"\\u201cminor errors: \\\"Append an initial sentence...\\\", section 3: \\\"map discrete text into a high dimensional...\\\", section 3.2.2: \\\"Different from attacking sentiment analysis...\\\" ....\\u201d\", \"a8\": \"Thank you for pointing these out and we have fixed the typos in the revision.\\n\\n[1] Welleck, Sean, Kiant\\u00e9 Brantley, Hal Daum\\u00e9 and Kyunghyun Cho. \\u201cNon-Monotonic Sequential Text Generation.\\u201d ICML (2019).\\n[2] Iyyer, Mohit, John Wieting, Kevin Gimpel and Luke S. Zettlemoyer. \\u201cAdversarial Example Generation with Syntactically Controlled Paraphrase Networks.\\u201d NAACL-HLT (2018).\\n[3] Jin, Di, Zhijing Jin, Joey Tianyi Zhou and Peter Szolovits. \\u201cIs BERT Really Robust? Natural Language Attack on Text Classification and Entailment.\\u201d ArXiv abs/1907.11932 (2019): n. pag.\\n[4] Jia, Robin and Percy Liang. \\u201cAdversarial Examples for Evaluating Reading Comprehension Systems.\\u201d EMNLP (2017).\\n[5] Sutskever, Ilya, Oriol Vinyals and Quoc V. Le. \\u201cSequence to Sequence Learning with Neural Networks.\\u201d NIPS (2014).\"}",
"{\"title\": \"Response to Reviewer #1 (Part 1)\", \"comment\": \"Q1: \\u201cAlthough the studied problem in this paper is interesting, the technical innovation is very limited. All the techniques are standard or known. \\u201d\", \"a1\": \"Thank you for pointing this out, and we will make our contribution clear in the revision. We would like to emphasize our main technical innovations as below:\\n1) We design a novel tree **decoder** to decode latent vectors into natural languages which can not only guarantee the syntax correctness, but also achieves the property of non-monotonic order which is also discussed in [1].\\n2) We also design a novel framework to generate adversarial text on different levels (e.g. word and sentence) by combining a tree LSTM encoder with the proposed tree based decoder. In particular, we automatically leverage the tree autoencoder to map the discrete text into latent space, generate adversarial perturbation on selected instances, and decode it with our tree based decoder to ensure grammatical correctness. (This novelty is also mentioned by reviewer #2.)\\n3) We also propose and explore novel adversarial settings, including scatter attack for classification and targeted attack for QA, which provides diverse ways to evaluate the robustness of existing NLP models. We believe with our general framework which will be open-source soon, it will help the community to further understand the vulnerabilities of current NLP models.\\n4) In addition, we have conducted extensive experiments, including adversarial attacks on QA which has not been evaluated by efficient optimization algorithms, and novel BERT based classifier and QA models. Our novel observations such as BERT is less robust than BiDAF and self-attentive models can provide more insights towards evaluating the robustness of various models. \\n--\", \"q2\": \"\\u201clacking a rigorous metric of human unnoticeability\\u201d\", \"a2\": \"Thank you for the comment and we will describe our evaluation metrics clear in revision.\\nIn particular, we conduct two types of human evaluation to measure the human sensitivity to our adversarial examples in terms of 1) the linguistic quality and 2) human accuracy comparison based on benign and adversarial texts, as illustrated in Section 5.\\nFor 1) we calculate the ratio of the generated adversarial texts that can be recognized as \\u201cnatural\\u201d by human to evaluate the linguistic quality.\\nFor 2) we record the accuracy of human performance on tasks (e.g. classification and QA) based on both benign and adversarial texts as shown in Table 10 and 11.\\nSo far the above metrics are what we can come up with and they are also standard to validate the adversarial examples for NLP domains, which have also been used in other state-of-the-art adversarial text generation work [2][3][4]. \\n--\", \"q3\": \"\\u201clacking justification of the advantage of the tree-based autoencoder\\u2026 unclear why tree-structured LSTM instead of a standard LSTM/GRU should be chosen in this framework for adversarial text generation. If this architecture is preferred, sufficient ablation studies should be conducted.\\u201d\", \"a3\": \"Thank you for the helpful suggestion, we will clarify the advantages of the tree-LSTM first and we have also conducted the suggested ablation studies.\\n1) The advantages of the tree-based autoencoder are:\\na) grammar rules are integrated directly based on the tree structures, thus it can intrinsically guarantee the grammar correctness of generated texts. 
This is also confirmed by the human study in Section 5.1 that AdvCodec(Sent) generated adversarial text has higher language quality and ensures syntactically correctness;\\nb) The tree structure allows us to flexibly modify the node embedding at different node levels in order to generate controllable perturbation on words or sentences. \\n\\n2) In addition, we conducted the suggested ablation studies: we leverage the standard LSTM architecture [5] and generate adversarial perturbation. We add the ablation study results in appendix A in revision. The experimental results show that LSTM based autoencoder can neither achieve high attack efficiency (The adversarial F1 score is 57.5 with LSTM on BiDAF, compared with 17.6 by AdvCodec -- lower the better) nor guarantee the correct syntactic structures. \\n\\u2014\", \"q4\": \"\\u201cthe description about adversarial attacks at word level is unclear. More detailed loss function and algorithms along with equations should be provided.\\u201d\", \"a4\": \"Thanks for the suggestion, and we have added more details and corresponding notations/equations in the revision Section 3.3 along with a pseudo-code in Appendix A.2.\\nIn particular, the difference between word level and sentence level manipulation is the meaning of context vector z (in figure 1). For the word-level attack, the context vector $z$ are the concatenation of leaf node embedding (which corresponds to each word):\\n$z = [z_1, z_2, \\u2026, z_n]$\\nAdvCodec(Word) has the same optimization function against QA and classification tasks by manipulating the latent representation z. \\n\\u2014\"}",
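To spell out the word-level objective sketched in A4, the following is our notation for the masked variant of the C&W-style formulation these responses cite (reference [1] in Part 2); F denotes the victim model, Dec the tree decoder, and m_i a binary mask over scattered positions, all of which are our labels, not the paper's:

```latex
% Word-level masked perturbation of the leaf embeddings z = [z_1, ..., z_n]:
z'_i = z_i + m_i\,\delta_i, \qquad
\min_{\delta}\; \|\delta\|_2^2 + c \cdot \ell_{\mathrm{target}}\big(F(\mathrm{Dec}(z'))\big),
% with m_i = 0 on original tokens, so they are left unperturbed by construction.
```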
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Motivated by recent development of attack/defense methods addressing the vulnerability of deep CNN classifiers for images, this paper proposes an attack framework for adversarial text generation, in which an autoencoder is employed to map discrete text to a high-dimensional continuous latent space, standard iterative optimization based attack method is performed in the continuous latent space to generate adversarial latent embeddings, and a decoder generates adversarial text from the adversarial embeddings. Different generation strategies of perturbing latent embeddings at sentence level or masked word level are both explored. Adversarial text generation can take either a form of appending an adversarial sentence or a form of scattering adversarial words into different specified positions. Experiments on both sentiment classification and question answering show that the proposed attack framework outperforms some baselines. Human evaluations are also conducted.\", \"pros\": \"This paper is well-written overall. Extensive experiments are performed.\\n\\nMany human studies comparing different adversarial text generation strategies and evaluating adversarial text for sentiment classification/question answering are conducted.\", \"cons\": \"1) Although the studied problem in this paper is interesting, the technical innovation is very limited. All the techniques are standard or known. \\n\\n2) There are two major issues: lacking a rigorous metric of human unnoticeability and lacking justification of the advantage of the tree-based autoencoder. I think the first issue is a major problem that renders all the claims in this paper questionable. The metrics used to define adversarial images for deep CNN classifiers are indeed valid and produce unnoticeable images for human observers. But in this paper, the adversarial attack is performed in the latent embedding space, and there is no explicit constraint enforced on the output text. It\\u2019s unconvincing that this approach will generate adversarial text that seems negligible to humans. Therefore, the studied problem in this paper has a completely different nature from the one for CNN image classifiers and it is hard to convince readers that the proposed framework generates adversarial text legitimate to human readers. \\n\\n3) It is unclear why tree-structured LSTM instead of a standard LSTM/GRU should be chosen in this framework for adversarial text generation. If this architecture is preferred, sufficient ablation studies should be conducted.\\n\\n4) In section 3.3, the description about adversarial attacks at word level is unclear. More detailed loss function and algorithms along with equations should be provided.\\n\\n5) In section 5.2, it is unclear that the majority answers on the adversarial text will, respectively, match the majority answers on the original text. Moreover, it seems that there is a large performance drop from original text to adversarial text. 
Therefore, it is valid to argue that whether the proposed framework can generate legitimate adversarial text to human readers or not.\\n\\n6) It\\u2019s better to include many examples of generated adversarial text in the appendix.\\n\\n7) Missing training details: It is unclear how the model architectures are chosen, and learning rate, optimizer, training epochs etc. are also missing. All these training details should be included in the appendix.\\n\\n8) Minor: Figure 1: \\\"Append an initial sentence...\\\", section 3: \\\"map discrete text into a high dimensional...\\\", section 3.2.2: \\\"Different from attacking sentiment analysis...\\\" ....\\n\\nIn summary, the research direction of adversarial text generation studied in this paper is interesting and promising. However, some technical details are questionable, and the produced results without rigorous metrics seem to be unconvincing.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new attack framework AdvCodec for adversarial text generation. The main idea is to use a tree-based autoencoder to embed text data into the continuous vector space and then optimize to find the adversarial perturbation in the vector space. The authors consider two types of attacks: concat attack and scatter attack. Experimental results on sentiment analysis and question answering, together with human evaluation on the generated adversarial text, are provided.\\n\\nOverall, this paper has a nice idea: use tree autocoders to embed text into vector space and perform optimization in the vector space. On the other hand, it is not clear to me why the proposed method would not change the ground truth answer for QA. Currently the authors claim to achieve this by carefully choosing the initial sentence as the initial point of optimization, which seems a bit heuristic. The authors could add more discussion on this and more experimental results to justify this claim.\"}",
"{\"comment\": \"We thank the commenter for the interesting observations and comments.\\n---\", \"q1\": \"\\u201cin the provided several examples the grammar of the sentences is violated by scattered word.\\u201d\", \"a\": \"Thank you for the interesting observation.\\nWe expect that adding the strong sentiment related words as suggested (e.g. \\u201cnot\\u201d for negatives) would be able to attack the model. However, here we hope the manipulation would be subtle so that the adversarial sentence will not easily fool humans.\\nFor instance, in our setting we use \\u201cthe\\u201d as initial seeds to *randomly* scatter over the paragraph, so it would be quite rare to manipulate the token to be \\u201cnot\\u201d in the appropriate positions. \\n\\nIn addition, we just ran a small experiment to explore this case as suggested. Over 600 successful adversarial (under scatter attack) paragraphs, we find that there is *only one paragraph* where the human *made a mistake* which indeed a \\u201cnot\\u201d is appended to an Adverb.\", \"q2\": \"\\u201cAnd concatenated sentences are also imperfect (\\\"chickens and chickens\\\").\\u201d\", \"q3\": \"\\u201c I find it suspicious that these \\\"noisy\\\" samples forms about 10-15% \\u2026 the proposed adversarials do change the meaning in fact in more than 10% of all cases.\\u201d\", \"q4\": \"\\u201cOne option where adversarials will change the meaning I can think of is the insertion of \\\"not\\\" word in appropriate positions. Have you noticed such situations?\\u201d\", \"references\": \"[1] Jia, Robin and Percy Liang. \\u201cAdversarial Examples for Evaluating Reading Comprehension Systems.\\u201d EMNLP (2017).\", \"title\": \"Response to the grammar violations\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed a new adversarial text generation framework based on tree-structured LSTM. Compared with two existing methods, the proposed method gives better successfully attacking rates. The tree-structured LSTM model is an existing work but applying it to generate adversarial text is new.\\n\\nThe difficulty of generating good adversarial text lies 1) high success rate and 2) the generated texts are reasonable (e.g. syntactically correct) and are not contradictory to the original texts. The paper achieves good success rate based on its experimental results but doesn't convince me that 2) is also guaranteed. The paper mentioned that human can ignore irrelevant tokens added by the proposed scatter attack method but it is an extra assumption added to the grammatical correctness. The classification model was trained on texts without these randomly added tokens or typos. In the results, I saw the scatter attack was applied to sentiment analysis but not QA tasks. Is this method not effective to attack QA task?\\nAlso, the paper reports the human evaluation on adversarial texts which shows accuracy degradation and low votes. Ideally, the human accuracy on adversarial texts should also be compared to justify 2). More examples can be added to reduce \\\"noise\\\" mentioned in the paper. And, the paper can be improved by adding more details on training and optimization.\\n\\nSome extra questions and comments\\n1. in figure 1, will you encode the original text along with the appended sentence into one vector? then, how do you guarantee that the perturbation only applies to the appended sentence but not the original text for the ADVCodec(sent)? or the original text will be reproduced due to the autoencoder?\\n2. it will be helpful to add more details on training and optimization. For example, is the autoencoder trained by the authors or is from the existing model? what does the confidence score in (5) means empirically and how to choose its value?\"}",
"{\"comment\": \"It seems that even in the provided several examples the grammar of the sentences is violated by scattered word. And concatenated sentences are also imperfect (\\\"chickens and chickens\\\").\\n\\nFrom section 5.2 where you investigate human performance on adversarials \\\"While we can spot a drop from the benign to adversarial datasets, we conduct an error analysis in QA and find the error examples are noisy and not necessarily caused by our adversarial text.\\\" I find it suspicious that these \\\"noisy\\\" samples forms about 10-15% of entire dataset (according to tables 9 and 10). These results show that the proposed adversarials do change the meaning in fact in more than 10% of all cases.\\n\\nOne option where adversarials will change the meaning I can think of is the insertion of \\\"not\\\" word in appropriate positions. Have you noticed such situations?\", \"title\": \"Grammar and meaning violations in adversarial examples\"}"
]
} |
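The reviews and responses in the record above keep circling one technique: map discrete text to a continuous latent space with an autoencoder, run iterative gradient-based optimization on the latent code, and decode the perturbed code back into adversarial text. Below is a minimal sketch of that generic recipe only -- it is not the authors' AdvCodec (which uses a tree-structured autoencoder and task-specific objectives), and `victim_loss` is a hypothetical differentiable surrogate for the victim model's loss on the decoded text; in practice decoding is discrete, so a soft relaxation or a decode-and-query loop is needed.

```python
# Sketch of a generic latent-space text attack (illustrative only; all names
# are assumptions, not the paper's API).
import torch

def latent_space_attack(z0, victim_loss, steps=50, lr=0.1, eps=2.0):
    """Perturb a seed latent code z0 to increase the victim model's loss.

    z0          -- latent code of the seed sentence, shape (d,)
    victim_loss -- hypothetical differentiable surrogate: latent code -> scalar
                   loss of the victim model on the decoded text
    eps         -- L2 radius around z0, keeping the edit "subtle"
    """
    delta = torch.zeros_like(z0, requires_grad=True)
    for _ in range(steps):
        loss = victim_loss(z0 + delta)               # we want this to grow
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad                        # gradient *ascent*
            norm = delta.norm()
            if norm > eps:                            # project onto the L2 ball
                delta *= eps / norm
    return (z0 + delta).detach()                      # decode this code to text
```

The L2 ball around the seed code stands in for the pixel-norm constraints used against image classifiers; the absence of an explicit constraint on the *decoded text* is exactly the gap Review #1 above objects to.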
ByxloeHFPS | PROVABLY BENEFITS OF DEEP HIERARCHICAL RL | [
"Zeyu Jia",
"Simon S. Du",
"Ruosong Wang",
"Mengdi Wang",
"Lin F. Yang"
] | Modern complex sequential decision-making problems often require both a low-level policy and high-level planning. Deep hierarchical reinforcement learning (Deep HRL) admits multi-layer abstractions which naturally model the policy in a hierarchical manner, and it is believed that deep HRL can reduce the sample complexity compared to standard RL frameworks. We initiate the study of rigorously characterizing the complexity of Deep HRL. We present a model-based optimistic algorithm which demonstrates that the complexity of learning a near-optimal policy for deep HRL scales with the sum of the number of states at each abstraction layer, whereas standard RL scales with the product of the number of states at each abstraction layer. Our algorithm achieves this goal by using the fact that distinct high-level states have similar low-level structures, which allows efficient information exploitation so that experience from different high-level state-action pairs can be generalized to unseen state-actions. Overall, our result shows an exponential improvement using Deep HRL compared to the standard RL framework. | [
"hierarchical model",
"reinforcement learning",
"low regret",
"online learning",
"tabular reinforcement learning"
] | Reject | https://openreview.net/pdf?id=ByxloeHFPS | https://openreview.net/forum?id=ByxloeHFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Pckrzrm8G",
"Syl87c5noB",
"HkealYchiH",
"rklhDPqhiS",
"SJgUpKLeqH",
"BJxd6v70YS",
"rkg7VVswKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750392,
1573853726145,
1573853428921,
1573853028156,
1572002238415,
1571858367733,
1571431466652
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2495/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2495/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2495/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2495/AnonReviewer4"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper pursues an ambitious goal to provide a theoretical analysis HRL in terms of regret bounds. However, the exposition of the ideas has severe clarity issues and the assumptions about HMDPs used are overly simplistic to have an impact in RL research.\\nFinally, there is agreement between the reviewers and AC that the novelty of the proposed ideas is a weak factor and that the paper needs substantial revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the questions and suggestions.\\nWe want to emphasize that our goal is to formalize the deep hierarchical reinforcement learning problem and give a provably efficient algorithm for this setting. The main focus is theoretical, and we do not claim to beat any SOTA algorithm.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the questions and suggestions. We have revised our paper and fixed typos. Please find out responses to your comments below.\\n1. For autonomous driving, if we assume that each road has the same lengths, and our vehicle needs to make a decision after going a certain distance, then indeed the number of decision steps between interactions is fixed. When the lengths of streets are of the same lengths, as long as they are straight like roads in Manhattan, our model is also suitable after slight modification.\\n2. The episodic way given in our model is a different explanation of the hierarchical model in comparison to models like option MDP, where jumps between layers happen when meeting the stopping criterion. Our model is more suitable for a situation like autonomous driving, or some computer games where we have a time limit in each challenge since in these cases, the number of steps in each layer is fixed.\\n3. The \\u201cdeep\\u201d in our model means deep layers of hierarchy, instead of algorithms using deep learning or deep RL.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for these questions and suggestions. We have revised our paper and fixed typos. Please find our responses to your questions below.\\n1. In our model, we assume the transition model shares the hierarchical structure, but the reward can be arbitrary (the reward is defined as $r(s_1, s_2, \\u2026, s_L, a)$, which is a function of states at every level and the action). Hence we have to plan on the product state space and make actions for all states at all levels.\\n2. Here we use indexed Q and V to denote the value functions at different horizons, which are common notations for finite-horizon MDP. As for the notation $h=(h_1, \\u2026, h_L)\\\\in [1, H]$, we actually means $h=h_L+h_{L-1}*H_L+\\u2026+h_1*H_2*H_3*\\u2026*H_L$, which is the lexicographical number of tuple $(h_1, \\u2026, h_L)$. The lexicographical tuple next to $h = (h_1, \\u2026, h_L)$ is $(h_1, \\u2026, h_l+1, 1, 1, \\u2026, 1)$ if $l$ is the largest index such that $h_l < H_l$ (meaning tuple $h=(h_1, \\u2026, h_l, H_{l+1}, ..., H_L)$). Also, in the previous tuple, we use $l=\\\\sigma(h)$ to denote the level where the carry happens. \\n3. In our model, we use \\u201cdeep hierarchical RL\\u201c to denote the model with many layers requiring planning.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the theoretical aspects of HRL. It provides theoretical analysis for the complexity of Deep HRL. The idea is to exploit a given action hierarchy, and known state decomposition, the fact that the high-level state space shares similar low-level structures. The final result is an exponential improvement of HRL to flat RL.\\n\\nOverall, the paper pursues an ambitious goal that analyses the complexity of Deep HRL. The writing is not easy to follow. I some questions and concerns as follows\\n\\n- I wonder why the state space must be defined in a product form? If a standard RL is used, then it could be applied directly to the state space ($S_L$) on that primitive actions operate. Hence L-1 state spaces will be discarded? I don't see why a flat RL must estimate policies for states at all levels. It looks like many later derivations based on the assumption of factored state spaces and factored transitions on different levels. In the case of factored representation, the authors should make clear assumptions and find a better way to describe the overall algorithm.\\n\\n- Section 3.2: the authors use time index for Q and V, does that mean all analysis is for non-stationary MDPs? This is not the assumption in Jaksch et al. (2010) and this paper. The description in this section is very confusing and contains a lot of imprecise definitions\\ne.g. should H = \\\\prod {i=1} H_i?? is h =(h_1,...h_L) not in [1,H]? what is the definition of the immediate next lexicographical tuple? etc. The definition of \\\\sigma is also unclear and hard to understand.\\n\\n- The analysis in Section 4. and Algorithm 1 are not for Deep HRL as said in Abstract and Introduction. The analysis is based on PAC-MDP learning for models at each action level. This paper's contributions might be clearer if the authors made clearer assumptions, e.g. on action hierarchy, abstract state space structures etc..\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new kind of episodic finite MDPs called \\\"deep hierarchical MDP\\\" (hMDP). An L-layer hMDP can be *roughly* thought of as L episodic finite MDPs stacked together. A variant of UCRL2 [JOA10] is proposed to solve these hMDPs and some results from its regret analysis are provided.\", \"pros\": \"1. The essential result (Theorem 4.1) on the regret bound of the proposed algorithm seems correct. I have not checked the proofs in detail but in part because it does not seem surprising and that a precise assessment is hindered by many typos (see Min2 and Con2).\\n\\nCons (in descending order of their weights in my decisions):\\n\\n1. The proposed hMDPs do _not_ seem to capture important features or challenges in hierarchical RL. My understanding is that the transitions in hMDPs work _like_ a clockwork (more on this in Mis6), the algorithm interacts with the sub-MDPs at each layer in turns according to their fixed horizons H_l's. This structure is very rigid temporally and seems to exclude the mentioned example of autonomous driving: the number of decision steps between intersections would be fixed.\\n\\n2. There are many (typographical) errors in both the text and mathematical expressions. Some of them are more severe than others hindering understanding. \\n\\n3. Possible as a consequence of Con2, some quantities defined seem unclear or incorrect at worst. For example, the \\\"standard regret\\\" defined in (2) is an expectation, not a random variable as in convention.\\n\\n4. There are some notable deviations from similar settings in prior works. They might be worthwhile innovations but their significance or motivations is omitted. For example, the rewards in hMDPs are defined as a function of the full state, i.e. in general not decomposable to rewards on the states of each layer, yet the analogy for hMDP is \\\"L levels of episodic MDPs.\\\"\\n\\nA non-exhaustive list of obvious mistakes/typos:\\n1. In the title, \\\"Provably\\\" -> Provable.\\n2. In the abstract, \\\"often both\\\" -> often requires both.\\n3. In Organization, \\\"theoremm\\\" -> theorems.\\n4. In Section 2, \\u201cbetween exploration\\u201d -> between exploration and exploitation.\\n5. Above Section 3, \\\"carried\\\" -> carried out.\\n6. Below (1), \\\"amount reward\\\" -> amount of reward.\\n7. The definition of horizon H is incorrect. Consider H_1 = 2 and H_2 = 3, the algorithm will interact with the sub-MDPs in the following order within one episode: 1, 1, 2, 1, 1, 2, 1, 1, 2. There are 9 steps not 6 = 2 * 3 as defined.\\n8. Section 3.3, \\\"able accumulate\\\" -> able to accumulate.\\n9. Section 3.3, the definition of V_h^\\\\pi, there should be not \\\\max.\\n10. (5), \\\"H\\\" -> H - h.\\n11. Section 6, \\\"tabular R\\\" -> tabular RL.\\n12. In References, \\\"Posterior sampling for reinforcement learning: worst-case regret bounds\\\" -> Optimistic posterior sampling for reinforcement learning: worst-case regret bounds.\\n13. In References, \\\"Temporal abstraction in reinforcement learning\\\" should be cited as a PhD thesis.\\n\\nSome other possible errors/inconsistencies:\\n1. 
Related work listed regret bounds from prior works (the presentation closely mirrors that of [JABJ18]) assume an episodic MDP with non-stationary transitions, i.e. P_t \\u2260 P_{t'} in general. However, in 3.1 the transitions are stationary. Relatedly, regardless of the stationarity of the transitions, there may not be an optimal _stationary_ policy in an episodic MDP contrary to the claim in the paper.\\n2. Indexing seems inconsistent near the top of page 3. The initial state is s_0 but the trajectory starts with s_1. \\n3. Near the top of page 3, V_h^\\\\pi and Q_h^\\\\pi should sum from h'=h, not h'=1. I assume that the authors intend to define h-step values (to appear in the Bellman equations).\\n4. Section 3.3, what are the k's in the equations? \\n5. (6), what is n(k-1, e)?\\n\\nMinor (factored little to none in my decision):\\n1. The claim in Introduction that some games \\\"do not require high-level planning\\\" while others do is highly speculative and vague. Note that any policy can be written a function with codomain in the primitive actions. In fact, many people thought to solve a game like chess or Go requires some temporal hierarchy (opening, mid-game, and end-game).\\n2. The comparison to running UCRL2 on hMDP ignoring the given structure seems weak. Given the knowledge of the particular clockwork-like structure of hMDP at each layer (horizons, states, actions), the natural attempt would be run O(L) copies of UCRL2, one for each sub-MDP (under different terminating states of the immediately lower sub-MDP). Frankly, in my understanding, that seems to be roughly what the authors propose as the solution (thus the results unsurprising). Moreover, it is not immediately clear that UCRL2 can apply to the proposed setting of hMDP without checking regular conditions like communicating (diameter being finite).\\n3. The claim that RL with options \\u201ccan be viewed as a two-layer HRL\\u201d needs much elaboration if not correction. Note that in the former, primitive actions are always taken in the original MDP at consecutive steps. \\n4. There is a limited relevance to deep learning or deep RL central to the themes at ICLR, i.e. the general issue of representation. This work may be more suitable for other general ML venues.\\n\\nSome suggestions\\n\\nI agree with the authors' sentiment that our theoretical understanding of hierarchical RL is relatively limited. I applaud the authors' effort to address this limitation. But judging from this aim of advancing our theoretical understanding, I think the paper may be improved by \\n\\n1. better articulating the motivations for hMDPs (concrete examples would help)\\n\\n2. contextualizing hMDPs with respect to other well-known models such as semi-MDPs (technical and precise comparison would help).\\n\\nTo put it in a different way, it is unclear to the readers why we want to solve this special class of hMDPs and what does hMDPs have to do with the general issues in hierarchical RL. Technically, I feel that assuming episodicity seems against the spirit of hierarchical RL where subtasks are often delimited by their subgoals instead of durations.\\n\\nIn conclusion, I cannot recommend accepting the current article. \\n\\n(To authors and other reviewers) Please do not hesitate to directly point out my misunderstandings if there is any. I am open to acknowledging mistakes and revising my assessment accordingly.\", \"post_rebuttal_update\": \"Thank you for replying to my review and incorporating some of my suggestions into your revision. 
However, I found many concerns (and mistakes) unaddressed, such as Mis7. The use of driving in Manhattan as an example troubles me because even stopping for a traffic light seems to disrupt the fixed temporal hierarchy of decisions. In conclusion, I will maintain my recommendation.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper performs a regret analysis for a new hierarchical reinforcement learning (HRL) algorithm that claims an exponential improvement over applying a naive RL approach to the same problem. The proposed algorithm and the regret analysis performed seem rigorous and well-thought out.\\n\\nHowever, I think that this paper should be rejected because (1) the algorithm does not appear to be a substantial improvement over existing algorithms, (2) the paper makes strong claims about an exponential improvement over standard RL, but doesn't provide a strong benchmark to compare to, and (3) the paper is imprecise and unpolished, with many grammatical errors.\\n\\nI would be open to reconsidering my score if a) the authors submit a revised version with significantly cleaned up text, and b) if the authors could provide more information about how their contribution compares to the existing literature.\\n\\nMain argument \\n\\nThe paper would benefit from establishing stronger context for the central contributions of their paper. For instance, the paper begins by contrasting HRL approaches with a number of standard RL algorithms, saying that approaches such as AlphaGo do not require high-level planning. This seems surprising; many RL researchers would describe MCTS (the base of the AlphaGo algorithm) as performing planning. It would be great if the authors could go into more detail as to what they view as planning, and why AlphaGo does not do so.\\n\\nAdditionally, the main comparison the authors seem to make is between HRL and naive RL, which does not provide sufficient context to properly analyse their algorithm. Many algorithms are better than applying a classical RL algorithm naively. As such, it is not sufficient to show that the algorithm proposed by the authors is stronger than a naive approach; it would be better to compare the algorithm to either a) the state of the art (SOTA) approach, or b) a more credible approach than the naive one. Experimental evidence would help.\\n\\nOne point of comparison is Fruit et al. (2017), which is mentioned as another paper which carries out a regret analysis in a HRL setting. Fruit et al. (2017) contains a number of simple numerical simulations; a similar effort here would help.\\n\\nAnother issue is that the paper is confusing, with systematic grammar errors and typos. The paper would benefit significantly with some copy-editing/proofreading by a native English speaker. For instance, the title should (presumably) read \\\"Provable Benefits of Deep Hierarchical RL.\\\" Such errors appear throughout the paper. Fixing them would make the paper much easier to understand.\\n\\nFinally, although this did not factor into the score I awarded the paper, the terminology used by the authors is confusing, referring to their setting as \\\"Deep Hierarchical Reinforcement Learning.\\\" \\\"Deep Reinforcement Learning\\\" is a widely used term in industry, referring to algorithms that apply Deep Learning to RL problems, such as AlphaGo or DeepStack. I would encourage the authors to use a different term to describe the setting.\", \"questions_to_the_authors\": \"1) In what way is AlphaGo not doing planning? What is an example of an algorithm that does planning in a standard RL setting? e.g. 
what would planning look like in Go?\\n2) Did you run any experiments/simulations of your work? If not, why not? \\n3) Can you elaborate on what a classical RL algorithm would look like that would serve as a proper benchmark to this algorithm?\\n4) In your mind, what is the SOTA algorithm for your setting?\\n5) What are some simple domains that your algorithm would apply to?\\n\\n[0]: Morav\\u010d\\u00edk, Matej & Schmid, Martin & Burch, Neil & Lis\\u00fd, Viliam & Morrill, Dustin & Bard, Nolan & Davis, Trevor & Waugh, Kevin & Johanson, Michael & Bowling, Michael. (2017). DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker. Science. 356. 10.1126/science.aam6960.\"}"
]
} |
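To make the step-indexing dispute in the record above concrete: the authors' response describes mapping a horizon tuple h = (h_1, ..., h_L) to its lexicographic rank, with sigma(h) the level where the "carry" happens when advancing to the next tuple. The sketch below is one reading of that scheme, assuming 1-based indices h_l in {1, ..., H_l}; the -1 offsets are an assumption on my part, since the formula in the response omits them.

```python
# Sketch of hMDP horizon-tuple indexing, as described in the authors' response
# (the exact offsets are an assumption; this is not the paper's code).

def lex_rank(h, H):
    """1-based lexicographic rank of tuple h under per-level horizons H."""
    rank, weight = 1, 1
    for h_l, H_l in zip(reversed(h), reversed(H)):  # least-significant level last
        rank += (h_l - 1) * weight
        weight *= H_l
    return rank

def sigma(h, H):
    """Largest level l (1-based) with h_l < H_l; None if h is the final tuple."""
    for l in range(len(h), 0, -1):
        if h[l - 1] < H[l - 1]:
            return l
    return None

def next_tuple(h, H):
    """Immediate lexicographic successor: bump level sigma(h), reset levels below."""
    l = sigma(h, H)
    if l is None:
        return None
    return tuple(list(h[: l - 1]) + [h[l - 1] + 1] + [1] * (len(h) - l))

# With H = (2, 3): next_tuple((1, 3), H) == (2, 1) and lex_rank((2, 1), H) == 4.
```

Under this reading there are prod_l H_l leaf-level tuples per episode (6 for H = (2, 3)); Review #3's item Mis7 counts layer interactions differently (9 for the same H), which is exactly the ambiguity the reviewer flags.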
ByxJjlHKwr | Learning Latent State Spaces for Planning through Reward Prediction | [
"Aaron Havens",
"Yi Ouyang",
"Prabhat Nagarajan",
"Yasuhiro Fujita"
] | Model-based reinforcement learning methods typically learn models for high-dimensional state spaces by aiming to reconstruct and predict the original observations. However, drawing inspiration from model-free reinforcement learning, we propose learning a latent dynamics model directly from rewards. In this work, we introduce a model-based planning framework which learns a latent reward prediction model and then plans in the latent state space. The latent representation is learned exclusively from multi-step reward prediction, which we show to be the only information necessary for successful planning. With this framework, we are able to benefit from the concise model-free representation, while still enjoying the data efficiency of model-based algorithms. We demonstrate our framework in multi-pendulum and multi-cheetah environments where several pendulums or cheetahs are shown to the agent but only one of them produces rewards. In these environments, it is important for the agent to construct a concise latent representation to filter out irrelevant observations. We find that our method can successfully learn an accurate latent reward prediction model in the presence of irrelevant information, while existing model-based methods fail. Planning in the learned latent state space shows strong performance and high sample efficiency over model-free and model-based baselines. | [
"Deep Reinforcement Learning",
"Representation Learning",
"Model Based Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=ByxJjlHKwr | https://openreview.net/forum?id=ByxJjlHKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5944Z8zbo4",
"SyeB0VlnsS",
"BJx_fxx2jS",
"rJl-kTyhoS",
"rJl62sy2jH",
"SJxkccvy5H",
"HJlQrTscYr",
"BJgKdT6JFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750364,
1573811404961,
1573810192500,
1573809368955,
1573809076576,
1571940999022,
1571630395266,
1570917745272
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2493/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2493/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2493/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2493/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose a model-based RL algorithm, consisting of learning a\\ndeterministic multi-step reward prediction model and a vanilla CEM-based MPC \\nactor. \\nIn contrast to prior work, the model does not attempt to learn from observations \\nnor is a value function learned. \\nThe approach is tested on task from the mujoco control suit. \\n \\nThe paper is below acceptance threshold. \\nIt is a variation on previous work form Hafner et al. \\nFurthermore, I think the approach is fundamentally limited: All the learning \\nderives from the immediate, dense reward signal, whereas the main challenges in RL \\nare found in sparse reward settings that require planning over long horizons, where value \\nfunctions or similar methods to assign credit over long time windows are \\nabsolutely essential.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General comment and updates\", \"comment\": \"We would like to thank the reviewers for their helpful comments and feedback. We were truly appreciative to see that the problem we are addressing is well-received with several comments on how to proceed further in this domain.\\n\\nWe have addressed the general concern of including Deepmdp results in figure 4 (1-cheetah and 5-cheetah), as it is a competitive baseline in this setting and may inform the effect of prediction horizon in our method (Deepmdp's 1-step vs our multi-step method). We have also made a slight improvement to the theoretical performance bound in the theorem statement and appendix. Both of these changes have been updated in the submission.\\n\\nThank you.\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thank you for your careful reviews,\\n\\n\\\" I believe it is not impactful if only looking at the dense reward setting. The type of environment they describe that requires using lossy representations is also likely to only have sparse reward, so to only note in the conclusion as future work is not enough\\\"\\n\\nBy focusing only on rewards, the sparse reward setting is certainly a weakness of our method compared with the state-reconstruction counterparts. Nevertheless, the reward signals collected by our method is exactly the same as any model-free algorithm. This means that our method might be able to tackle sparse reward problems by adopting some exploration ideas from model-free algorithms. Ideally the current method would still work given that the reward prediction horizon is sufficiently long to observe a non-zero reward, or else the representation may just collapse most states to the same latent state. There may be some trade off between state reconstruction and reward prediction necessary for a sense of \\\"observability\\\" in the finite horizon case.\\n\\n\\\" I think it is crucial to include DeepMDP, as it is the one most likely to perform competitively with the proposed method\\\"\\n\\nWe agree that including DeepMDP results for Half-cheetah is important, which has been updated. The additional results show that the DeepMDP performance is half of that of the reward prediction model with 1-cheetach, and the performance gap is further increased to more than 10 times in the 5-cheetach environment.\\n\\n\\\"SAC is off policy, and can therefore be evaluated with random data, rather than being used as an \\\"upper bound baseline\\\"\\n\\nThank you for this comment, will aim to more clearly describe the purpose of each figure. Since table 1 shows SAC being trained on 1e6 on-policy samples and other methods are being trained on only 2e4 off-policy samples, we are almost certain that the result would only be more favorable in comparison to train SAC off-policy. We chose to use SAC as a baseline here only to provide a reasonable reference of what state of the art performance would be on this task, since the multi-pendulum is a somewhat novel environment. \\n\\n\\\"DeepMDP is trained under a random policy, whereas the original paper utilizes samples collected by the policy as its trains\\\"\\n\\nWe acknowledge that in table 1 all algorithms except for SAC were trained offline, however figure 3 compares all algorithms in an on-policy setting. For this reason, we decided to keep Deepmdp consistent with other algorithms in Table 1. In the original DeepMDP paper, the \\u201cdonut world\\u201d experiments were performed similarly under exhaustive sampling of the state-space, not based on a policy.\\n\\n\\\"The final results for SAC also do not match the performance in Figure 3\\\"\\n\\nIn figure 3 i.e. the on-policy setting, SAC was trained with far fewer samples over 600 episodes which is only about 24000 samples compared to 1e6 samples used in table 1.\\n\\n\\\"Final performance should have standard-deviation\\\"\\n\\nThank you for pointing this out, this would make the results more informative . We will update the final plots to display shaded 1-standard deviation regions for all final performances.\\n\\nThank you\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for your careful reviews,\\n\\n\\\"The testing environments contain many distractor pendulums/cheetahs, which makes state reconstruction especially challenging. While this does seem to be the point the authors are trying to show, the environments are an extreme, almost artificial, case of difficult state reconstruction.\\\"\\n\\nIt is true that the experiments are intentionally designed to investigate and emphasize the desirable properties of the reward-prediction method. We agree that a more grounded example such as a vision-based grasping task in a cluttered environment would be very convincing and we should surely show this in the future.\\n\\n\\\"The results on images in the appendix seem to show a delta between the true and predicted reward, suggesting that the proposed method does not yet work on images. Why might this be the case?\\\"\\n\\nThank you for bringing this to attention. We did not sufficiently explain this result in context to the main results. Preliminary results show that the method works for images, but have not been thoroughly benchmarked yet. The figure shows a single open-loop prediction of reward in the pendulum environment from images. The open-loop prediction is not perfect and will accumulate error after about 20-steps, especially for predicting a stabilizing behavior. However, notice that the red line is the true reward and is stabilized under the MPC controller which is replanning based on the true observation at every time step, not open-loop.\\t\\n\\n\\\"From what I can see, the proposed method is very similar to the PlaNet algorithm with state reconstruction loss removed. Given the similarity, PlaNet should be included as a comparison in both the pendulum and cheetah environments. Similarly why was DeepMDP performance not shown in the Cheetah environment?\\\"\\n\\nWe agree that PlaNet has a similar framework for latent state-space learning. However, the latent model of PlaNet consists of both stochastic and deterministic components and a variational objective. These key differences make it difficult to have a fair comparison between PlaNet and the proposed method. The benefit of reward prediction loss over state reconstruction losses can be clearly observed in the experiments for the state model and the reward model.\\nFor DeepMDP, we agree that including DeepMDP results for multi-cheetah is important and we have updated the paper immediately. The additional results show that the DeepMDP performance is half of that of the reward prediction model with 1-cheetah, and the performance gap is further increased to more than 10 times in the 5-cheetah environment.\\n\\n\\\"One of the strengths of model based reinforcement learning is the ability to plan to reach unseen goals with a model trained via self-supervision or different goals. Does the proposed approach lose some of this, by overfitting to only the task reward?\\\" \\n\\nThis is an interesting point and we agree that there is potential work to be done to investigate on some kind of meta-learning task. You are correct that this reward prediction module would be specific to a particular task reward. However, for a similar task like reaching an unseen goal, the encoding and forward dynamics function can certainly be reused in planning while only the reward function requires to be re-learned. We think that it is promising to pose the multi-task setting as a proper meta-learning problem where we learn the representation over a task distribution. 
This time we chose to focus on the single task setting as a proof of concept.\\n\\nThank you.\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your careful reviews,\\n\\n\\\"In this paper, the authors assume deterministic transition and use deterministic function for latent transition. It seems to be the authors want to use MPC, which is a powerful planning algorithm. However, many RL tasks are modeled with stochastic transition. In stochastic transition cases, is the proposed algorithm still valid?\\\"\\n\\nWe agree that, in the future, an explicit representation of uncertainty and stochasticity is necessary for state-of-art application, although most benchmark tasks, including the ones considered in this paper, are purely deterministic. When it comes to stochastic environments, It is possible to extend the latent reward prediction model to include stochastic components. The cross-entropy method (CEM) used for MPC in this paper naturally extends to a stochastic setting.\\n\\n\\\"As shown in Figure 3, even the proposed method shows better performance than SAC in early episode but table 1 says that SAC shows the best convergence results in any number of pendulums except the single pendulum case. It seems to be different results from intuition, because the authors emphasize that the strength of the proposed method is efficiency of learning in RL tasks with irrelevant information. \\\"\\n\\nIn the current state of model-based RL, it is widely observed that model-free algorithms generally perform better in the limit of samples while our goal is to provide a sample-efficient model-based algorithm that scales well to high-dimensional observations with irrelevant information. Table 1 is meant to be an ablation study for the multi-pendulum environment, where SAC as a performance baseline. The other methods in Table 1 consumed only 1/50 of the samples as SAC. in Wang et al., 2019 [https://arxiv.org/abs/1907.02057]\\n\\n\\\"What objective is used to learn the latent model of the state-prediction model algorithm?\\\", \\\"Providing detailed experimental settings\\\"\\n\\nThank you for this clarifying question. Similar to the reward-only model, the state-prediction model has a multi-step mean squared error loss on full-observation prediction as well as reward prediction. We will add this information and formula along with architecture and algorithmic details to the appendix. \\n\\nThank you.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper proposes a novel algorithm for planning on specific domains through latent reward prediction. The proposed model uses an encoder to learn embedding the state to the latent state, a forward dynamics function to learn dynamical system in latent state space, and a reward function to estimate the reward given a latent state and an action. Using these functions, the authors define the objective using the mean-squared error between true and multi-step prediction of rewards. To justify the proposed method, the authors provide a theoretical analysis and experimental results on specific RL domains, multi-pendulum and multi-cheetah, which contain irrelevant aspects of the state.\", \"comments\": [\"This paper is well-written and easy to understand.\", \"In this paper, the authors assume deterministic transition and use deterministic function for latent transition. It seems to be the authors want to use MPC, which is a powerful planning algorithm. However, many RL tasks are modeled with stochastic transition. In stochastic transition cases, is the proposed algorithm still valid?\", \"As shown in Figure 3, even proposed method shows better performance than SAC in early episode but table 1 says that SAC shows the best convergence results in any number of pendulums except the single pendulum case. It seems to be different results from intuition, because the authors emphasize that the strength of the proposed method is efficiency of learning in RL tasks with irrelevant information.\"], \"questions_and_minor_comments\": [\"What objective is used to learn the latent model of the state-prediction model algorithm?\", \"Providing detailed experimental settings, like detailed settings for three deterministic feed-forward neural networks, and results such as consumed CPU time will help the comparison algorithms.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a technique for model based RL/planning with latent dynamics models, which learns the latent model only using reward prediction. This is in contrast to existing work which generally use a combination of reward prediction and state reconstruction to learn the latent model. The paper suggests that by removing the state reconstruction loss, the agent can learn to ignore irrelevant parts of the state, which should enable better performance in settings where state reconstruction is challenging.\\n\\nOverall the motivation for this work is good, and the idea is promising. Difficulty in reconstructing high dimensional states is a challenge for learning latent dynamics models. The paper is also very well written and easy to follow.\\n\\nMy concerns are centered around the experimental evaluation. Specifically, I see the following issues: (1) the experimental environments seem artificial, and hand tailored for this method, (2) given that the proposed method is a minor modification to the PlaNet paper, it seems that PlaNet should be included as a comparison (especially because it has been shown to work on high dimensional states), and (3) the proposed method seems very prone to overfitting to the given task, and there should be an analysis of how the proposed change affects generalization and robustness.\\n\\n(1): The testing environments contain many distractor pendulums/cheetahs, which makes state reconstruction especially challenging. While this does seem to be the point the authors are trying to show, the environments are an extreme, almost artificial, case of difficult state reconstruction. Would the same results hold in more realistic settings, for example, visual robot manipulation in a cluttered scene? Model based RL with video prediction models has been shown to work in such real cluttered robot manipulation environments. Showing that the proposed method can outperform such approaches in robot manipulation settings would be a powerful result. The results on images in the appendix seem to show a delta between the true and predicted reward, suggesting that the proposed method does not yet work on images. Why might this be the case?\\n\\n(2): From what I can see, the proposed method is very similar to the PlaNet algorithm with state reconstruction loss removed. Given the similarity, PlaNet should be included as a comparison in both the pendulum and cheetah environments. Similarly why was DeepMDP performance not shown in the Cheetah environment?\\n\\n(3): One of the strengths of model based reinforcement learning is the ability to plan to reach unseen goals with a model trained via self-supervision or different goals. Does the proposed approach lose some of this, by overfitting to only the task reward? I suspect that in generalizing to unseen tasks, a model trained with state prediction would potentially perform much better. If trained on many tasks, could this method achieve similar generalization? \\n\\nDue to some of these questions which remain unanswered by the experimental evaluation my current rating is Weak Reject. If the authors are able to clarify some of the questions above I may adjust my score.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper claims that one only needs a reward prediction model to learn a good latent representation for model-based reinforcement learning. They introduce a method that learns a latent dynamics model exclusively from multi-step reward prediction, then use MPC to plan directly in the latent space. They claim this is sample efficient in the model-based way, and is more useful than predicting full states. They learn a model that predicts only current and future rewards conditioned on action sequences, and that observation reconstruction is unnecessary to learn a good latent space. They provide planning performance guarantees for approximate latent reward prediction models.\\n\\nI tend to reject this work, because although I support the premise and believe it is very important, and like the style of experiments run with the use of distractors, I believe it is not impactful if only looking at the dense reward setting. The type of environment they describe that requires using lossy representations is also likely to only have sparse reward, so to only note in the conclusion as future work is not enough. The contributions consist only of learning a multi-step reward model for planning, and only provide results in two dense reward environments. In the second experiment with more difficult, high-dimensional observation and action space setting, two of the 3 baselines are left out, namely the state model and DeepMDP. I think it is crucial to include DeepMDP, as it is the one most likely to perform competitively with the proposed method. \\n\\nThe justification for Table 1 vs. Figure 3 are also very unclear, as to why SAC is trained with 10^6 samples while DeepMDP is trained under a random policy, whereas the original paper utilizes samples collected by the policy as its trains. SAC is off policy, and can therefore be evaluated with random data, rather than being used as an \\\"upper bound baseline\\\". The final evaluation performance in dashed line in Figure 3 also doesn't include standard deviation across the 5 seeds, which it should. The final results for SAC also do not match the performance in Figure 3, although it is hard to tell since the final performance in Table 1 is written in terms of number of environment steps while Figure 3 the axis is in terms of episodes. \\n\\nIncluding sparse reward experiments would vastly help support the claims in the paper, as well as including the DeepMDP results for HalfCheetah and additional explanation of the difference in performance of SAC in Table 1 and Figure 3.\"}"
]
} |
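The record above describes the same planning loop from several angles: encode the observation once, roll candidate action sequences forward entirely in latent space with a learned dynamics function and a reward head, and select actions with the cross-entropy method (CEM), replanning at every step. Below is a minimal sketch of that loop under stated assumptions -- `f` (batched latent dynamics) and `r` (batched reward head, returning one scalar per candidate) are hypothetical callables standing in for the learned networks, and the hyperparameters are illustrative, not the authors' settings.

```python
# Sketch of CEM-based MPC through a learned latent reward model
# (illustrative only; `f` and `r` are assumed batched callables).
import numpy as np

def cem_plan(z0, f, r, horizon=12, pop=500, elites=50, iters=5, act_dim=2):
    """Return the first action of the best action sequence found by CEM."""
    mu = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        # Sample candidate action sequences from the current Gaussian.
        acts = mu + std * np.random.randn(pop, horizon, act_dim)
        returns = np.zeros(pop)
        z = np.repeat(z0[None], pop, axis=0)
        for t in range(horizon):            # roll out in latent space only
            returns += r(z, acts[:, t])     # predicted reward, no decoding
            z = f(z, acts[:, t])            # predicted next latent state
        elite = acts[np.argsort(returns)[-elites:]]
        mu = elite.mean(axis=0)             # refit the sampling distribution
        std = elite.std(axis=0) + 1e-3      # small floor keeps exploration alive
    return mu[0]  # MPC: execute the first action, then replan next step
```

Because rewards are predicted directly from the latent state, no observation is ever reconstructed inside the loop -- which is exactly the property the reviews probe with the distractor pendulums and cheetahs.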
BkxA5lBFvH | Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents | [
"Jesse Zhang",
"Brian Cheung",
"Chelsea Finn",
"Dinesh Jayaraman",
"Sergey Levine"
] | We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure? This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation. While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier. These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than naïve adaptation as well as learning from scratch. Building on this intuition, we propose risk-averse domain adaptation (RADA). RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics. Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model. We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes. We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning. | [
"safety",
"risk",
"uncertainty",
"adaptation"
] | Reject | https://openreview.net/pdf?id=BkxA5lBFvH | https://openreview.net/forum?id=BkxA5lBFvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6Xu15W5vMR",
"r1e59N5hjH",
"SJebd45noB",
"SklAMNc3jr",
"BkePIn1g5S",
"B1xrQKZ3Fr",
"HJltqtIeKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750332,
1573852305957,
1573852264914,
1573852181593,
1571974222832,
1571719453429,
1570953617471
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2491/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2491/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2491/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2491/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2491/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2491/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The work this paper presents is interesting, but it is not quite ready yet for publication at ICLR. Specifically, the motivation of particular choices could be better, such as summing over quantiles, as indicated by Reviewer 1. The inherent trade-off between safety and speed of adaptation and how this relates to the proposed method could also use a clearer exposition.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"New robust RL baseline, epistemic uncertainty analysis, effect of finite pretraining environments, and small fixes\", \"comment\": \"Thank you for these reviews. In responding to them, we have been able to significantly improve our submission. Specifically, we have made the following changes to the draft:\\n(i) New Robust RL baseline, as suggested by R3, Robust Adversarial Reinforcement Learning (RARL). This is now included throughout experiments, so that Fig 2, 3, and 4 are all updated. RADA consistently outperforms RARL on all metrics, safety, performance, and learning speed.\\n(ii) At the end of Sec 5, we have now added a section analyzing and showcasing the ability of RADA\\u2019s pretrained dynamics models to represent epistemic uncertainty due to not knowing z, in response to R1's question. Fig 5 provides visualizations.\\n(iii) We have added Appendix B that further analyzes how the dynamics model predictions evolve over pretraining time until it correctly models the epistemic uncertainty as described above in (ii).\\n(iv) We have added an appendix A describing new experiments showing the effect of using only a finite number of pretraining environments, rather than sampling from an infinite set at the beginning of each episode, in response to R1. RADA still works well with only about 10 pretraining environments.\\n(v) We have fixed minor typos and portions of text that reviewers pointed out as being unclear.\\n\\nWe have posted responses to individual reviewers about the points they raised.\"}",
"{\"title\": \"New robust RL baseline, more metalearning experiments, and clarifications\", \"comment\": \"Thank you for your extensive comments.\\n\\n=== \\u201cCompare against robust RL.\\u201d === \\nThank you for this suggestion. We have now included a new baseline: Pinto et al, Robust Adversarial Reinforcement Learning, 2017 (\\u201cRARL\\u201d), per your suggestion. Specifically, we train RARL with an adversarial agent that can perturb the motor torques. We pretrain RARL for about 30x as many episodes as RADA (necessary for the model-free approach RARL employs), and evaluate adaptation to new environments similar to our approach. The results are now included in Fig 2, 3, and 4 in the paper. They show that RARL does indeed induce robustness during policy transfer. However, in our experiments, RARL adapts more slowly, yields worse rewards, and leads to more collisions than RADA. Please see Sec 5 and Figs 2, 3, and 4 for more details.\\n\\n=== \\u201cWhy meta-learning baselines do not work?\\u201d === \\nWe have tried RL^2, MOLe, and GrBAL (a model-based variant of MAML). Unfortunately, none of these methods work well in our setting. In particular, training these methods has proven extremely unstable in our environment. Following the reviewer's suggestions, we have run experiments to analyze why metalearning fails --- our hypothesis was that it failed due to the large range of training environments. In our new expeirments, we decreased the range of pretraining car widths (from 0.05-0.099 in the paper to 0.05-0.06) in an attempt to stabilize metalearning. We tried training RL^2 and GrBAL once more with reasonable hyperparameter search around the authors\\u2019 code defaults. Despite this, neither metalearning approach was able to successfully train. We have added a note on this in the paper.\\n\\n=== \\u201cThe paper claims fast and safe adaptation, but isn\\u2019t fast and safe impossible?\\u201d === \\nThere is indeed a tradeoff between how safe an agent is, and how fast it can hope to adapt. However, while standard RL and meta-RL approaches do not consider safety at all and therefore provide no ability to trade off safety for speed and vice versa, RADA provides an intuitive way to do this by setting a caution parameter (gamma in Eq 2). It then aims to provide pareto-efficient solutions that pay attention to both safety and adaptation speed, with gamma controlling where on the pareto-frontier the solution is.\\n\\n===\\u201cI am confused about the sentence \\u2018Since dynamics models do not need any manually specified reward function during training, the ensemble model can continue to be trained in the same ways as during the pretraining phase.\\u2019 Without reward, what\\u2019s the purpose of RL?\\u201d === \\nWe employ a model-based planning approach to RL, involving two steps: (i) a dynamics model is trained that predicts future states given current states and actions, and (ii) then, an action is selected not through a learned policy, but instead by optimizing for actions that produce the most desirable states as predicted by the learned dynamics model. So, the dynamics model is the only component that is learned, and it is task-agnostic and requires no rewards. This is convenient in our setting, since the dynamics models can be trained even in the unseen test environment, where no rewards are provided. 
RADA exploits this.\\n\\n=== \\u201cWhat is the relationship between the generalized action score and the risk of catastrophic failure?\\u201d === \\nEq 2 defines the generalized action score. This score includes a caution parameter which controls the degree of pessimism with which an action sequence is evaluated during planning with the learned model. For instance, when caution gamma is 50, the generalized action score of an action sequence is the average score of the bottom half of the particles propagated through the model. This would capture any catastrophic failures resulting from those actions, at the cost of ignoring the most successful trajectories that yielded highest reward. As gamma increases, the failures are weighted more relative to the successes. This means that during planning, actions that have even a minor risk of failure are assigned a low generalized action score, and therefore avoided. The generalized action score thus allows control over the degree to which catastrophic failure is avoided.\\n\\n\\u201csum_N\\u201d -> \\u201csum_i\\u201d Thank you, we have fixed this and other minor typos now.\"}",
"{\"title\": \"Epistemic uncertainty analysis, sampling z from a finite set for each pretraining episode, and additional clarifications\", \"comment\": \"Thank you for this thoughtful review.\\n\\n=== \\u201cWhat do the uncertainty estimates really mean? Do they really capture epistemic uncertainty due to not knowing z?\\u201d === \\nThank you. We have now added visualizations in Sec 5 (particularly Fig 5) that show the predicted trajectories from our model for a fixed action sequence, and show how it captures the various possible behaviors among car widths encountered in the training data. This provides an empirical validation that the model is indeed able to capture the uncertainty due to unknown car width z. We also include an Appendix B (Fig 7) showing how the model predictions improve during pretraining time until it converges to approximately correctly model the epistemic uncertainty.\\n\\n=== \\u201cz different for each episode at training?\\u201d ===\\nYes, for the results reported in the paper, we did sample z uniformly at random over the training distribution at the beginning of each episode. We have now added additional results in an appendix showing how RADA performance evolves as a function of the number of available pretraining environments. In particular, we sample a fixed number (2/5/10) of car widths before pretraining and sample uniformly from those during pretraining. Our results indicate that there are significant gains in performance (both reward as well as collision safety) from 2 to 5 to 10. At 10 fixed car widths, results are similar to those originally reported in the paper. \\n\\n=== The review points out that while we propose RADA for safe adaptation to new domains, it still builds on probabilistic models that were learned on training domains, which might perform poorly in unseen domains. === \\nRADA incorporates an inductive bias for \\u201ccaution\\u201d: when dropped into a new environment, a RADA agent starts acting as though the environment is at least as difficult as the most difficult environments it has been trained on. Specifically, it makes the reasonable assumption that actions that rarely caused bad outcomes in training environments are also unlikely to cause bad outcomes in the unseen environments, and selects them. The intuition is that while the new environments are indeed outside the support of what our models could have learned from training environments, they are still within the support of this cautious inductive bias which is built into RADA. Our empirical results establish that RADA does generalize safely to held-out environments.\", \"this_built_in_bias_does_not_come_for_free\": \"in environments that are easier than a RADA agent\\u2019s training environments (such as smaller car widths in our setting), it would be overly cautious during adaptation to a new environment and thus take longer to reach an optimal policy.\\n\\n===\\u201cWhy sum of 0th through kth quantile, rather than just the k^th quantile\\u201d? === \\nGood question, we made this choice so that the RADA would be a strict generalization of the PETS objective: at gamma=0, the RADA objective in Eq 2 exactly matches the PETS objective of Eq 1.\\n\\n=== \\u201cWhy average maximum reward? If the return accurately captures all desiderata, why do we count the number of collisions (failures)?\\u201d ===\", \"we_care_about_two_things\": \"(i) quickly reaching good performance in the target environment, and (ii) safety during the adaptation process. 
To capture these two desiderata, we report both the average max reward (consistent with Chua et al 2018, which we build on), and the cumulative number of collisions during adaptation. In our experimental setting, collisions correspond to catastrophic states that we can't recover from, and which we aim for RADA to learn to avoid. While return and collisions are closely related in our environment, they do not capture exactly the same thing. In particular, collisions lead to low reward, but low reward does not always mean that a collision occurred. Instead, it might also be due to the agent steering left rather than right, or going about in circles, for example. So it is worthwhile to measure collisions separately to evaluate how safe adaptation is.\\n\\n=== \\u201cKeep the model close to the original model. How?\\u201d === \\nYes, we meant that RADA does this by using past experience in training environments during the finetuning stage in the target environment. We have improved the exposition now to clarify this.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to adapt RL agents from some set of training environments (which, in the current instantiation, vary in some simple respect) to a new domain. They build on a framework for model-based RL called PETS.\", \"the_approach_goes_as_follows\": \"2-step process\\n * train probabilistic model-based RL agents in a \\u201cpopulation of source domains\\u201d\\n * dropped into new environment use \\u201cpessimistic exploration policy\\u201d\\n\\nThen at test time, in order to compute estimates for the rewards for each action the authors use a \\u201cparticle propagation\\u201d technique for unrolling through their dynamics model .\\n\\nThe action is chosen by looking at the sum of the 0 through kth percentile rewards. \\nThis is a weird choice. Why are they looking at a sum over quantiles vs a quantile itself?\\n\\nThe claim is that the models from the first stage capture the epistemic uncertainty due to not knowing z.\\nHowever, the authors give a too scant a treatment of what these uncertainty estimates really mean.\\nFor example, they appear to only be valid with respect to an assumed distribution over z.\\nThe paper\\u2019s experiments however focus in large part on what happens when the model is evaluated \\non values of z that were outside the support of the distribution over training domains. \\nIn this case, any benefit appears to be ill explained by the underlying motivation.\\n\\n\\nThe next step here is to finetune the model as data is collected on the new domain.\\n\\nAuthors propose heuristics for this finetuning that include\\n1. Drawing experiences from the past experiences (under different domains) and \\n2. \\u201ckeeping the model close to the original model\\u201d, via some sort of regularization presumably.\\n\\n>>> \\twhy isn\\u2019t the exact nature of how they \\u201ckeep the model near the original model explained in the text?\\n\\tperhaps the authors mean that 1. and 2. are one and the same (1 as means to achieve 2)\\n\\tif this is the case, then the exposition should be improved to make this more clear.\\n\\n\\nSome important details appear to be missing. For example, how many distinct source domains are seen during pretraining? Do they set z different z for every single episode of pretraining? Some language here is unclear, for example what precisely does an \\u201citeration\\u201d mean in the context of the experiments? \\n\\nThe choice to report \\u201caverage maximum reward\\u201d seems strange if what the authors care about is avoiding risk. Can they explain/justify this choice or if not, present a much more comprehensive set of experimental results?\\n\\nThe figures tracking catastrophic failures vs performance resembles those in \\n\\u201cCombating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear\\u201d https://arxiv.org/abs/1611.01211\\nThis raises some question about why they don\\u2019t if concerned with \\u201ccatastrophic events\\u201d model them more explicitly. \\nElse, if the return accurately captures all desiderata, why to we need to count the failures?\\n\\nIn short this is a simpzle empirical paper that makes use of heuristic uncertainty estimates, \\nincluding in settings when the estimates have no validity. 
The writing is reasonably clear\\nand the ideas are straightforward (which is perfectly fine!). A few of the decisions are unnatural,\\na few are ad hoc, and a few details are missing. Overall, my sense is that this paper \\nhas some good qualities, including the clarity of much of the exposition, \\nbut it\\u2019s still below the mark to be an impactful ICLR paper. \\n\\n==========UPDATE=================\\nI read the rebuttal and am glad that the authors took time to read my review and engage with the criticism as well as try to make some small improvements to the paper, especially exploring the impact of the number of training environments on the results (in the original paper the number of environments available at train time was unlimited). The answers to some of the other questions were less convincing. E.g., the justification for the seemingly incoherent objective of summing over the quantiles falls flat. Why should we care more about being a \\\"strict generalization\\\" of some previous algorithm being built upon than about having a coherent objective? Overall, I don't think the paper makes it over the bar for acceptance, but I hope the authors continue to improve upon the work and get it into shape where it could be accepted at another strong conference.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper tries to address the safe adaptation: given a model trained on a variety of past experiences for some task, train a model learning to perform that task in a new situation while avoiding catastrophic failure.\", \"pros\": \"- The idea of training on data from varying quartiles, with the goal of preventing overly-conservative models, is quite intriguing and inspiring.\\n\\nCons & Question:\\n- Motivation: \\nCautious exploration or optimizing the best worst-case performance is conflicting with the philosophy of exploration, such as UCB. As stated in the introduction, \\u201cenables fast yet safe adaptation within only a handful of episodes.\\u201d Intuitively, we can not expect to be safe and fast at the same time. It would be better to discuss why cautious exploration can ensure fast and safe adaption, which would be more interesting. Additionally, in Figure 3, some fast adaption methods, such as MAML, should be compared to be more persuasive.\\n\\n- Method:\\n 1. In equation (1), sum_N \\u2014> sum_i.\\n 2. This work formulated safe adaption as minimizing the risk of catastrophic failure. What\\u2019s the relationship between \\u201cthe generalized action score\\u201d and \\u201crisk of catastrophic failure\\u201d? The \\u201cgeneralized action score\\u201d is the main difference with PETs. However, it is a little bit hard to follow the idea from \\u201crisk of catastrophic failure\\u201d to \\u201cthe generalized action scores\\u201d. \\n 3. \\u201cModel-based RL agents contain dynamics models that can be trained in the absence of any rewards or supervision.\\u201d \\n \\u201dSince dynamics models do not need any manually specified reward function during training, the ensemble model can continue to be trained in the same way as during the pretraining phase.\\u201d I am confused about these sentences. Without the reward, what\\u2019s the purpose of RL? \\n\\n\\n- Experiments:\\n1. As stated in experiments, three meta-learning approaches have been deployed as baselines, including GrBal, RL^2 and MOLe. However, the experimental results are missing. Why meta-learning baselines do not work? Are there any explanations?\\n2. There are many robust RL baselines, such as \\n [1] Pinto L, Davidson J, Sukthankar R, et al. Robust adversarial reinforcement learning[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 2817-2826. \\nIt would be better to compare with robust reinforcement learning work since there are no other baselines apart from meta-learning methods.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the problem of safe adaptation to avoid catastrophic failure in a new environment. It draws intuition from human behavior. The proposed method (risk-averse domain adaptation (RADA)) learns probabilistic model-based RL agents from source domains, and uses them to select actions that has the best worst-case performance in the target domain.\\n\\nThe paper mentions safety-critical applications like auto-driving. However, generally, I don't think black-box models are suitable for these safety-critical applications.\"}"
]
} |
rJl0ceBtDH | Semi-Supervised Boosting via Self Labelling | [
"Akul Goyal",
"Yang Liu"
] | Attention to semi-supervised learning grows in machine learning as the price to expertly label data increases. Like most previous works in the area, we focus on improving an algorithm's ability to discover the inherent property of the entire dataset from a few expertly labelled samples. In this paper we introduce Boosting via Self Labelling (BSL), a solution to semi-supervised boosting when there is only limited access to labelled instances. Our goal is to learn a classifier that is trained on a data set that is generated by combining the generalization of different algorithms which have been trained with a limited amount of supervised training samples. Our method builds upon a combination of several different components. First, an inference aided ensemble algorithm developed on a set of weak classifiers will offer the initial noisy labels. Second, an agreement based estimation approach will return the average error rates of the noisy labels. Third and finally, a noise-resistant boosting algorithm will train over the noisy labels and their error rates to describe the underlying structure as closely as possible. We provide both analytical justifications and experimental results to back the performance of our model. Based on several benchmark datasets, our results demonstrate that BSL is able to outperform state-of-the-art semi-supervised methods consistently, achieving over 90% test accuracy with only 10% of the data being labelled. | [
"semi-supervised learning",
"boosting",
"noise-resistant"
] | Reject | https://openreview.net/pdf?id=rJl0ceBtDH | https://openreview.net/forum?id=rJl0ceBtDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6hMYK3CMlf",
"S1lx1ydnsS",
"SJl5VtlVsS",
"HyguTu9msB",
"SylvK_qmiH",
"HJx8lu9QjS",
"rkluCySRYr",
"HJlClhzCtS",
"BJlqncT3KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750304,
1573842648503,
1573288241703,
1573263552148,
1573263487137,
1573263341990,
1571864527749,
1571855349707,
1571769010041
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2490/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2490/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2490/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2490/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2490/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2490/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2490/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2490/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents a new semi-supervised boosting approach.\\n\\nAs reviewers pointed out and AC acknowledge, the paper is not ready to publish in various aspects: (a) limited novelty/contribution, (b) reproducibility issue and (c) arguable assumptions.\\n\\nHence, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"RE: Author response\", \"comment\": \"I have read the other reviews and authors' responses. They do not change my view of the paper.\"}",
"{\"title\": \"still not reproducible\", \"comment\": \"Thank the authors for clarifying. But even when taking the clarification into account, I'd still be shocked if any educated researcher can reproduce the experiments given the details in the paper/comments. I definitely suggest the authors to put more focus on reproducibility. Thanks.\"}",
"{\"title\": \"Addressing Some Concerns Over Error Rate\", \"comment\": \"The parameters on the model could have been explained better. The 20 experiments was running the experiment 20 times with a random split occurring every time. Variance was something that could have helped add, but is not extremely important.\\n\\nThe point of the estimated error and actual error was that they were very close. You cannot get an exact estimate.\"}",
"{\"title\": \"Clarification of How we conducted the Experiments\", \"comment\": \"Turning the linear regression into binary case was simply putting a limit whether the price was over a certain limit or not.\\n\\nThe parameters for many of the classifiers were based on the original values that were given based on the scikit learn models.\"}",
"{\"title\": \"Clarification of Our Novelty\", \"comment\": \"The novelty in this paper lies in a couple different avenues. Overall this paper is implemented as a framework to help increase the overall prediction accuracy on semi-supervised data. Beyond the framework, the paper produced a novel approach of applying the Natarajan et al. 2013 loss function to a set of supervised learning algorithms under a crowdsourcing environment. The theoretical proof and implementation of this function are shown to perform within the experimental section compared to other semi-supervised algorithms.\\n\\nWe evaluated our paper on the same datasets that were used in the 2013 paper.\\n\\nMultiple works that achieve 99 percent accuracy when all training data known are not something that would be interesting to put in the paper as our semi-supervised methods would not compare very well.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposed a method combining boosting with semi-supervised learning to handle classification problems when only partial data points have labels available. The method first trains a classifier with the true labels, and predicts labels for unlabeled data (with some error rate), then a bias boosting method is applied on the larger dataset to construct the final classifier. I find the topic interesting but I'm concerned about the novelty level of the paper. Here some further comments.\\n\\n1. Theorem 1 seems interesting and it will form a strong result if the assumption \\\\rhp_{+{ = \\\\rho_{-} is removed.\\n\\n2. Lemma 1 \\\"in practice ...\\\", how to balance the dataset to make sure the two class have similar size? The labeled data can be tailored to ensure this, but one cannot make it happen for the unlabeled data.\\n\\n3. In the experiments, UCI datasets seem not comprehensive to demonstrate the advantage of the proposed method. More datasets with higher volume could be better. Also, how is the result compared to the case when all the training data labels are known? What is the gap like?\\n\\n4. Some typos and writing issues, like equation (6) unbalanced brackets.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a new semi-supervised boosting approach. The approach takes a set of supervised learning algorithms to simulate \\\"crowd-source\\\" labels of the unlabeled data, which are then used to generate a noisy label per unlabeled instance. The noise level is then estimated with an agreement-based scheme, and fed to a modified AdaBoost algorithm that is more noise-tolerant given the noise level. Some theoretical guarantee of the modified AdaBoost algorithm is derived and promising experiment results are demonstrated.\\n\\nMy suggestion is to reject the paper, with the following key reasons.\\n\\n(1) Contribution is insufficient, or perhaps not well-highlighted. For the three pieces of contribution, Section 4.1 (self-labeling, which is highlighted within the title) seems to be a trivial borrowing of an existing idea in crowd sourcing from 1979. It is not clear whether Section 4.2 (error estimation) is an original contribution or not, but even if it is original Lemma 1 seems marginally trivial. Section 3 (noise-resistant AdaBoost) plugs a known surrogate loss for noisy labels into AdaBoost. But despite the ugly math, the results seem to be equivalent to a heuristically-shrunk alpha_t for AdaBoost. None of the pieces seem to make a solid contribution to the problem of interest.\\n\\n(2) Assumptions are not reasonable. Section 3 and Sections 4.2 both rely on \\\"homogeneous error rates\\\" which does not seem to be the case when the noise is generated from classifier-target mismatch. In particular, the noisy will only happen in mismatch areas, and not happen in other areas, making it non-homogeneous. The authors did not discuss the rationality of this assumption and/or how it affects the designed approach. In Section 4.2, there is another assumption that \\\"in practice we can balance the dataset\\\", which might be true for the labeled part through sampling, but not necessarily true for the unlabeled part. So it is not clear whether this assumption can be met. Section 4.2 also assumes that \\\"the probability can be estimated through the data\\\" but did not mention how large the data needs to be for an accurate estimation.\\n\\n(3) Experiments cannot be easily replicated. To begin with, the authors claim to use 10 classifiers from scikit-learn as the initial labeler, but the exact 10 (including parameters) are not pinged down. In the data sets, there is a procedure \\\"or turned linear regression datasets into binary labels\\\" that does not seem sufficiently clear for replication. It is not clear whether \\\"feature normalization\\\" considers only the training set or the whole training+test set.\\n\\nHaving said that, there are some other suggestions:\\n\\n(4) Writing needs improvement. Many of the parts contains unnecessarily ugly math notations without motivation. Even the core Section 3 looks like a LaTeX math demo than a clear illustration of scientific ideas.\\n\\n(5) It is not clear what the importance of Theorem 1 is. There doesn't seem to be a guarantee of gamma_t > 0 given the authors' definition of hat{epsilon}_t (worse case error of the two classes), and then the first part of Theorem 1 is not fast decreasing. It is not clear whether the N in the second term is N_noisy. 
In any case, the theorem is not described clearly enough to help understand the contribution of the paper.\\n\\n(6) A baseline that should be considered is to treat the noisy labels as \\\"soft labels\\\" and then apply confidence-based boosting.\\n\\nImproved Boosting Algorithms Using Confidence-rated Predictions, Schapire and Singer 1999.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors present an approach for semi-supervised learning which combines noisy labels with boosting. In a first step, the labeled instances are used to train a set of classifiers, and these are used to create noisy labels for the unlabeled instances. Then, an EM procedure is used to estimate the noise level of each instance. Finally, a version of AdaBoost which accounts for instance noise levels is proposed to create a final classifier. A limited set of experiments suggests the proposed approach is competitive with existing approaches.\\n\\nMajor Comments\\n\\nAs a non-expert in this area, I had trouble identifying the novel contributions of this work. For example, many of the results in Section 3 (noise-resistant AdaBoost) seem to replicate, or follow closely, the results of [Natarajan et al., 2013]. Similarly, using EM to assign pseudo-labels has been extensively studied in the literature [Lee, WREPL 2013; Chapelle and Zien, AISTATS 2005; Kang et al., ECCV 2018; Rottman et al., ICMLA 2018]. \\n\\n\\nThe experiments are very poorly described, so it is difficult to gauge if they are valid:\\n\\nMost importantly, the authors point out that the estimated error rates do not always match the actual error rates. Since this seems to be one of the most important factors of the proposed approach, further investigation should be performed to answer questions like: why is the error rate not estimated well? on what type of datasets? can more/better supervised learners help? In some cases (Diabetes, Thyroid, Heart), the actual noise rate increases with more labeled samples. What does that mean?\\n\\nSecond, the proposed approach seems to have a number of important hyperparameters, including the number of supervised models trained and their hyperparameters, the parameters of the Beta distribution used as a prior on the noise estimation, and the hyperparameters of the AdaBoost algorithm. Likewise, all of the competing algorithms also have hyperparameters which are known to affect performance (e.g., learning rate for NNs). The paper does not mention how (or if) a validation set was used to select these.\\n\\nThird, while the caption of Table 1 mentions that 20 trials were used, it is not clear if this was some sort of k-fold cross validation, Monte Carlo, cross validation, the same splits but with different random seeds, etc. Additionally, the variance across the different trials should be given; otherwise, it is not possible to tell if any of the empirical results are significant.\", \"minor_comments\": \"The references are not consistently formatted.\\n\\nThis paper is very notation heavy. It would be helpful to include a \\u201ctable of symbols\\u201d for the reader in an appendix.\\n\\nAdditionally, the notation in the paper is not consistent. For example, both \\u201c$M$\\u201d and \\u201c$\\\\mathcal{M}$ are used to indicate the number of models trained on the labeled data. Later on, \\u201c$\\\\mathcal{M}$\\u201d is also used to refer to the set of trained classifiers. The first bullet point in Step 3 of the pseudocode seems to suggest that each classifier is trained on a single labeled data point. 
The equation at the bottom of Page 5 uses \\\\theta, but it does not seem to be defined.\\n\\nThe discussion on experts, spammers, and adversaries could be helpful if this terminology were used throughout the paper; however, it is used in only one paragraph. \\n\\nThe main body of the paper should mention that proofs are given in the appendices.\\n\\nFor context, it may be helpful to mention that graph convolutional networks and other representation learning techniques are commonly used for semi-supervised learning (e.g., [Kipf and Welling, ICML 2016]). Those approaches are quite different (and lack any sort of theoretical guarantees, for the most part), though, so empirical comparisons may not be so meaningful.\\n\\nIt would be helpful to give a sentence or two on the intuition behind what the proofs are showing. For a non-expert, they are very difficult to follow.\\n\\nDo the various proofs still hold when the datasets are artificially balanced (with respect to the last paragraph in Section 4)?\\n\\nIt would be helpful to include the performance using the complete labeled dataset for comparison.\\n\\nStratified sampling could be used to ensure both classes are present in the training data. Also, \\u201c0.99%\\u201d -> \\u201c99%\\u201d.\\n\\nBesides accuracy, some measure like AuROC or the F1 score, which accounts for class imbalance, should be given.\\n\\nTypos, etc.\\n\\n\\u201cLogitboost tested against\\u201d -> \\u201cLogitboost were tested against\\u201d\\n\\n\\n\\u201ctherefore is not\\u201d -> \\u201ctherefore, having a lot of labeled data is not\\u201d\"}"
]
} |
BygacxrFwS | Fractional Graph Convolutional Networks (FGCN) for Semi-Supervised Learning | [
"Yuzhou Chen",
"Yulia R. Gel",
"Konstantin Avrachenkov"
] | Due to high utility in many applications, from social networks to blockchain to power grids, deep learning on non-Euclidean objects such as graphs and manifolds continues to gain an ever increasing interest. Most currently available techniques are based on the idea of performing a convolution operation in the spectral domain with a suitably chosen nonlinear trainable filter and then approximating the filter with finite order polynomials. However, such polynomial approximation approaches tend to be both non-robust to changes in the graph structure and to capture primarily the global graph topology. In this paper we propose a new Fractional Generalized Graph Convolutional Networks (FGCN) method for semi-supervised learning, which casts the L\'evy Flights into random walks on graphs and, as a result, allows one to more accurately account for the intrinsic graph topology and to substantially improve classification performance, especially for heterogeneous graphs. | [
"convolutional networks",
"node classification",
"Levy flight",
"graph-based semi-supervised learning",
"local graph topology"
] | Reject | https://openreview.net/pdf?id=BygacxrFwS | https://openreview.net/forum?id=BygacxrFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"arrQRX9dsx",
"SyxdoagL5B",
"ByeG3YPpFB",
"Bke3S1BTYS",
"SyxTSA1JOH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798750276,
1572371871870,
1571809705582,
1571798852142,
1569812037008
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2489/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2489/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2489/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2489/Authors"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a fractional graph convolutional networks for semi-supervised learning, using a classification function repurposed from previous work, as well as parallelization and weighted combinations of pooling function. This leads to good results on several tasks.\\nReviewers had concerns about the part played by each piece, the lack of comparison to recent related work, and asked for better explanation of the rationale of the method and more experimental details. Authors provided explanations and details, and a more thorough set of comparison to other work, showing better performance in some but not all cases.\\nHowever, concerns that the proposed innovations are too incremental remain.\\nTherefore, we cannot recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper provides a new model for semi-supervised node classification in directed and undirected graphs. It is based on a novel fractional filter for graph conv networks, which generalizes several previously employed graph semi-supervised learning frameworks, by introducing a fractional hyperparameter (sigma in the paper), using fractional powers of the Laplace operator.\\n\\nThe relevant previous work in the area seems to be cited, and the paper appropriately embedded in the previous work\\n\\nEmpirically, the method outperforms several established baseline models (classical and neural) on standard datasets in the node classification task. in particular one with a low number of labeled nodes. A sensitivity analysis is performed to assess the impact of the hyperparameters of the FGCN. \\n\\nI believe the experimental results could justify an accept, but I would not claim I am an expert in semi-supervised learning on graphs.\", \"questions\": \"How can the architecture be extended to handle edge types?\\n\\nHow were the hyperparameters of the baselines tuned?\", \"small_things\": \"Could you provide more context around the reliability in parallel systems (eq 11)? It is not clear how this relates to the rest of the paper.\\n\\nPlease add a citation for gated max average pooling.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a fractional graph convolutional networks for semi-supervised learning. The proposed method used a classification function of a fractional graph semi-supervised learning (GSSL) [De Nigris et al., 2017] as a graph filter. In addition, the authors adopt a parallel system and weighted combination of max and average pool. Experimental results show that the proposed method (FGCN) shows the best accuracy compared to other recent graph-based neural networks for all datasets except one.\\n\\nThe key approach of the proposed method is to apply a classification function (equation (3)) obtained by solving a GSSL problem to graph convolutional networks. However, this idea is too incremental and applying the classification function to graph filter is very trivial. This works also combines the fractional GSSL with a parallel system and weighted pool. But, it is not clear which contribution actually improves the results. Moreover, the intuition of the fractional approach is not clear too, e.g., how the optimization (equation (2)) is derived?, and some explanations are unnatural to demonstrate the methodology, e.g., equation (4). For these reasons, this paper is under the bar of acceptance.\", \"main_concerns\": \"1. What is the intuition of the optimization of GSSL? How is it obtained? And among all fractional methods (e.g., SL, NL, and PR), which one doe achieve the best performance?\\n\\n2. The FGS filter in equation (7) is the sum of infinite terms. However, in practical, it is impossible to compute the infinite terms. Does this approximate the sum of finite terms? If does, what is the number of truncation?\\n\\n3. The authors mention that they establish a theoretical guarantee of the parallel system. But, I could not find any theoretical results. It would be better to include the analysis in the paper.\", \"minor_concerns\": \"1. In page 3, please edit \\u201cforulation\\u201d -> \\u201cformulation\\u201d\\n2. In equation (8), I think \\u201cX+\\\\alpha \\\\tilde{L} X\\u201d should change to \\u201c\\\\tilde{L} X\\u201d\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a fractional generalized graph convolutional networks for semi-supervised learning. The authors design a new graph convolutional filter based on Levy Flights, and propose new feature propagation rules on graphs. Experimental results on multiple graph datasets are reported and discussed.\\n\\nPros.\\n1. This paper presents a nice overview of three popular semi-supervised learning methods in Section 3.1, and presents insights regarding these models.\\n2. A new graph convolutional filter is proposed. The motivation is clear, and the technical details are easy to follow.\\n3. Experiments on five benchmark datasets are conducted. Both undirected and directed graphs are used in experiments.\\n\\nCons.\\n1. The proposed method contains three major components: parallel FGS convolution, pooling, and residual block. Although some justifications are provided for such designs, it is difficult to justify the role of each component. In other words, it is unclear whether the performance gain is from the parallel structure, or the residual block. What would be the model performance without parallel structures? Also, given that FGCN has quite a few layers, the motivation of using residual blocks should be carefully justified. \\n2. Many recent methods on graph neural networks are not discussed or included as baselines, such as [a-b].\\n[a] GMNN: Graph Markov Neural Networks, ICML 2019\\n[b] Large-Scale Learnable Graph Convolutional Networks, KDD 2018\\n[c] SPAGAN: Shortest Path Graph Attention Network, IJCAI 2019 \\n\\n\\n-------------------------------------------------\\nThe response from authors addressed many of my concerns. The rating has been updated.\"}",
"{\"comment\": \"There is a typo (method name) in our paper, we correct it - it should be L\\\\'evy Flights rather than L\\\\'evy Fights. Besides, we also correct some other typos in \\\"Training setting details\\\" part. Here is a dropbox link for the corrected version and code: https://www.dropbox.com/sh/ajtz6inf677nkcv/AACXkFRZjRrCkxYkxJDNfks0a?dl=0.\\n\\nThanks!\", \"title\": \"Some Corrections\"}"
]
} |
B1lTqgSFDH | Antifragile and Robust Heteroscedastic Bayesian Optimisation | [
"Ryan Rhys-Griffiths",
"Miguel Garcia-Ortegon",
"Alexander A. Aldrick",
"Alpha A. Lee"
] | Bayesian Optimisation is an important decision-making tool for high-stakes applications in drug discovery and materials design. An oft-overlooked modelling consideration however is the representation of input-dependent or heteroscedastic aleatoric uncertainty. The cost of misrepresenting this uncertainty as being homoscedastic could be high in drug discovery applications where neglecting heteroscedasticity in high throughput virtual screening could lead to a failed drug discovery program. In this paper, we propose a heteroscedastic Bayesian Optimisation scheme which both represents and optimises aleatoric noise in the suggestions. We consider cases such as drug discovery where we would like to minimise or be robust to aleatoric uncertainty but also applications such as materials discovery where it may be beneficial to maximise or be antifragile to aleatoric uncertainty. Our scheme features a heteroscedastic Gaussian Process (GP) as the surrogate model in conjunction with two acquisition heuristics. First, we extend the augmented expected improvement (AEI) heuristic to the heteroscedastic setting and second, we introduce a new acquisition function, aleatoric-penalised expected improvement (ANPEI) based on a simple scalarisation of the performance and noise objective. Both methods are capable of penalising or promoting aleatoric noise in the suggestions and yield improved performance relative to a naive implementation of homoscedastic Bayesian Optimisation on toy problems as well as a real-world optimisation problem. | [
"Bayesian Optimisation",
"Gaussian Processeses",
"Heteroscedasticity"
] | Reject | https://openreview.net/pdf?id=B1lTqgSFDH | https://openreview.net/forum?id=B1lTqgSFDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KO_Iv7zfaH",
"Hkgy21W3ir",
"rJghy_26Kr",
"SJeQfDipYr",
"SylAl7jVKH"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750248,
1573814183498,
1571829732321,
1571825419342,
1571234550026
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2488/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2488/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2488/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2488/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers initially gave scores of 1,1,3 citing primarily weak empirical results and a lack of theoretical justification. The experiments are presented on synthetic examples, which is a great start but the reviewers found that this doesn't give strong enough evidence that the methods developed in the paper would work well in practice. The authors did not submit an author response to the reviewers and as such the scores did not change during discussion. This paper would be significantly strengthened with the addition of experiments on actual problems e.g. related to drug discovery which is the motivation in the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author Response to Reviewers\", \"comment\": \"We thank the reviewers for their helpful comments and feedback. We will endeavour to incorporate the suggestions into a more comprehensive and extended work to be submitted to another venue.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"SUMMARY OF REVIEW\\n\\nThis paper discusses an interesting problem of BO in the cases of robustness and antifragility to aleatoric noise/uncertainty. To tackle this problem, the authors replace the conventional homoscedastic GP model with a heteroscedastic GP model. In the case of robustness to aleatoric noise/uncertainty, the authors have modified EI by simply either scaling down [32] or subtracting from its value more when the aleatoric noise increases. In the case of antifragility to aleatoric noise/uncertainty, they do the opposite.\\n\\nThe modifications of EI to handle robustness and antifragility to aleatoric noise/uncertainty are simple and straightforward, one of which is similar to the augmented EI of [32].\\n\\nThere are some technical ambiguities, as detailed below. In particular, the choice of objective function (equation 9) for this problem needs to be justified and motivated by the practical applications described in Section 1.\\n\\nSince no convergence guarantee is given, a more extensive empirical analysis with real datasets needs to be provided to better understand the performance and behavior of the proposed BO algorithms. In particular, though the authors have motivated their problem using the compelling applications of materials and drug discovery, no experimental result for these applications has been provided to support the motivation of this work.\\n\\n\\n\\nDETAILED COMMENTS\\n\\nThe authors seem to motivate the significance of their problem of interest through the key applications of materials and drug discovery which I can appreciate. Unfortunately, experimental results in such applications were not available in this paper to \\\"close the loop\\\" in supporting the motivation of this work, begging the question whether their proposed BO algorithm indeed works for these key applications. For example, why is the FreeSolv hydration energy dataset not used for your experiments?\\n\\n\\nFor Fig. 1, how exactly do you extract the error magnitudes from the FreeSolv hydration energy dataset? How do you exactly define the notion of calculated vs. experimental uncertainties? Fig. 1 shows that the noise peaks with a relatively high frequency at a single error magnitude value. Would the assumption of homoscedastic noise at this peaked value be detrimental to BO? A sensitivity analysis would be useful here.\\n\\n\\nFor the soil phosphorus fraction dataset (Fig. 2), the skewed distribution of the measurements (few extremely large measurements and many small-valued measurements) may not be due to heteroscedastic aleatoric uncertainty. In fact, in the literature of earth/environmental science, such a dataset is often modeled using a log-Gaussian process (or log-normal kriging), that is, the log-measurements follow a GP:\\n\\nWebster, R., and Oliver, M. 2007. Geostatistics for Environmental Scientists. John Wiley & Sons, 2nd edition.\\n\\nCan the authors provide supporting evidence (in the form of references) that such a dataset is due to heteroscedastic aleatoric uncertainty?\\n\\n\\n\\nOn page 4, step 2 of the most likely heteroscedastic GP algorithm [28] cannot be understood: What is E[x]? Isn't G_1 a GP? Why is it able to accept x_i and D as inputs? 
How is z_i defined?\\n\\nThe authors say that \\\"A note on the form of this variance estimator is give in Appendix B.\\\" There is no Appendix B.\\n\\n\\nCan the authors give a detailed discussion of why the expression f(x) = g(x) + s(x) in equation 9 is the right one to be minimized in practice (e.g., in the context of materials and drug discovery)? For example, do material scientists use such an objective function? Provide references. Furthermore, why is the same equation 9 being minimized for both the cases of robustness and antifragility to aleatoric uncertainty?\\n\\nCaption of Fig. 3: I can't understand the sentence: \\\"The combined objective, which when optimised maximises the sin wave subject to the minimisation of aleatoric noise\\\". This was repeated in Section 5.5: \\\"finds the first maximum as that which minimises aleatoric noise\\\". Isn't the minimum of the aleatoric noise at the origin (Fig. 3b)?\\n\\nCan the authors provide information on how much initial data was provided prior to running BO? How much data is used for learning the GP hyperparameters?\\n\\nThe performance advantage of heteroscedastic ANPEI over the homoscedastic approach does not appear to be significant for the Branin-Hoo function (Figs. 5b and 6b). Can the authors explain this?\\n\\n\\n\\nMinor issues\\n\\nThere are two different font types of x in bold.\", \"page_5\": \"fixed aleaotric\\n\\nWhich phrase is correct? \\\"is obtained by subtracting the noise function from the 1D sinusoid\\\" in the caption of Fig. 3 or \\\"The objective function in all cases is the principal objective g(x) minus one standard deviation of the ground truth noise function s(x)\\\"?\\n\\nFig. 5b: Shouldn't the vertical axis be labeled as ...+ Noise?\\n\\nFig. 6a: Shouldn't the vertical axis be labeled as ...- Noise?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The main contribution of this paper are:\\n\\n1. The use of a heteroscedastic GP when performing Bayesian Optimization, this is in contrast to the more common practice of assuming homoscedastic noise, even when this does not quite fit the data. They use the existing algorithm called most likely heteroscedastic GP, and quote previous work that performed BO using different heteroscedastic GP implementations.\\n2. They introduce two new acquisition functions that incorporate the predicted observation noise, either making candidates more likely or less likely to be chosen when predicted noise is higher, depending on the requirements. This are fairly minor extensions/heuristics if taken on their own, as they do not provide a very strong motivation indicating why these acquisition functions are useful or better than existing ones, other than that they take heteroscedasticity into account.\\n3. They run a set of experiments on the above settings. Unfortunately the experiments are very limited, and their method does not improve on the baselines in a statistically significant way. Two of the experiments are on simple synthetic settings, the third approximates a real world setting, although the approximation is quite rough and they don't convincingly argue for it being realistic, neither do they give convincing motivation of their objective which uses g +- standard deviation.\\n4. They provide source code of their implementation.\\n\\nThe paper is easy to understand, and covers an interesting topic, so while I don't think it meets the bar of ICLR (due to lack of convincing and non-trivial contributions) I think it could perhaps be made into a workshop submission with some of the following changes:\\n* A wider set of experimental settings, and more replication such that any differences become statistically significant. It would also be worthwhile comparing to random search.\\n* A better justification of the objective used in the experiments, using g +- the standard deviation appears fairly arbitrary, and there is no strong enough reason to believe this is a good approximation of what the cost is in the case of real world problems.\\n* Better theoretical justification of the acquisition function; one option is to introduce more principled acquisition function like, say, expected upper/lower bound.\\n\\n\\nOther notes/comments:\", \"abstract\": \"as well as a real-world -> as well as *on* a real-world...\\nsection 1 \\\"As a case study\\\" -> not very clear what this means\\nincumbent best is not well defined, is it the empirical value of f used or the mean predicted f on the evaluated candidates?\\nsection 6.2 \\\"outperforms\\\" -> this is not clear if one looks at the confidence intervals in the results\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper considers the heterogeneous noise in Bayesian optimisation. The paper utilised the existing heterogeneous Gaussian process to model the surrogate function and proposes two acquisition functions from the Expected improvement to deal with such heterogeneous noise.\\n\\nThe proposed acquisition function is heuristic and straightforward from the existing ones, ie. augmented EI and EI. I would not count this as much in terms of novelty.\\n\\nThe idea of dealing with heterogeneous noise in BO is interesting.\\n\\nThe related background in Gaussian process is standard and could be omitted. \\n\\nThe experimental section is weak. The experiments are demonstrated using low dimensional functions (1-2 dim).\\n\\nThe experiment needs to compare with Random baseline. Under the noise setting, the Random approach performs very competitive to BO. \\nThe y-axis in the experiment, the paper has considered the objective function value + noise. The reviewer suspects that the high/low performance is due to the high level of noise (?) rather than the better objective function value.\\nThe released source code includes the Lidar, Scallop, Silverman datasets, but not the PHOSPHORUS soil. I would encourage the authors to check the source code before releasing.\\nThe paper focuses on demonstrating the heteroscedasticity in the surrogate model using [1]. The reviewer is wondering what is the performance for other heteroscedastic GP approaches, such as [2,3,4] ?\", \"minor_points\": \"Page 4, step number 2, what is $z_i$ and how can we get it in a new dataset D\\u2019.\\n\\n\\n[1] Kersting, Kristian, et al. \\\"Most likely heteroscedastic Gaussian process regression.\\\" Proceedings of the 24th international conference on Machine learning. ACM, 2007.\\n[2] Le, Quoc V., Alex J. Smola, and St\\u00e9phane Canu. \\\"Heteroscedastic Gaussian process regression.\\\" Proceedings of the 22nd international conference on Machine learning. ACM, 2005.\\n[3] Binois, M., Gramacy, R. B., & Ludkovski, M. (2018). Practical heteroscedastic gaussian process modeling for large simulation experiments. Journal of Computational and Graphical Statistics, 27(4), 808-821.\\n[4] L\\u00e1zaro-Gredilla, M., & Titsias, M. K. (2011, June). Variational Heteroscedastic Gaussian Process Regression. In ICML (pp. 841-848).\"}"
]
} |
rkx35lHKwB | Generalizing Reinforcement Learning to Unseen Actions | [
"Ayush Jain*",
"Andrew Szot*",
"Jincheng Zhou",
"Joseph J. Lim"
] | A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances. In this work, we address one such setting which requires solving a task with a novel set of actions. Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks. Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning. Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action. We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy. We illustrate the generalizability of the representation learning method and policy, to enable zero-shot generalization to previously unseen actions on challenging sequential decision-making environments. Our results and videos can be found at sites.google.com/view/action-generalization/ | [
"reinforcement learning",
"unsupervised representation learning",
"generalization"
] | Reject | https://openreview.net/pdf?id=rkx35lHKwB | https://openreview.net/forum?id=rkx35lHKwB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CUeWaYfwR",
"Syx57Rknir",
"Bye4_TyniS",
"ByxR76y2jr",
"BJxAN3y3sB",
"SJetIi12jB",
"rkxczGYx9H",
"Hkg_dJ1RFr",
"rJlWHQoTYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750219,
1573809697562,
1573809515705,
1573809445893,
1573809205601,
1573808977388,
1572012562121,
1571839856134,
1571824440547
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2487/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2487/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2487/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2487/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for reinforcement learning with unseen actions. More precisely, the problem setting considers a partitioned action space. The actions available during training (known actions) are a subset of all the actions available during evaluation (known and unknown actions). The method can choose unknown actions during evaluation through an embedding space over the actions, which defines a distance between actions. The action embedding is trained by a hierarchical variational autoencoder. The proposed method and algorithmic variants are applied to several domains in the experiments section.\\n\\nThe reviewers discussed both strengths and weaknesses of the paper. The strengths described by the reviewers include the use of the hierarchical VAE and the explanatory videos. The primary weakness is the absence of sufficient detail when describing the solution. The solution description is not sufficiently clear to understand the details of the regularization metrics. The details of regularization are essential when some actions are never seen in training. The reviewers also mentioned that the experiment analysis would benefit from more care.\\n\\nThis paper is not ready for publication, as the solution methods and experiments are not presented with sufficient detail.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper Revision Summary\", \"comment\": \"We would like to sincerely thank all the reviewers for their constructive comments. We have revised our paper to incorporate them. The updates are summarized as follows:\\n\\n1. [Approach Section] Revised details and definitions on regularization metrics\\nThe revision to method section (Section 3.4) mathematically draws a connection between statistical learning theory and our problem of generalization in RL. For each proposed regularization metric, we have added elaborate explanations, references for design choices, and training objectives. (Reviewer #1, Reviewer #3)\\n\\n2. [Experiments Section] Revised presentation of ablation studies\\n- For easier understanding of comparison with regularization metrics, we have renamed all the compared methods to have complete names with descriptions (Section 5.1). (Reviewer #1, #3)\\n- Figure 3 is revised to have consistent colors for ablations (red shades), embedding-related baselines (blue shades) and policy-related baselines (green shades) - across all environments. (Reviewer #2)\\n\\n3. [Experiments] Evaluation on more seeds, more actions for Recommender environment\\n- All experiments have been evaluated to have a total of 6 seeds and updated in Figure 3. (Reviewer #2)\\n- We increased the recommender system action space size from 1,000 to 10,000 for statistical sufficiency. (Reviewer #3)\\n\\n4. [Appendix] Additional analysis: How slow is fine-tuning compared to zero-shot generalization?\\nOur proposed method generalizes zero-shot to unseen actions by utilizing action datasets, and prevents expensive RL retraining. In contrast, it takes hundreds of thousands of environment steps to fine-tune a standard RL policy to achieve similar performance (Figure 10, Appendix B.3). This further validates improvements due to our method. (Reviewer #1)\\n\\n5. [Appendix] Additional details on: Pseudocode, Performance curves, Embedding modalities\\n- Added pseudocode (Appendix A) summarizing training and testing procedure. (Reviewer #2)\\n- Added performance curves for better analysis of the regularization metrics. (Reviewer #1, Reviewer #3)\\n- Added details on various data modalities (Appendix C) used for embeddings: trajectories of states, images, videos, ground-truth. (Reviewer #2)\"}",
"{\"title\": \"Response to Reviewer #3 (Part 1)\", \"comment\": \"We thank the reviewer for their valuable feedback and time. We have revised the presentation and explanations in the method and experiments section to address all the suggested improvements. We address each concern in detail below:\\n\\n\\n1. Explanation of model novelty.\\nThe novelty of our proposed method includes (a) learning representations of discrete actions from task-agnostic behavioral datasets such as trajectories and videos, (b) training stochastic policies which can extract task-specific information from learned action representations, (c) formalizing generalization over actions in RL and developing regularization techniques to avoid overfitting. We describe these contributions in detail below:\\n\\n(a) Learning action representations from behavioral datasets\\nSince actions can have diverse behaviors which cannot be explained by a single datapoint, we propose to represent actions with datasets of behaviors. We propose using a Hierarchical VAE, and demonstrate how hierarchy in the VAE leads to better representations of action datasets for downstream tasks in RL (Figure 3: comparison with non-hierarchical VAE).\\nAlso, the combination of trajectory autoencoders with Hierarchical VAE (HVAE) is novel for representation learning, to the best of our knowledge. We demonstrate representation learning on high dimensional trajectory datasets like videos (Figure 3: Ours (video) experiments).\\n\\n(b) Training stochastic policies over action representations\\nAs mentioned in related work (Section 2), some prior works have utilized action representations in Q-learning form to learn deterministic policies [1, 2] and some train actor-critic continuous policies to output in the action embedding space and select nearest neighbor action [3, 4]. In contrast, we propose to do this by defining a utility function over each available action, and then train a stochastic policy with policy gradients through a softmax over this utility function (Equation 2 and Section 3.3). In Figure 3, our proposed policy architecture is shown to outperform \\\"Distance based\\\" method (analogous to [3,4]), using the same learned representations, whereas Q-learning methods like [1, 2] do not learn stochastic policies.\\n\\n(c) Formalizing action space generalization in RL\\nThanks to the reviewer\\u2019s comments, we have revised section 3.4 to justify and exposit the novelty of our framework for generalizing over action space. Our core contribution is proposing regularization techniques to enable generalization to unseen actions in RL, by drawing a connection from statistical learning theory. We discuss how iid assumptions necessary for generalization are violated in on-policy RL, and propose 3 ways to make the training data distribution to be more uniform over known actions. Figure 3 validates the experimental contributions of each of these proposed metrics in various environments.\\n\\nWe would like to emphasize that this paper proposes a novel problem of generalization over action space in RL, demonstrates a combination of unsupervised representation learning with reinforcement learning as a downstream task, and provides two new environments, CREATE and Shape-Stacking, for benchmarking performance on unseen action spaces.\\n\\nReferences\\n[1] He, Ji, et al. \\\"Deep reinforcement learning with a natural language action space.\\\" arXiv preprint arXiv:1511.04636 (2015).\\n[2] Tennenholtz, Guy, and Shie Mannor. 
\\\"The Natural Language of Actions.\\\" International Conference on Machine Learning. 2019.\\n[3] Van Hasselt, Hado, and Marco A. Wiering. \\\"Using continuous action spaces to solve discrete problems.\\\" 2009 International Joint Conference on Neural Networks. IEEE, 2009.\\n[4] Dulac-Arnold, Gabriel, et al. \\\"Deep reinforcement learning in large discrete action spaces.\\\" arXiv preprint arXiv:1512.07679 (2015).\"}",
"{\"title\": \"Response to Reviewer #3 (Part 2)\", \"comment\": \"2. Reasoning for proposed method design choices.\\nWe have incorporated the reviewer\\u2019s helpful comments by adding more references and explanations for various design choices in the method section:\\n\\n- Trajectory autoencoders:\\nIn Section 3.2, we follow prior work on trajectory autoencoders [5, 6] for the choice of using Bi-directional LSTM encoder [7] and LSTM decoder. Our work extends hierarchical VAE to trajectory settings, when the action datasets are composed of state trajectories or videos. We have added these references in the paper.\\n\\n- Using distribution mean as representation:\\nIn section 3.2, we have added justification for using the encoder\\u2019s output mean as an action representation by referencing prior work in representation learning like [8, 9]. We further note that the encoder\\u2019s output distribution (mean and standard deviation) can also be used as a representation, as done in [10].\\n\\n- Design choices for enabling generalization:\\nWe have revised section 3.4 to explain the components of regularization in detail, and added suitable references to justify design choices. Specifically, we add details for connection with statistical learning theory, discuss the need for regularization metrics, revise justification and add references for the principle of maximum entropy, and methodically develop each regularization metric.\\n\\n\\n3. More definitions about model.\\nWe have added detailed explanations, definitions and references in our revision to Section 3.4 on enabling generalization in RL. We clearly redefine each term used in the equations, such as optimal action distribution y^* for a given state, loss function L measuring the optimality of a policy. The proposed regularization metrics are defined in mathematical terms, and overall training objective is added to Equation 7. We also added Algorithm pseudocode in Appendix A for summarizing the method (suggested by reviewer 2).\\n\\n\\n4. Statistical sufficiency in action datasets.\\nWe have increased the number of actions in the recommender environment to 10,000, re-ran all experiments for this environment, and updated the results in Figure 3. For reference, sizes of action space in other environments: Grid World has 1024 macro-actions, CREATE has 1,739 tools and Shape-Stacking has 900 shapes. Furthermore, we increased the number of seeds for all experiments to be 6 for more statistically significant results (suggested by reviewer 2).\\n\\n\\n5. Validating action regularization contributions.\\nEach of the proposed regularization metric\\u2019s contribution can be seen in Figure 3 (shades of red are these ablations). In summary, having entropy-regularization and changing-action-space contribute most to the performance while action-clustering can boost performance on challenging environments such as CREATE Navigate and Obstacle.\\n\\nWe thank the reviewer for pointing out the need for further clarity in the action regularization contributions. While the original submission contains ablation studies for each regularization metric, notated with \\\"NE\\\", \\\"FX\\\" and \\\"RS\\\", we have renamed these ablations for better clarity: \\n\\\"Ours: no entropy\\\": without maximum entropy regularization (previously NE). \\n\\\"Ours: no changing\\\": without changing action spaces regularization (previously FX)\\n\\\"Ours: no clustering\\\": without clustering similar actions (previously RS). 
\\nWe have also revised section 5.1 \\\"Baselines and Ablations\\\" and Figure 3 to clearly define each compared method. For further analysis of regularization metrics, we have added Figure 9 in Appendix section B.2 which compares success rate curves for ablations.\\n\\n\\nReferences\\n[5] Wang, Ziyu, et al. \\\"Robust imitation of diverse behaviors.\\\" Advances in Neural Information Processing Systems. 2017.\\n[6] Co-Reyes, John, et al. \\\"Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings.\\\" International Conference on Machine Learning. 2018.\\n[7] Schuster, Mike, and Kuldip K. Paliwal. \\\"Bidirectional recurrent neural networks.\\\" IEEE Transactions on Signal Processing 45.11 (1997): 2673-2681\\n[8] Higgins, Irina, et al. \\\"beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.\\\" ICLR 2.5 (2017): 6.\\n[9] Steenbrugge, Xander, et al. \\\"Improving generalization for abstract reasoning tasks using disentangled feature representations.\\\" arXiv preprint arXiv:1811.04784 (2018).\\n[10] Locatello, Francesco, et al. \\\"Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations.\\\" International Conference on Machine Learning. 2019.\"}",
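Two of the three regularizers discussed above (the maximum-entropy term and the changing-action-spaces trick) are simple enough to sketch in code. The coefficient, the REINFORCE-style estimator, and the subset-sampling scheme below are assumptions for illustration, not the paper's exact objective (Equation 7).

```python
import random

def entropy_regularized_pg_loss(dist, action, advantage, beta=0.01):
    """Policy-gradient loss with a maximum-entropy bonus ('Ours: no
    entropy' ablates this term). dist is a torch Categorical; beta and
    the REINFORCE-style estimator are illustrative assumptions."""
    pg_loss = -dist.log_prob(action) * advantage
    return pg_loss - beta * dist.entropy()

def resample_action_space(known_actions, subset_size):
    """'Changing action spaces' regularizer ('Ours: no changing' ablates
    it): train each episode on a random subset of the known actions so
    the policy cannot overfit to one fixed action set."""
    return random.sample(known_actions, subset_size)
```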
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for their valuable feedback and time. We have updated the style of experiments section to address all the suggested improvements:\\n\\n1. Additional seeds in results.\\nWe have updated all the experimental results displayed in Figure 3, to be evaluated over 6 seeds. In Figure 9 (Appendix B.2) we have also added success rate curves, showing variation across different seeds as training progresses, for our method and ablations in the CREATE environment.\\n\\n2. Definition of \\\"im\\\" and \\\"gt\\\" settings.\\nIn response to the reviewer\\u2019s feedback, we have renamed the \\\"im\\\" setting to \\\"Ours (video)\\\" and \\\"gt\\\" setting to \\\"Ours (ground truth)\\\" for clarity. We have also added a detailed explanation for these settings as well as other baselines and ablations in the revised Section 5.1 \\\"Baselines and Ablations\\\". Description of these settings are:\\n(a) \\\"Ours (video)\\\": This setting is for the CREATE environment, where the action dataset used for learning embeddings is composed of videos (sequence of image-based environment states). By default, the CREATE results on \\\"Ours\\\" are based on environment state trajectories.\\n(b) \\\"Ours (ground truth)\\\": shows the performance of our method with manually engineered action embeddings for CREATE and Grid world environments.\\nWe have added comprehensive details for various alternate embeddings that were tested, under the \\\"Action Dataset\\\" subsections in Appendix C for each environment.\\n\\n3. Recolored Figure 3. \\nWe thank the reviewer for the helpful comments on presentation. We have updated the colors in Figure 3 to be consistent across environments, and also clearly delineate the color codes for our method, ablations (shades of red), embedding-method baselines (shades of blue), policy architecture baselines (shades of green), and alternate kinds of learned embeddings (yellow). \\n\\n4. Algorithm pseudocode.\\nWe have added pseudocode of our training and testing algorithm to appendix section A.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for their valuable feedback and time. We have made several changes to the presentation of the method and experiment section to address the reviewer\\u2019s concerns. We respond to each concern below:\\n\\n1. Details of regularization metrics\\n\\nTo improve the presentation of regularization metrics, we have revised the method section in the paper. Section 3.4 now provides detailed explanations, references for each design choice and methodically develops the regularization metrics.\\n\\nSummary of Section 3.4 revision: Equation 3 draws the connection with statistical learning theory and applies it to Reinforcement Learning (RL). We discuss how the iid assumptions are violated in on-policy RL which can hurt generalization. Equation 4 describes how usual reward maximization objective is analogous to Empirical Risk Minimization. We revise details of why regularization is needed and develop the proposed metrics with clear mathematical definitions and detailed explanations. Equations 5, 6, and 7 walk through how each of the proposed metrics modify the training objective to enable better generalization at test time.\\n\\n\\n2. Analyzing regularization metrics in experiments.\\n\\nThe contribution of each proposed regularization metric can be seen in Figure 3 (shades of red are these ablations). In summary, having entropy-regularization and changing-action-space contribute most to the performance while action-clustering can boost performance on challenging environments such as CREATE Navigate and Obstacle (detailed discussion in Section 5.2)\\n\\nWe thank the reviewer for pointing out the need for further clarity in the experiment section. We note that the original submission contains these ablation studies for each regularization metric, notated with \\\"NE\\\", \\\"FX\\\" and \\\"RS\\\". We have defined these ablations clearly (Section 5.1) and renamed them for better clarity: \\n\\\"Ours: no entropy\\\": without maximum entropy regularization (previously NE). \\n\\\"Ours: no changing\\\": without changing action spaces (previously FX)\\n\\\"Ours: no clustering\\\": without clustering similar actions (previously RS). \\nFor further analysis of regularization metrics, we have added Figure 9 (Appendix B.2) which compares success rate curves for ablations.\\n\\n\\n3. Comparisons against prior work on action representations.\\nTo the best of our knowledge, it is hard to directly apply other prior related works on action representation to the proposed problem of this paper (i.e. generalization to unseen actions).\\n\\nThe followings describe why prior work on action representations is not well-suited (also described in Section 3):\\n- [1] assumes access to action embeddings and proposes training continuous policies whose output vector is used to select the closest action embedding. The \\\"Distance-based\\\" method (Section 5.1 and Figure 3) is analogous to [1], and we tailor it for the problem of generalization by comparing nearest action embeddings only from the set of available actions. Our method far outperforms this baseline by extracting task-specific information from the action embeddings as input. \\n- While [2] deals with the problem of learning action representations, it is not suitable for generalization to unseen actions. This is because the method requires a fixed discrete action space to learn action representations implicitly, and hence cannot accommodate any new actions without a retraining period. 
\\n- While [3] proposes to pre-learn action representations, the method requires task-specific demonstrations, which reflect the co-occurrence of actions while solving certain tasks. This cannot be extended to unseen actions, as it is not reasonable to assume task-specific demonstrations for them.\\n\\nOther baseline methods, not directly associated with any prior work, are described in Section 5.1. Among action representation baselines, we compare against a non-hierarchical VAE model - to assess the importance of hierarchy in learning action representations from datasets. We also compare performance against manually engineered ground truth action representations.\\n\\nAdditionally, following the insights of the reviewer to further validate the improvements due to our method, we performed an additional experiment on fine-tuning a well-trained discrete action policy. We show how long it takes to retrain the policy when the final layer is re-initialized for the unseen action set. The results in Figure 10 (Appendix B.3), show that it can take hundreds of thousands of environment steps to achieve performance similar to our method which directly generalizes zero-shot to any new action set.\\n\\nReferences\\n[1] Dulac-Arnold, Gabriel, et al. \\\"Deep reinforcement learning in large discrete action spaces.\\\" arXiv preprint arXiv:1512.07679(2015).\\n[2] Chandak, Yash, et al. \\\"Learning action representations for reinforcement learning.\\\" arXiv preprint arXiv:1902.00183(2019).\\n[3] Tennenholtz, Guy, and Shie Mannor. \\\"The natural language of actions.\\\" arXiv preprint arXiv:1902.01119 (2019).\"}",
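For concreteness, the "Distance-based" baseline described above (analogous to [1], restricted to the currently available actions) could be sketched as follows; the function name and tensor shapes are assumptions.

```python
import torch

def nearest_available_action(proto_action, available_embs):
    """Distance-based baseline: a continuous policy emits a prototype
    vector in the action-embedding space, and the nearest embedding
    among the *currently available* actions is executed.

    proto_action: (d,) tensor; available_embs: (num_actions, d) tensor.
    """
    dists = torch.cdist(proto_action.unsqueeze(0), available_embs)
    return int(dists.argmin())  # index into the available action set
```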
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper deals with the problem of how to enable the generalization of discrete action policies to solve the task using unseen sets of actions. The authors develop a general understanding of unseen actions from their characteristic information and train a policy to solve the tasks using the general understanding. The challenge is to extract the action's characteristics from a dataset. This paper presents the HVAE to extract these characteristics and formulates the generalization for policy as the risk minimization.\", \"strengths\": \"1. This paper shows us how to represent the characteristics of the action using the a hierarchical VAE.\\n2. From the provided videos, we can directly observe the results of this model applied to different tasks.\", \"weaknesses\": \"1. In the paper, the authors mentioned that they proposed the regularization metrics. However, they didn't describe them in details. It is important to develop the proposed method in theoretical style.\\n2. Analyzing the regularization metrics should be careful in the experiments.\\n3. Since there are many previous works related to action representation, the experiments should contain the comparison with the other method to see how much improvement was obtained.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies the problem of generalization of reinforcement learning policies to unseen spaces of actions. To be specific, the proposed model first extracts actions\\u2019 representations from datasets of unstructured information like images and videos, and then the model trains a RL policy to optimize the objectives based on the learned action representations. Experiments demonstrate the effectiveness of the proposed model against state-of-the-art baselines in four challenging environments. This paper could be improved in the following aspects:\\n1.\\tThe novelty of the proposed model is somewhat incremental, which combines some existing methods, especially the unsupervised learning for action representation part that just combines methods such as VAE, temporal skip connections\\u2026\\n2.\\tSome components of the proposed methods are ad hoc, and are not explained why using this design, such as why Bi-LSTM for encoder and why LSTM for decoder.\\n3.\\tMore definitions about the model should be offered, such as \\u201cy^\\u2217 is some optimal action distribution\\u201d, how to get the optimal action distribution?\\n4.\\tSome datasets are not sufficient enough for sake of statistical sufficiency, such as recommendation data with only 1000 action space.\\n5.\\tThe contributions of action regularizations are not validated on experiment section.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper addresses the very interesting problem of generalising to new actions after only training on a subset of all possible actions. Here, the task falls into different contexts, which are inferred from an associated dataset (like pixel data). Having identified the context, it is used in the policy which therefore has knowledge of which actions are available to it.\\n\\nIn general, this paper is written very well, with some minor suggested tweaks in the following paragraphs. The key strategies used here (HVAE to identify contexts, ERM as the objective, entropy regularisation, etc) all make sense, and are shown to work well in the experiments carried out. \\n\\nWhile the experiments are sufficiently varied, it worries me that only 3 or 2 seeds were used. In some cases, such as NN and VAE in the CREATE experiments show large variances in performance. Perfects a few more seeds would have been nice to see. This is the key reason why I chose a 'Weak Accept' instead of an 'Accept'.\\n\\nSome of the results (the latent spaces) shown in the appendix are very interesting too, particularly since they show how similar actions spaces cluster together in most cases.\", \"minor_issues\": \"1) In Figure 3, I am not clear about what 'im' and 'gt' settings are.\\n2) In Figure 3, it would have been nice to have consistent colors for the different settings.\\n3) It would have been nice to see the pseudocode of the algorithm used.\"}"
]
} |
HkxnclHKDr | Provable Representation Learning for Imitation Learning via Bi-level Optimization | [
"Sanjeev Arora",
"Simon S. Du",
"Sham Kakade",
"Yuping Luo",
"Nikunj Saunshi"
] | A common strategy in modern learning systems is to learn a representation that is useful for many tasks, a.k.a. representation learning. We study this strategy in the imitation learning setting, where multiple expert trajectories are available. We formulate representation learning as a bi-level optimization problem where the "outer" optimization tries to learn the joint representation and the "inner" optimization encodes the imitation learning setup and tries to learn task-specific parameters. We instantiate this framework for the cases where the imitation setting is behavior cloning or learning from observations alone. Theoretically, we show using our framework that representation learning provably reduces the sample complexity of imitation learning in both settings. We also provide proof-of-concept experiments to verify our theoretical findings. | [
"imitation learning",
"representation learning",
"multitask learning",
"theory",
"behavioral cloning",
"imitation from observations alone",
"reinforcement learning"
] | Reject | https://openreview.net/pdf?id=HkxnclHKDr | https://openreview.net/forum?id=HkxnclHKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"yM4Y_BPtYj",
"Hkl3lr2jsH",
"HygXYMhjoS",
"SJl0fuCtor",
"SJl5CNTDiH",
"HkleVc2wjB",
"HygbjvhwoB",
"rJx5HLM-oB",
"r1lKVtETYH",
"BJxHdQmqFS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750189,
1573795060361,
1573794426754,
1573672982397,
1573536977529,
1573534248292,
1573533592764,
1573099074246,
1571797297384,
1571595117424
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2486/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2486/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2486/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2486/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2486/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a methodology for learning a representation given multiple demonstrations, by optimizing the representation as well as the learned policy parameters. The paper includes some theoretical results showing that this is a sensible thing to do, and an empirical evaluation.\\n\\nPost-discussion, the reviewers (and me!) agreed that this is an interesting approach that has a lot of promise. But there was still concern about he empirical evaluation and the writing. Hence I am recommending rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for responding! We have uploaded a revision and re-phrased the problematic wording in the introduction.\\n\\nWe added more description about the representation algorithm we test and the baseline that we compare against in the experiments section. Furthermore, we added implementation details in the appendix.\\n\\nYour suspicion is correct and indeed we see further improvement in DirectedSwimmer by using 16 experts (we updated the plot to include this as well). Note, however, that the main point of this experiment is to show that representation helps learn a good policy *much faster* (i.e. with fewer samples) than learning a policy from scratch (i.e. the baseline). This is true even for 8 experts in the previous plot (before 20K steps). However, it is inevitable for the baseline to eventually perform the best with many more samples, since it is allowed to learn an optimal representation from scratch. We updated the plot in the revision by zooming into the initial stages so that the benefit of representation learning can be seen more clearly in the few samples regime.\"}",
"{\"title\": \"Revision uploaded\", \"comment\": [\"We thank all the reviewers for providing useful feedback and comments! We made the following main changes in our revision\", \"Clarified our precise contribution in the introduction (as pointed out by reviewer #4) and made a more detailed comparison with recents works in the related work section, based on feedback from reviewer #1. The key difference is that *previous work* either showed guarantees for multi-task *supervised learning* or *single task* imitation learning or showed guarantees for gradient based meta-learning methods *only for convex losses*. Our analysis can show guarantees for multi-task imitation learning methods for arbitrary representation function classes that can make the loss non-convex.\", \"Added PAC style sample complexity bounds in Corollaries 5.1 and 6.1 for the simple case of finite representation function class and gave some more intuition about what the theorem statements mean. Hope this clarifies the points raised by both reviewers #1 and #4.\", \"Added more details about the experimental setup, emphasized our algorithm and the baseline method, and included error bars in the plots, as requested by reviewer #4.\", \"Rephrased the discussion about the 3 properties right after informal theorem 3.1. In particular, these properties are not assumptions that we make, but we prove them for our two settings. Hopefully the revision clarifies the confusion that reviewer #2 had.\", \"Fixed typos, notations and formatting issues that were pointed out by reviewers #1 an #4\"]}",
"{\"title\": \"Some more comments/questions based on the authors' response\", \"comment\": \"Thank you for taking the time to respond to my previous comments. Based on your responses, I had some more comments/questions:\\n\\n\\\"First, we would like to emphasize that the main aim of this paper is to demonstrate the statistical advantage of representation learning for imitation learning, in a mathematically rigorous way.\\\" and\\n\\\"The algorithms we run are all natural extensions of representation learning methods for supervised learning, with a few tweaks.\\\" \\nI see. This wasn't clear from the wording used in the paper, e.g., in the Introduction, it is mentioned that \\\"*Furthermore*, the framework allows us to do a rigorous analysis to show provable benefits of representation learning for imitation learning\\\", which indicates that this is an additional contribution. \\nGiven the above statements, I think the wording should be re-phrased in order to draw the line between existing work and this work more clearly. \\n\\nYou also mention that you do not propose any new algorithm. Even if that is the case, it would greatly help a reader if the algorithm used to conduct the experiments were explicitly specified in pseudo-code form or something. Additionally, the baselines used in both tasks should also be explicitly specified to help the reader gauge the benefits of a new/different approach. Because of this (and the points mentioned previously), as of now, it is a little hard to interpret the experimental section and draw concrete conclusions from it, despite the rigorous theoretical work.\", \"an_additional_question_based_on_the_experiments\": \"is there a particular reason why data from more experts was not used in DirectedSwimmer? There is some evidence that the baseline performs better as compared to learning representations using data from <=8 experts in both the domains, but a larger number of experts seems to help in the second domain, NoisyCombinationLock. Did you see a similar trend in DirectedSwimmer as well?\"}",
"{\"title\": \"Response\", \"comment\": \"We thank you for the detailed review. First, we would like to emphasize that the main aim of this paper is to demonstrate the statistical advantage of representation learning for imitation learning, in a mathematically rigorous way. The experiments in our paper, like many machine learning theory papers, are meant as proof-of-concepts and mainly verify the theoretical results.\\n\\nPlease find our responses to your detailed comments below. We will add more clarifications to avoid confusions in our next version soon.\", \"comparing_with_other_baselines\": \"We would like to clarify that we do not propose any new algorithm. The bi-level optimization framework is introduced for problem formulation and ease of analysis. The algorithms we run are all natural extensions of representation learning methods for supervised learning, with a few tweaks. We do not claim that our algorithm is more sample efficient than existing meta learning algorithms or that it beats them.\\n\\n\\u201cA better baseline would be one that learns some representation from the T previous tasks, which would help infer if the proposed method to learn representations is actually more sample efficient on new tasks or not.\\u201d: In fact, the algorithm we test is precisely of this form. The algorithms mentioned in related work section do not learn a fixed representation and hence we do not test these methods. We will clarify this point and add a more detailed comparison to some previous works in our revision soon.\", \"minor_comments\": \"We will add error bars to our plots, add some more intuition for results and fix other formatting issues and typos. Thanks for pointing them out!\"}",
"{\"title\": \"Response\", \"comment\": \"We thank you for the careful and positive review. We will modify the paper according to your comments. Please find our responses to your comments below.\\n\\n1. \\u201cThe paper lacks a good literature review to place this work in the right context.\\u201d We will add more discussions on the literature. Our work bridges the multitask representation learning literature for supervised learning (Maurer et al., 2016) and single task imitation learning methods (Ross and Bagnell, 2010; Sun et al., 2019). The additional factor of H^2 is incurred while connecting the imitation loss function to the total cost in the MDP; this factor of H^2 is common in imitation learning and occurs both in Ross and Bagnell, 2010 and Sun et al., 2019. The bi-level framework is an abstraction of Maurer et al. that lets us go beyond supervised learning losses and can potentially be used for other imitation and reinforcement learning settings. We will make these points clearer in the revision.\\n\\n2. It is straightforward to derive PAC-style sample complexity bounds using the existing bounds in Theorem 4.1 and Theorem 5.1. We will add such a bound in our next version soon. Again the H factor comes from the error propagation in imitation learning.\\n\\n3. We will add more discussions on gradient based meta-learning and bi-level optimization. One key difference from previous theoretical analyses is that they either deal with computational complexity (which we do not), or show sample complexity/regret guarantees for *convex* losses, whereas our analysis can deal with any function class.\\n\\nWe will fix other minor issues accordingly. Thanks for carefully reading the paper and pointing them out!\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for review. The three properties listed on page 3 are just high-level descriptions of the sufficient conditions that enable us to show reduced sample complexity for representation learning. We *do not assume* these properties, but we *prove* that they hold for both the standard settings of behavioral cloning and observations only. Abstracting the proof into these three properties gives us a general recipe to potentially prove such a theorem for other settings as well. We will add clarifications on these properties in the revision to avoid confusions.\\n\\nThe precise assumptions for Theorem 4.1 are stated in Section 4 and assumptions for Theorem 5.1 are stated in Section 5.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Overview:\\n\\nThe paper tackles the representation learning problem where the aim is to learn a generic representation that is useful for a variety of downstream tasks. A two-level optimization framework is proposed: an inner optimization over the specific problem-at-hand, and an outer optimization over other similar problems. The problem is studied in two settings of the imitation learning framework with the additional aim of providing mathematical guarantees in terms of sample efficiency on new tasks. An extensive theoretical analysis is performed, and some preliminary empirical results are presented.\", \"decision\": \"In its current form, the paper should be rejected because (1) the empirical analysis is incomplete \\u2013 the baseline isn't very appropriate, the results are not conclusive, details are scattered or not included, (2) the literature survey does not connect the proposed approach with existing approaches, and does not convince the reader why all the existing approaches have not been compared against empirically, (3) the paper is generally unpolished and needs more work before being considered for acceptance.\", \"details\": \"The paper makes both theoretical and empirical claims. I did not have the time to thoroughly verify the theoretical claims and took them at face value. I consider the theoretical guarantees associated with the proposed approach a welcome and valuable contribution to this field that has recently been relying primarily on limited empirical work to assess any method. \\n\\nThe empirical results presented in the paper do not sufficiently support the claims of sample efficiency. One of the main issues with the empirical analysis is the choice of the baseline, which learns a policy from scratch. This does not help make conclusions about the sample efficiency of the proposed method on new tasks. A better baseline would be one that learns some representation from the T previous tasks, which would help infer if the proposed method to learn representations is actually more sample efficient on new tasks or not. There is also no comparison with existing approaches that are mentioned in the Related Work section. If those aren\\u2019t appropriate baselines for this problem, a small explanation of the reasons why would help readers understand why they haven\\u2019t been compared against. Additionally, an analysis of statistical significance of the results is missing and would significantly help in gauging the efficacy of the proposed approach. \\n\\nThe paper notes that these are some preliminary experiments. 
The completion of the empirical analysis would definitely make a stronger case for this paper to be accepted.\", \"minor_comments_to_improve_the_paper\": [\"Error bars in the plot, specification of number of runs, and other such experimental details would be very helpful in interpreting the results.\", \"It would help a reader if the paper was more self-contained, e.g., if terms like supp(\\\\eta), \\\\bar{s}, \\\\tilde{s} are defined more clearly.\", \"It would also help to say what the proofs intuitively mean, e.g., for a new task drawn from this particular distribution of tasks, the agent would achieve close-to-X performance within Y samples \\u2013 something along those lines.\", \"There are some typos, e.g., 'possibility'->'possibly' on page 1, missing $H$ in specification of MDPs on page 2, 'exiting'->'exciting' on page 8, some latex symbols in Appendix D, etc.\", \"The bibliography has a lot of issues \\u2013 some references are incorrectly parsed (e.g., Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. 03 2017), others are inconsistent (e.g., \\\"In NIPS\\\" and \\\"In Advances in Neural\\u2026\\u201d; the arXiv ones).\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper theoretically explores reinforcement/imitation learning via representation learning. The key theoretical question being investigated is the relationship between representation learning in a multi-task/meta learning setup and its dependence to the sample/task complexity. The paper sets up the problem in bilevel optimization framework, where the inner optimization learns/optimizes task specific losses, while the outer optimization learns the representation used in the inner level tasks. The main takeaway from the two theorems (which are the core contributions of this paper) are that when the number of tasks is higher than the number of samples, representation learning can reduce the sample complexity. The paper explores two scenarios in imitation learning, namely behavioral cloning and when only the states of the experts are available (and not their actions). Some experiments are provided to empirically validate the theory.\", \"pros\": \"1. The paper presents a theoretical investigation into multi-task/meta learning for RL via learning representations. While, the main theoretical contributions are perhaps marginal with regard to prior work, the problem setting (RL) seems novel and the two theorems in this context are interesting.\\n2. The paper is well-written and appears to be very rigorous. I did not check for the correctness of all technical parts. There are several \\\"abuse of notations\\\" in the main text, which sometimes impact the otherwise smooth read of the technical parts. \\n3. A good and concise review of RL concepts is provided.\", \"cons\": \"1. The paper lacks a good literature review to place this work in the right context. For example, while the paper refers to the works of Maurer 2016 and Sun et al. 2019 at several places, it is not formally and clearly mentioned anywhere what are the similarities to these prior works and what are the new contributions. For example, Maurer 2016 proposes a multi-task learning setup using representation learning, and most of Theorem 4.1 in this paper is taken from the results in that paper. While, the current paper uses bilevel optimization setting in an RL context, it is not clear to me if this (bilevel + RL) setting has any significant bearing against the theoretical results furnished by Maurer 2016. For example, the theorems in this paper (as far as I see) show that the bounds are scaled by a constant defined by H^2, the trajectory length? If there is something beyond this, then the paper needs to explicitly point it out. The same comment goes with the results against Sun et al. 2019 in Theorem 5.1.\\n\\n2. Given that the main goal of this paper is to connect sample complexity with representation learning, it is important the paper provide a theorem stating this precisely. Theorems 4.1 and 5.1 provide a general bound, and the sample complexity is being described in the explanations of this theorem, which is very informal. Also, against what is claimed in the abstract, it appears that representation learning helps reduce sample complexity only when the number of tasks are larger, which perhaps needs to be explicitly mentioned. Also, note that there is a bearing of the bound on the trajectory length H (Theorem 4.1, and 5.1). 
Shouldn't this factor be also accounted for when explaining the sample complexity? \\n\\n3. There has been several recent works on model-agnostic meta-learning (that also uses bilevel optimization and implicit gradients), however, older works on meta-learning (only for imitation learning) have been cited. The paper should include more recent works in this area and contrast against their theoretical findings. \\n\\nApart from these, below are some minor comments that could help improve the reading of this paper:\\na. Theorem 3.1 is not really a theorem, since it is very informal. Also, fix the Theorem numbers. \\n\\nb. Bullet 1. after Theorem 3.1, \\\\ell^x(\\\\pi) concentrates to \\\\ell^x(\\\\pu*). Also, the mention about sample complexity here should be backed with some reference/citation.\\n\\nc. Assumption 4.2: The notation \\\\pi_\\\\mu(s)_{\\\\pi*(\\\\mu(s)) is unclear, shouldn't the subscript contain an argmax over the actions for \\\\pi*? \\n\\nd. Theorem 4.1 and 5.1, what does it mean by \\\"probability 1-\\\\delta over the choice of the dataset X\\\" ? Also, \\\\mu^n seems undefined.\\n\\ne. The first two terms in Theorem 4.1 are claimed standard, provide the citations?\\n\\nf. Theorem 4.1, perhaps use some other notation for c, which is defined as the cost/reward in the RL setting.\\n\\nOverall, the paper has some interesting theoretical results, and is mathematically rigorous, however lacks a clear distinction from prior and more recent works in this area.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"(I bid on the paper thinking that bi-level optimisation would play a major role in the paper. Unfortunately, it does not, so my expertise in bi-level optimisation is not much use, I am afraid.)\\n\\nThe authors study applying policies learned on one task to another task, while considering practical finite-sample limitations. They call this the \\\"representatition learning for imitation learning\\\". Unfortunately:\\n\\nThe make extensive assumptions, summarised on page 3, but not formalised as Assumptions 1, 2, 3 anywhere, as far as I can tell. They assume: \\n-- concentration of the loss, \\\"which guarantees within-task sample efficiency\\\" (but does not seem easy to support in practice? actually, one may observe samples from a stochastic process, rather than iid samples with any concentration what-so-ever?), \\n-- that the \\\"loss\\\" they use with is somehow close to the optimum of the expected value function J (for which I again see no justification, empirical or otherwise).\\n\\nThen they reuse results of Maurer et al (2016) and extend them to a case where the actions are not observable. The results are plausible, given the assumptions. Given the assumptions, they also do not seem to be particularly relevant to the practice of RL?\\n\\nThe empirical results involve only benchmarks of the authors own coinage, and hence are hard to evaluate. It seems plausible, again, however, that the approach may work in some cases.\"}"
]
} |
HkxjqxBYDB | Episodic Reinforcement Learning with Associative Memory | [
"Guangxiang Zhu*",
"Zichuan Lin*",
"Guangwen Yang",
"Chongjie Zhang"
] | Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching onto previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and stores the experiences only as unrelated items. To improve the sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning about effective strategies. We build a graph on top of states in memory based on state transitions and develop a reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on a navigation domain and Atari games show our framework achieves significantly higher sample efficiency than state-of-the-art episodic reinforcement learning models. | [
"Deep Reinforcement Learning",
"Episodic Control",
"Episodic Memory",
"Associative Memory",
"Non-Parametric Method",
"Sample Efficiency"
] | Accept (Poster) | https://openreview.net/pdf?id=HkxjqxBYDB | https://openreview.net/forum?id=HkxjqxBYDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"9zYwbCAcfz",
"S1l5LRX2sH",
"S1eHzYEjsB",
"Hyeklt4siB",
"SkxXK_4osH",
"B1eG6-ZzcB",
"rJeP9zTptr",
"HkgePooptS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750158,
1573826129596,
1573763341044,
1573763303184,
1573763195435,
1572110778145,
1571832463484,
1571826519874
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2485/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2485/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2485/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2485/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2485/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2485/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2485/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The submission tackles the problem of data efficiency in RL by building a graph on top of the replay memory and propagate values based on this representation of states and transitions. The method is evaluated on Atari games and is shown to outperform other episodic RL methods.\\n\\nThe reviews were mixed initially but have been brought up by the revisions to the paper and the authors' rebuttal. In particular, there was a concern about theoretical support and the authors added a proof of convergence. They have also added additional experiments and explanations. Given the positive reviews and discussion, the recommendation is to accept this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Overall response\", \"comment\": \"We thank all reviewers for their efforts and thoughtful comments, which are helpful for improving the quality of our paper. In the updated paper, we have revised our manuscript according to their suggestions. Below, we describe in detail how we have modified our paper to address the reviewers\\u2019 feedback.\\n\\n1. We add a theoretical proof (Appendix A) to prove that our graph-based value propagation algorithm can converge to unique optimal value. Our algorithm provides a non-parametric estimation of the optimal Q function based on the graph of all observed transitions. This non-parametric estimation can be used as a lower bound for the parametric Q network and thus help boost up the learning of Q network.\\n\\n2. We clarify our contributions focus on near-deterministic environments that are always taken as a basic assumption in conventional episodic reinforcement learning. We also discuss how stochastic environments affect our algorithm and provide possible ways to extend it to highly stochastic environments. \\n\\n3. We add experiments to verify our superior performance benefits from associative memory rather than representations (e.g., random projection).\\n\\n4. We refine the presentation of our paper as suggested by the reviewers, and add additional descriptions to discuss when our method works and when it doesn't.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the thoughtful comments and suggestions.\\n\\nDQN and the non-parametric estimation seems to use the same data and learn based on same information eventually. However, DQN distills the information much slower than the non-parametric estimation because of the following three reasons. \\n\\n(1) Since DQN uses neural networks with stochastic gradient descent (SGD) optimization, small learning rates are required to stabilize training during the optimization on random minibatchs, and high learning rates can cause catastrophic interference due to the global approximation nature of neural networks [1]. However, an agent with a non-parametric memory can directly latch on successful policies as soon as they are experienced instead of waiting for many steps of optimization. The requirement for slow updating in SGD also accounts for why more aggressive training of DQN does not help. Factually, we have tried many aggressive settings of hyperparameters for DQN, and found that the hyperparameters reported in the original DQN paper are the best (which are also used in our paper as baseline).\\n\\n(2) DQN randomly samples experience tuples from replay buffer to update value function, which neglects the trajectory nature of an agent\\u2019s experience (i.e., one tuple occurs after another and so information of the next state should be quickly propagated into the current state). Instead, we build a graph based on transitions of states and associate related experience trajectories, which enables both intra-episode and inter-episode value propagation (the inter-episode path is like augmentation of counterfactual combinatorial trajectories). When a reward signal is first discovered at one state, the related states can quickly receive this information through the intra-episode or inter-episode non-parametric value propagation. \\n\\n(3) DQN uses random sampling for value-bootstrapping, while we develop an efficient reverse-trajectory propagation strategy to allow rapid value propagation through the graph. Recently, an intra-episode value-bootstrapping has been adopted in RL [2] and demonstrates superior performance of reverse-trajectory propagation compared to random sampling. \\n\\nWe also add experiments to verify our superior performance benefits from associative memory rather than representations (e.g., random projection). As shown in Appendix Figure 5, DQN with only random projections as inputs has a much worse performance than the vanilla DQN. Because random projection is a very simple representation that is only used for dimension reduction and does not contain useful high-level feature or knowledge (e.g., objects and relations). Our future work will focus on incorporating advanced representation learning approaches that can capture useful features into our framework to support more efficient memory retrieval and further boost up the performance.\\n\\nWe have added these results and descriptions in the revised paper. We would like to thank you again for the thoughtful questions and constructive suggestions.\", \"references\": \"[1] McCloskey, M., & Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation (Vol. 24, pp. 109-165). Academic Press.\\n[2] Lee, S. Y., Choi, S., & Chung, S. Y. (2018). Sample-efficient deep reinforcement learning via episodic backward update. arXiv preprint arXiv:1805.12375.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the thoughtful comments and suggestions. In the following, we address the concerns point by point.\\n\\nQ1. First, it lacks theoretical rigor to explain why the proposed propagation mechanism works.\\n\\nA1. We add a theoretical proof in Appendix A that our graph-based value propagation algorithm can converge to unique optimal value in the general RL setting not only the navigation tasks.\\n\\nQ2. This in itself would not be much of an issue if the experiments highlighted advantages and limitations of the proposed method, but that is not the case.\\n\\nA2. We would like to thank you again for the constructive suggestions. We have added more experimental details as your suggested. We add per-tasks comparisons to baseline models (i.e., DQN, A3C, Prioritized DQN, MFEC, NEC, EVA, EMDQN, and ERLAM) in Appendix Table 2. We stress that ERLAM does not perform significantly worse than baselines on StarGunner. Figure 3 shows scores of EMDQN with 40M samples while ERLAM uses only 10M samples. As shown in Figure 4, ERLAM significantly outperforms EMDQN on StarGunner when they both use 10M samples. As mentioned in A1 to Reviewer #1, our algorithm is good at improving the sample-efficiency in near-deterministic environments but may suffer from overestimation in highly stochastic environments, such as Tutankham. In addition, since representations learning is not the focus of this paper, we simply use the naive random projection as the state representations in memory. As discussed in response to Reviewer #3, random projection is only used for dimension reduction and does not contain useful high-level feature or knowledge (e.g., objects and relations). In some games with rare revisited states, there are not enough joint nodes in our graph and thus our algorithm does not perform well, such as FishingDerby and Jamesbond.\\n\\nQ3. Minor comments.\\n\\nA3. We have refined all the language you mentioned and improved the presentation in the revised paper as you suggested.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your thoughtful comments and constructive suggestions. In the following, we address the concerns point by point.\\n\\nQ1. Can ERLAM deal with a stochastic environment? It seems that ERLAM would more likely to over-estimate the values for the state than the existing episodic RL algorithms.\\n\\nA1. Episodic control mainly focuses on improving the sample-efficiency in near-deterministic environments [1,2,3]. In stochastic environments, both previous episodic RL algorithms and ERLAM may suffer from over-estimation during the maximization update of revisited states in memory. However, if one state-action pair has many different subsequent states, we will select the recently sampled one instead of the maximum to build the graph, which amounts to using the expectation of next states from a long term view and contributes to alleviate the over-estimation problem. Thus, ERLAM can work well in near-deterministic environments and our experiments also confirm this. In our experiments, although there are about 34% stochastic states in each game on average, ERLAM significantly outperforms the baselines, which suggests our algorithm is able to handle environments with some randomness. In addition, for a completely stochastic environment, our model can be extended by storing distribution of Q values [4,5] instead of the maximum Q value in the associative memory, which is an important direction for future work. We have added these descriptions in our revised paper and thank you again for the insightful question.\\n\\nQ2. In order to make join points, it should be possible to determine whether two features of states are equal or not. How was this determined? It seems rare to reach the 'exact' same feature in Atari domains (i.e. 4 consecutive frames should be equal).\\n\\nA2. We set a small threshold (i.e., 0.0000001) to determine whether two features of states are equal or not. We use the same threshold in all games. As reported in [1], Atari games have a sizeable percentage of states-action pairs that are exactly the same. For example, there are about 10% repeating states in Frostbite, 60% in Q*bert, 50% in Ms. PAC-MAN, 45% in Space Invaders, and 10% in River Raid. Actually, some states are not exactly the same but extremely similar (below our threshold) so they can also form joint points in our graph. In our experiments, a great number of joint states (about 1k on average) will be updated during each running of our value propagation algorithm.\\n\\nQ3. In Algorithm 2, the pseudo-code is somewhat confusing in that R_t is appended to G before the episode ends.\\n\\nA3. R_t is appended to G computed after each episode ends. We have modified Algorithm 2 in the revised paper as you suggested.\", \"references\": \"[1] Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J. Z., ... & Hassabis, D. (2016). Model-free episodic control. arXiv preprint arXiv:1606.04460.\\n[2] Pritzel, A., Uria, B., Srinivasan, S., Badia, A. P., Vinyals, O., Hassabis, D., ... & Blundell, C. (2017, August). Neural episodic control. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 2827-2836). JMLR. org.\\n[3] Gershman, S. J., & Daw, N. D. (2017). Reinforcement learning and episodic memory in humans and animals: an integrative framework. Annual review of psychology, 68, 101-128.\\n[4] Marc G Bellemare, Will Dabney, and R\\u00e9mi Munos. (2017). A distributional perspective on reinforcement learning. 
In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 449\\u2013458. JMLR. org.\\n[5] Will Dabney, Georg Ostrovski, David Silver, and Remi Munos. (2018) Implicit quantile networks for distributional reinforcement learning. In International Conference on Machine Learning, pages 1104\\u20131113.\"}",
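To make the mechanisms described in the response above concrete — merging near-identical state features into joint nodes via a small threshold, keeping the most recently sampled successor, and propagating values through the resulting graph — here is a minimal, hypothetical Python sketch. All class and method names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch (not the authors' code) of threshold-based state
# merging and graph value propagation, as described in the response above.
EPS = 1e-7  # threshold under which two state features count as equal

class AssociativeGraph:
    def __init__(self):
        self.features = []  # one feature vector per node
        self.edges = []     # per node: {action: (next_node_index, reward)}
        self.values = []    # one value estimate per node

    def find_or_add(self, feature):
        for i, f in enumerate(self.features):
            if np.max(np.abs(f - feature)) < EPS:  # joint node: reuse it
                return i
        self.features.append(feature)
        self.edges.append({})
        self.values.append(0.0)
        return len(self.features) - 1

    def add_transition(self, s, a, r, s_next):
        i, j = self.find_or_add(s), self.find_or_add(s_next)
        self.edges[i][a] = (j, r)  # most recently sampled successor wins

    def propagate(self, gamma=0.99, sweeps=10):
        # Bellman-optimality-like backup over the graph; joint nodes let
        # value flow between trajectories that visited the same state.
        for _ in range(sweeps):
            for i, edges in enumerate(self.edges):
                if edges:
                    self.values[i] = max(r + gamma * self.values[j]
                                         for (j, r) in edges.values())
```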
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper proposes Episode Reinforcement Learning with Associative Memory (ERLAM), which maintains a graph based on the state transitions (i.e. nodes correspond to states, and edges correspond to transitions) and propagates the values through the edges in the graph in the reverse order of each trajectory. The learned associative memory is then used for the regularization loss for training Q-network. Experimental results show that ERLAM significantly improves the sample efficiency in Atari benchmarks.\", \"Overall, the paper is well-motivated and easy to follow. The experimental results demonstrate that the proposed method is promising. For the states that are already visited, instead of simply replacing the value to the better return (i.e. Eq. (1)), ERLAM makes join points to connect different trajectories, which enables further improvement via Bellman optimality-like backup (i.e. Eq. (3)).\", \"Can ERLAM deal with a stochastic environment? It seems that ERLAM would more likely to over-estimate the values for the state than the existing episodic RL algorithms.\", \"In order to make join points, it should be possible to determine whether two features of states are equal or not. How was this determined? It seems rare to reach the 'exact' same feature in Atari domains (i.e. 4 consecutive frames should be equal).\", \"In Algorithm 2, the pseudo-code is somewhat confusing in that R_t is appended to G before the episode ends.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a new method for organizing episodic memory in with deep Q-networks. It organizes the memory as a graph in which nodes are internal representations of observed states and edges link state transitions. Additionally, nodes from different episodes are merged into a single node if they represent the same state, allowing for inter-episode value propagation of stored rewards. The authors experimentally evaluate the method against reasonable baselines and show improved performance in the majority of tasks.\\n\\nWhile the proposed method seemingly leads to better performance, the analysis of the results appears superficial. First, it lacks theoretical rigor to explain why the proposed propagation mechanism works, for instance, a proof of the optimal substructure in Eq. (3) would be helpful. While I see the authors mentioned previous work which do not have optimal substructures and mention this for the case of navigation-like tasks, the matter is not discussed and unclear in other scenarios. \\n\\nThis in itself would not be much of an issue if the experiments highlighted advantages and limitations of the proposed method, but that is not the case. For instance, per-task comparisons to EVA (mentioned in the paper) could indicate how useful inter-episode value propagation was in each task. Table 1 mentions EVA, but only on aggregated results, thus not providing insight into this. Furthermore, the experiments selected to be detailed in Figure 4 are, in my opinion, suboptimal choices, as the worst-performing of the selected tasks is still better than the baselines. It would be more insightful to also compare those whose performance is close to the baselines, such as the Boxing environment, and significantly worse than baselines, such as the StarGunner environment (as shown in Figure 3). With those issues in mind, my conclusion is to recommend to reject the paper in the current state.\", \"minor_comments\": [\"Thoroughly review the writing and grammar, the text in its current state needs significant improvement in this regard\", \"Equation references missing parentheses\", \"Introduction, 1st paragraph, second sentence is incomplete\", \"Algorithm 2: tuples taken from/appended to sets G and D are not consistent (cf. lines 11 and 13)\", \"Figure 3: The large amount of bars in the plot would benefit from horizontal lines across the plot for each tick on the y axis\", \"Table 1: caption before the table; should also state the meaning of numbers (percentage or normalized score, calculated from what?)\", \"Figure 4: labels (all text except plot titles) are impossible to read in print\", \"Section 3 could be placed before Section 2, laying the mathematical framework, and then following the discussion with related work\", \"There is redundant content in Sections 2 and 3\", \"----\", \"I am happy with the authors response and changed the score accordingly\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to combine DQN with a nonparametric estimate of the optimal Q function based on the graph of all observed transitions in the buffer. Specifically, they use the nonparametric estimate as a regularizer in the DQN loss. They show that this regularizer facilitates learning, and compare to other nonparametric approaches. I found the paper easy to read. The ideas are intuitive and seem to work.\\n\\nIt would be great to have more experiments providing insight into when the associative memory estimate works and when it doesn't. Since at the end of the day both DQN and the non-parametric estimate use the same data, there's no fundamental reason why the later should contain more information. Is it possible that more aggressive training of DQN would eliminate the need for the nonparametric estimate? Why would I expect the nonparametric estimate based on random projections to generalize better to new states than DQN? What would be the performance of DQN with only the random projections as inputs? I believe including experiments probing in this direction would make the paper better.\\n\\n-------------------------------------------------------------------------------------------------------------------\\nThanks for your response and the additional experiments. I still find the paper interesting and hence keeping my score as is.\"}"
]
} |
r1xo9grKPr | Flexible and Efficient Long-Range Planning Through Curious Exploration | [
"Aidan Curtis",
"Minjian Xin",
"Kevin Feigelis",
"Dan Yamins"
] | Identifying algorithms that flexibly and efficiently discover temporally-extended multi-phase plans is an essential next step for the advancement of robotics and model-based reinforcement learning. The core problem of long-range planning is finding an efficient way to search through the tree of possible action sequences — which, if left unchecked, grows exponentially with the length of the plan. Existing non-learned planning solutions from the Task and Motion Planning (TAMP) literature rely on the existence of logical descriptions for the effects and preconditions for actions. This constraint allows TAMP methods to efficiently reduce the tree search problem but limits their ability to generalize to unseen and complex physical environments. In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances. However, DRL methods have had trouble dealing with the very sparse reward landscapes inherent to long-range multi-step planning situations. Here, we propose the Curious Sample Planner (CSP), which fuses elements of TAMP and DRL by using a curiosity-guided sampling strategy to learn to efficiently explore the tree of action effects. We show that CSP can efficiently discover interesting and complex temporally-extended plans for solving a wide range of physically realistic 3D tasks. In contrast, standard DRL and random sampling methods often fail to solve these tasks at all or do so only with a huge and highly variable number of training samples. We explore the use of a variety of curiosity metrics with CSP and analyze the types of solutions that CSP discovers. Finally, we show that CSP supports task transfer so that the exploration policies learned during experience with one task can help improve efficiency on related tasks. | [
"Curiosity",
"Planning",
"Reinforcement Learning",
"Robotics",
"Exploration"
] | Reject | https://openreview.net/pdf?id=r1xo9grKPr | https://openreview.net/forum?id=r1xo9grKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sNKUHwtiSw",
"BkeEr613jB",
"rkekcIvosS",
"S1x6Hpeijr",
"Sye5LrlijB",
"S1xCZegiiB",
"SJx0n4p5jr",
"B1lo-SNciS",
"BkxfnUM9iH",
"rJgjoXfqoH",
"Hklo5ff5jB",
"B1gCTrWMiB",
"S1xenrWGoS",
"SkeVXHWfjr",
"rklSCEZMiS",
"rJemwNbGjH",
"BkggoJ_gir",
"ryeLTXvkiS",
"BkxtxbUJqH",
"Syl0y5O6Fr",
"S1gqiLITFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750125,
1573809468458,
1573774982518,
1573748037089,
1573746001806,
1573744645707,
1573733557676,
1573696770944,
1573689002183,
1573688227196,
1573687955035,
1573160390371,
1573160360512,
1573160219553,
1573160140691,
1573160026762,
1573056408187,
1572987837582,
1571934449017,
1571813862274,
1571804834353
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"~Caelan_Reed_Garrett1"
],
[
"ICLR.cc/2020/Conference/Paper2484/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2484/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors consider planning problems with sparse rewards.\\nThey propose an algorithm that performs planning based on an auxiliary reward \\ngiven by a curiosity score. \\nThey test they approach on a range of tasks in simulated robotics environments \\nand compare to model-free baselines. \\n \\nThe reviewers mainly criticize the lack of competitive baselines; it comes as now \\nsurprise that the baselines presented in the paper do not perform well, as they \\nmake use of strictly less information of the problem. \\nThe authors were very active in the rebuttal period, however eventually did not \\nfully manage to address the points raised by the reviewers. \\n \\nAlthough the paper proposes an interesting approach, I think this paper is below \\nacceptance threshold. \\nThe experimental results lack baselines, \\nFurthermore, critical details of the algorithm are missing / hard to find.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"RL solves a different problems; I don't know what a good base line would be, but RL methods are not\", \"comment\": \"Maybe my view on the matter is too much that of an outsider. But still, it appears that dynamic programming (and appropriate approximations from the literature) should be applicable. Maybe the TAMP community has deemed these approaches insufficient, but the targeted community (i.e. not robot motion planning but representation learning) is probably not aware of this\\u2013or at least I and many of my peers are not.\\n\\nInstead, you compare to RL methods which are designed to do different things. More concretely, they are designed to work in situations where the dynamics and cost function are *not* given explicitly. Hence, these baselines appear as straw men.\\n\\nHow does an approach straight from a text book fail in these settings? Where can I learn about these things? If relevant articles are not cited (I might have overlooked them), I have now way of evaluating your claims. Claiming that these results are \\\"well known\\\" is certainly not true for the ICLR community. It is also something that is not part of all recent textbooks. I don't think you can expect all your readers to be aware of this, or you have to target a different community.\"}",
"{\"title\": \"follow up on Dyna\", \"comment\": \"Upon thinking about your suggestion about Dyna further (the co-authors have been going through this extensively over the past few hours), what we\\u2019ve realized is that we have built is, at least arguably, a sort of Dyna-like model, but in which:\\n\\n (a) the data-structure of our replay buffer is a tree, and,\\n\\n (b) priority is generated from the learned curiosity-based score.\\n\\n (c) the dynamics are perfect (for now; obviously we want to change this later)\\n\\nThat is, the baseline you described in your last message may not really be so much an \\u201calternative\\u201d separate baseline, but actually a loose re-description of our approach. Probably whether you\\u2019d consider this to be the case or not depends on what you think is \\u201ccore\\u201d to dyna.\\n\\nAt a higher level, we now realize that one can rephrase our contribution as linking several different research areas together -- that is, TAMP and model-based RL -- by building a model of hybrid kind that shares some features of both. And we\\u2019ve shown how this hybrid approach is very effective for a wide class of hard but very important and unsolved planning problems.\\n\\nWe just hadn\\u2019t been thinking about it exactly in those terms ourselves until now, as we had been more approaching the problem from the point of view of increasing the flexibility of TAMP with learning rather than adding planning to a reinforcement learning problem.\\n\\nDo you agree that this is a fair description?\\n\\n--> If so, maybe what we can do right now easily is add to the paper by explaining this connection clearly. In that case, might it not be fair to change score? And publishing the work close to as-is, since in this case, it\\u2019s not really that an important totally different baseline has not be run, but rather that a connection to the literature needed to be explained?\\n\\n--> If not, it would be super helpful to us for future work if you\\u2019d be able to comment on which specific differences we should be focusing on to test, as distinct baselines and providing citations for where this work has been tested before.\"}",
"{\"title\": \"Useful discussion\", \"comment\": \"I do regret that this came up so late during the reviewing process. However, don't feel discouraged about this. Even if it's too late to address this issue in time for the decision deadline, I think what we have discussed here will be very useful for future submissions. I see the value of your work and I consider it a meaningful contribution, so do continue working on it. Given how receptive you have been about our feedback, I am confident that this work will eventually be published in one of the major conference in our field.\\n\\nIn the meantime, I've updated my official rating, but I still consider that the paper is still not ready for publication.\"}",
"{\"title\": \"reply to about proper baselines\", \"comment\": \"Ach, yeah, you're totally right. In trying to figure out what the proper baselines where, we went through a thought process exactly like this: are standard Deep RL algorithms so doomed that it isn't even a fair comparison? In a way, what we've done with our controls ends up being kinda of a pedagogical exercise making this point.\\n\\nAnd as you say, \\\"Even when using the curiosity score to guide exploration, curiosity would not necessarily guide the algorithm towards states that are closed to the goal.\\\" -- which we've shown a bit with our PPO-RND (and related) baselines. \\n\\nSo agreed, these baselines are doomed for these tasks essentially, but they do represent the recent standards for what one might do with deep RL (especially something like PPO-RND). We probably at least had to make some gesture toward these comparisons. \\n\\nBut a really proper baseline would have to be like what you're suggesting. It's possible that it might not even really be a baseline, but instead a potentially reasonable alternative model. It's probably too late for us to do anything about it now (sorry!) especially since this would probably involve fairly nontrivial new implementations and testing, and implementing something like dyna-Q for our situation isn't exactly an off-the-shelf solution, at least not in the high-dimensional continuous setting. But yeah, we totally agree that's the right direction for exploring strong model comparisons. We kind of wish we had had this in mind 10 days ago... Maybe something like this can be done for a final version of the paper.\"}",
"{\"title\": \"More about the value of gamma and the task specification\", \"comment\": \"I understand that CSP does not require a high value of gamma because it is maximizing the novelty score from the curiosity model. Moreover, since states are sampled from the search tree based on their curiosity score, I can see why using a low value of gamma would not have a big impact on performance.\\n\\nI also see how, given the task specification, the value of gamma would not have a huge effect on the performance of Vanilla A2C and Vanilla PPO. After all, the task consists of solving a very complex problem with no feedback about how well you're performing until the task is completed. In RL this type of tasks would take several episodes of training before an agent achieves a reasonable performance. That's why it seems to me that it is hopeless for Vanilla A2C and PPO to succeed at this task. Even when using the curiosity score to guide exploration, curiosity would not necessarily guide the algorithm towards states that are closed to the goal. Thus, I don't think these are the appropriate baselines to compare against. \\n\\nI think a good baseline to compare against would be a model-based deep RL method with access to the simulator (the perfect model) and a curiosity module to prioritized states sampled from the model. This would be something akin to the Dyna architecture with prioritized sweeping based on the curiosity score of each state, see Sutton & Barto (2018) for more information.\\n\\n=== References === \\nSutton, R. S., Barto, A. G. (2018 ). Reinforcement Learning: An Introduction. The MIT Press.\"}",
"{\"title\": \"Hyperparameter Choice Clarification\", \"comment\": \"You are correct that in most reinforcement learning problems with sparse reward, gamma is usually between 0.9 and 0.99. However, we chose gamma to be zero for a very specific reason.\", \"there_are_essentially_two_cases\": \"those in which we use curiosity and those not (the \\\"vanilla\\\" cases).\\n\\nIn the case with curiosity, although the original goal is very sparse in the state space, the reward given to the reinforcement learning algorithm becomes *not at all* sparse, since it is the loss from the curiosity module. The intrinsic motivation has filled in what *was* a sparse reward into something quite dense. This loss creates a reward gradient in the direction of increasing uncertainty. While we started our experimentation with the default gamma of 0.95, we actually found higher performance in CSP transfer for gamma=0, and no change in single-task performance. \\n\\nIn the cases of Vanilla PPO and A2C, where there is no intrinsic motivation and the reward is truly very sparse, something totally different (and almost opposite!) is going on to make gamma=0 \\\"reasonable\\\". If you think about what gamma actually does, it only matters at all if the RL agent receives *at least one positive sample of achieving reward*. But because of the incredibly low probability of finding a reward at all through non-directly sampling at the beginning of the RL learning process, the vanilla agents *never* even get one instance of reward througout the whole episode. So gamma ends up being totally irrelevant. If we had waited billions and billions of episodes, perhaps for some of the tasks the agent woul have received some reward, and then of course, gamma would have started to matter. But that would have defeated the point of the work to begin with (e.g. *not* needing billions of steps to figure these problems out). This is kind of a restatement of the underlying reason why for multi-step deep reinforcement learning, even when trained on auxiliary inputs, is incapable of solving long-horizon multi-step planning tasks with very sparse goals.\\n\\nSorry, we realize in retrospect that this is fairly subtle and requires explaining. We have added these clarifications to section B.1 of the paper.\\n\\nRegarding your other suggestions, we have added a table in the supplement with the additional experiments, fixed the typo that you found, and restructured the supplement, removing section D and transferring all of the hyperparameters in that section into their respective architecture sections.\\nThe supplement now contains 4 sections which describe curiosity types, architectures, experiment details, and additional baselines. If there\\u2019s anything about the structure that you feel is still confusing, please let us know.\\n\\nThank you again for your helpful feedback.\"}",
"{\"title\": \"Further clarification\", \"comment\": \"Thanks again for all the effort you are putting into addressing our comments. The submission is already a lot stronger than the original one. However, I still have a few more concerns.\\n\\nFirst, could you provide some details about how the hyperparameters were selected? It seems unreasonable to me to use a value of zero for gamma in Vanilla PPO and Vanilla A2C because this would make the algorithms very myopic in an environment with very sparse rewards. Thus, it seems hopeless that either of these two algorithms would succeed in most of this tasks, which explains their performance. If this was the only value of gamma used for Vanilla PPO and Vanilla A2C, then I don't think this is a fair comparison. \\n\\nSecond, could plots and tables be provided for the performance of the other baselines algorithms in appendix E?\\n\\nFinally, the organization in the appendices is slightly odd. I think the information would be easier to follow each method section in the appendix included all the details about the method, i.e., hyperparameters, loss, and architecture. Moreover, Appendix D.1 seems to end mid sentence.\"}",
"{\"title\": \"quick follow up\", \"comment\": \"Thanks for the reply!\\n\\nWe'll work to improve the clarify of the discussion in the paper about the motivation. \\n\\nAlso, if you find anything missing in the supplement that you feel is important, just let us know -- we're happy to add / clarify as needed to make what we've done transparent.\"}",
"{\"title\": \"Decision after reading all the reviews and comment\", \"comment\": \"Thank you for your thorough replies and for addressing our concerns. I am satisfied with the your rebuttal and very please about how receptive you were about all of our feedback. I'd be happy to change my score to an accept as soon as I've verified that the new version of the paper in fact addresses our concerns about reproducibility.\\n\\nI would like to add that it seems that the motivation of the paper was lost in all of us, so this could be an area where the paper could be greatly improved.\"}",
"{\"title\": \"Updated Revision\", \"comment\": \"General comment:\\n*To address issues regarding reproducibility and implementation details, we have provided open-sourced all the code behind our project at the following anonymized Github repo: https://github.com/CuriousSamplePlanner/CuriousSamplePlanner\", \"reviewer_1\": [\"We have addressed questions of the novelty metric used in the generation of L_\\\\phi in the supplement where we review the details and loss function of each curiosity type.\", \"We have addressed questions about the action space in section C.6 of the supplement where we describe the exact composition of the action space used in training the curiosity networks and action selection networks.\", \"We have added details about the exact input and output of each curiosity metric (Forward dynamics, state estimation, random network distillation) in section A of the supplement.\", \"We have added information regarding network structure in section B of the supplement and learning rate/additional hyperparameters in section D of the supplement.\", \"We have addressed the number of perspectives (one) used in calculating the state-estimation curiosity heuristic in section A.1\", \"We have run an additional ablation on the action selection network and reported our results in the main body of the paper. R1 requested that we run an ablation on the feasibility aspect of the action selection networks. We decided to go even further and run an ablation on the entire action selection network. The reason we did this is that in originally designing our architecture, we started without having the action selection network. This seemed like it would be sufficient for solving individual tasks. But we felt that having it would be useful for transfer to new tasks since it would enable task-independent learning. Our new ablation shows that, while the action selection networks can be removed without dramatically disrupting the initial planning process, it is indeed vital for task transfer.\", \"We fixed all five of the mentioned typos in the main body of the paper\", \"Reviewer 2\", \"We followed up with reviewer 2 about the key contribution of our paper: we have built a TAMP-like multi-step planning algorithm that uses deep-learning-based curiosity to enable flexible application, as opposed to having to explicitly specify action effects/preconditions and logical predicates as in traditional TAMP\", \"We addressed our absence of comparisons to task and motion planning algorithms in followup comments with reviewer 2.\", \"Following the advice of reviewer 2, we have added a few citations to the introduction and included some of the mentioned research in the related work section.\", \"Reviewer 3\", \"We have added all needed implementation details, including the hyperparameters used for A2C and PPO implementation in the supplement of the paper.\", \"We have discussed in responses to reviewers 2 and 3 why it wasn\\u2019t obviously feasible to make a direct quantitative comparison to traditional task and motion planning algorithms.\", \"We discussed with reviewer 3 our choice of actor-critic architecture, activation functions, and comparisons.\", \"We performed some additional baseline comparisons with A2C/PPO on the other curiosity metrics, finding similar and expected results (section E of supplement and referenced in the main text).\", \"We discussed macro-action/option discovery and what form this may take in future work.\"]}",
"{\"title\": \"follow up on runtime question\", \"comment\": \"\\\"... the ultimate metric we would like to minimize is runtime; .... How long does planning typically take for the problems you consider?\\\"\\n\\nThe time it takes to solve a problem is dependent on many factors such as simulator speed, hardware choices, and the difficulty of the problems being solved. As such, comparing timing between projects which vary on all three seems rather difficult. CSP solves some of the simpler problems in a matter of minutes and with others it takes much longer. We totally agree that runtime is a useful metric, but really will be hard to compare to other things on this unless we're using similar setups and solving similar problems. Once we start applying CSP in the real world, rather than just in simulation, this becomes an important metric, one we hope to report in future work.\"}",
"{\"title\": \"following up on literature comparisons / suggestions and clarifying our core contribution\", \"comment\": \"Thanks for the constructive comment! See below for inline comments.\\n\\n\\\"There are many planning algorithms for multi-step manipulation problems that do not operate on a discrete abstraction of the domain (what you mean when you say action models with preconditions & effects).\\\"\\n\\nActually there might be a misunderstanding here. There's nothing specific about discrete vs continuous here, and working in a discrete setting is not at all what we mean by issue of \\\"logically specified preconditions and effects\\\". Perhaps you think we are claiming that no continuous action and state space planners exist that solve problems similar to ours. That's not what we are saying. There are obviously many such planners. Instead, what we *are* claiming that while some planners work in continuous action and state spaces, they are made possible and efficient using logical predicates as well as action effects and preconditions that limit the flexibility of the system. Our system forgoes these explicit logical definitions and makes the task possible and efficient via another means, namely curiosity.\\n\\n\\n\\\"The approach of Toussaint, M. et al. is particularly relevant as they also consider tool-use problems. The code from their paper is publicly available: https://github.com/MarcToussaint/18-RSS-PhysicalManipulation.\\\"\\n\\nWe're very familiar with that work. That paper is great and is a nice advice. However, it isn't an example of something that already resolves the main issue CSP solves. Like all TAMP work we're familiar with, the Toussaint approach requires the logical specification of action conditions and effects. Specifically, the authors say that:\\n\\n \\\"[We] restrict the solutions to a sequence of modes; consider these as action primitives and explicitly describe the kinematic and dynamic constraints of such modes. This drastically reduces the frequency of contact switches or kinematic switches to search over, and thereby the depth of the logic search problem. It also introduces a symbolic action level following the standard TAMP approach, but grounds these actions to be modes w.r.t. the fundamental underlying hybrid dynamics of contacts.\\u201d\\n\\nIt's also really easy to see from their code how this restriction arises. For example:\\n\\nAction Effects/Preconditions: https://github.com/MarcToussaint/18-RSS-PhysicalManipulation/blob/master/demo/fol.g#L89\", \"predictions\": \"https://github.com/MarcToussaint/18-RSS-PhysicalManipulation/blob/master/demo/fol.g#L210\\n\\nYou can see how in those places they require the definition of operators effects using standard logical form needed for TAMP. It's exactly this sort of requirement that makes TAMP hard to apply flexibly. Applying the Toussaint algorithm for us would require specific by-hand tuning. That would obviate the whole point of \\\"flexible\\\" planning in the first place.\\n\\nThe key difference with CSP is that unlike TAMP (and the Toussaint work), we *do not* use specially crafted problem-specific constraints to guide our search, but instead reduce the size of the search tree by using novelty as a guiding heuristic. This is way CSP is so much more flexible.\", \"you_also_mention_some_additional_literature\": \"--> Garrett, C. R. et al. 
(2017).\\n\\nThis work falls into the same category as the Toussaint work because it encodes the preconditions and effects of actions into its problem-specific \\u201cconstraint network\\u201d.\\n\\n--> Barry, J et al. (2013) \\n and \\n--> Hauser, K. et al. (2011) \\n\\nThese works are solutions for multi-modal motion planning problems, which is not the problem we are attempting to solve.\\n\\n\\n--> Vega-Brown, W. (2016) \\n\\nThis work *IS* actually trying to remove the explicit logical definition from planning, and they provide code, so we're familiar with it. But it's really for a very different purpose than CSP. First, it operates on motion primitives, so it's more like a motion planning algorithm with a few non-analytic differential constraints. Second, it was tested on a single fairly simple 2D task with very limited action and state spaces, and it took up to 3 hours to solve that task. For these reasons, the approach would very likely fail to solve any of our problems (even the three-stack task). To be fair, of course, the authors of that work didn't claim that their work was a solution to the kind of complex multi-step planning problem we address. \\n\\nGiven the complexity of the algorithm, it would likely be very difficult for us to get a working implementation during this review process. (They provide Python code but only for a very different 2D setting, so adapting it would be a very substantial effort.) However, it might be possible. But also given how unlikely it is to actually even solve the simplest of our multi-step tasks, it doesn't seem obvious this would be effort well spent. \\n\\n==> Question: What do you think?\"}",
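To illustrate the kind of hand-written specification at issue in this exchange, here is a made-up, PDDL-flavored operator encoded as a Python dictionary. The predicate and parameter names are invented for illustration; this is not Toussaint et al.'s actual fol.g syntax (see the links above for the real thing).

```python
# Made-up example of the hand-written logical action schema that standard
# TAMP planners require and that CSP avoids. All names are invented; this
# is not Toussaint et al.'s actual syntax.
GRASP = {
    "name": "grasp",
    "parameters": ["?gripper", "?obj"],
    "preconditions": ["(handFree ?gripper)", "(reachable ?gripper ?obj)"],
    "effects": ["(holding ?gripper ?obj)", "(not (handFree ?gripper))"],
}
```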
"{\"title\": \"Rev 3. follow up on TAMP comparison\", \"comment\": \"Following up in more detail now for this question from Rev3:\\n\\n\\\"... There are no comparisons to any previously proposed Task and Motion Planning Methods. What motivated this decision?\\\"\\n\\nThe answer is essentially the same as we gave to Rev 2, who is actually in a way asking a similar question. The basic answer is TAMP methods basically all requir by-hand specification of the logical definition for action effects and preconditions. This is the essential reason why TAMP is not that flexible: creating such specifications is a rather laborious and situation-specific process. You can think of the main raison d'etre of CSP as to get around this problem, using learning (while still retaining the overall efficiency properties of TAMP-based, rather than Deep-RL based, solutions). \\n\\nThe problem we'd face if we wanted to implement a TAMP comparisons is: how would we do so without having to make situation-specific logical action effect and precondition specifications? If we knew how to do that within the traditional TAMP framework, of course we'd do that as an important baseline -- but then that would have solved the problem that CSP is for in the first place! \\n\\nIn fact, this problem is essentially the same reason why people in the TAMP community itself don't really do quantitative comparisons between systems in their papers. This is kinda strange sounding for AI/ML community, but is (unfortunately) natural once you realize what the flexibility limitations of TAMP actually are. \\n\\nAs we were learning about this field, we found a good youtube video we found which contextualizes this issue: https://youtu.be/wRZ2yqRrPiY?t=4342\"}",
"{\"title\": \"Initial response to Reviewer 3\", \"comment\": \"Hi Reviewer 3!\\n\\nThanks also for the constructive comments. See below for inline responses to each issue / question. \\n\\n\\\"The paper represents a significant contribution ...\\\"\\n\\nThanks!\\n\\n\\n\\\"However, I have concerns about the reproducibility of the results ... but I am willing to increase my score if my comments are properly addressed.\\\"\\n\\nGot it. We weren't that sure how much detail to put into the paper, but we see this is a real area for improvement. (Rev 1 also is concerned about this issue.) As we explained to Rev. 1, our plan for addressing this is as follows: \\n\\n\\t1. Revise the main text of the paper to clarify several main important issues.\\n\\t2. Create a detailed additional supplementary document. \\n\\t3. Post an anonymous public github repo to which we will commit all the project code.\\n\\t\\nOur goal is to have this ready for your review by 11/11 or 11/12. \\n\\n==> Question: does this plan work for you? Other suggestions welcomed!\\n\\n\\n\\\"... the paper should include [LIST OF SPECIFICS]\\\"\\n\\nHappy to add these things. Some we'll put into the main paper and some into the supplement. When done we'll write a comment pointing you to the edits. \\n\\n\\n\\\"... There are no comparisons to any previously proposed Task and Motion Planning Methods. What motivated this decision?\\\"\\n\\nGreat question. Will address in a separate post due to character limitation. \\n\\n\\n\\\"... An alternative to using separate networks for the policy and the value function, it could be possible to use a two-headed network with one head for the policy and another one for the value. What was the reason for using two separate networks over this alternative?\\\"\\n\\nThat is indeed an alternative to using separate networks for the policy and value function. The short answer is that we adopted our architecture from a widely used reinforcement learning codebase. https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/model.py#L208\\n\\n\\n\\\"... Were any other alternatives tested for the activation functions of the network?\\\"\\n\\nNo, we haven't done this. Do you think this is very important for us to try? It's sort of not been a major area of exploration in the literature we've looked at, so we have the vague impression that activation function variants aren't too likely to make a huge different. But we'd be happy to try something specific out if you point us to it, it would be great to see if there could be improvements made in such a simple way. \\n\\n\\n\\\"CSP was tested with three different curiosity and novelty metrics, none of which dominated over all the other ones. However, PPO was only tested with one of the measures and A2C had no curiosity measure added to it. Where there any preliminary results that justified this decision? In terms of computation and time, how difficult would it be to include this in the paper?\\\"\\n\\nFair point. The reason for what we did is this: once we found PPO to be better than A2C in our context and that RND was if not always the best, at least overall the best curiosity metric, we figured that PPO+RND was thus the overall strongest control of this type. And that very likely the other combinations would just be less powerful. However, it is easy to include several other combinations, and we will be happy to do this in the revised paper. 
\\n\\n\\\"The paper already hinted at this, but macro-actions could be framed within the option framework from Sutton, Precup, & Singh (1999). .... Could the authors provide more comments about this line of future work?\\\"\\n\\nYeah, this definitely seems like a natural direction for further improvement. The key goal is really to get rid of having to build in macro actions. It seems like CSP might be a natural way to do that, and plugging it into the options framework is one clear way forward toward that. The overall concept of hierarchical planning is something would generally be relevant for us to try to connect to, and it seems natural to think that a hierarchical, and possibly curricularized, version of CSP could be substantially more powerful that the current \\\"single-level\\\" version.\"}",
"{\"title\": \"First response to Review 1\", \"comment\": \"Hi Reviewer 1!\\n\\nThanks for your constructive feedback. See below for inline replies to each major item. \\n\\n...\\n\\n\\\"Overall, this paper makes a significant contribution to improving robot learning of long-horizon, sparse reward tasks. The paper is clearly written and well-motivated, and the evaluations are thorough.... It is a step in the right direction, and clearly outperforms vanilla deep RL.\\\"\\n\\nThanks!\\n\\n\\\"... The major downside is that CSP inherently requires knowing the dynamics of the environment (i.e., having a simulator), which means it cannot be directly run applied to real-world robotic systems.\\\"\\n\\nAbsolutely. As you probably realized, in this world we've taken a step-by-step strategy: first try to get the planning working *assuming* there is good forward predictor, so that in the next phase of the work we can relax that condition. Our next major step is to apply the method in the context of a non-deterministic learned forward predictor (and in fact we hope to show that having a good planning algorithm in place makes the learning of the forward predictor substantially more efficient). As you note, doing this is really important for actually being able to apply to our work in the real world. \\n\\n\\n\\\"... First, there are not enough details included for reproducibility (see list below).\\\"\\n\\nYes, both you and Reviewer 3 had this concern, and its totally reasonable. We'll reply to the details of your concerns below, but here are the three main high-level things we're going to do in response:\\n\\n\\t1. Revise the main text of the paper clarifying the several main important issues for which you and Rev 3 have asked. \\n\\t2. Create an additional supplementary document with copious detailed information about each aspect of the project. \\n\\t3. Make a public github account/repo to which we will commit all the project code, so that you (and others) can have access to it during (and after) this review process. We should have done this before but somehow got worried about breaking the double-blindness of the review. However, upon thinking about it, we realize it should be easy enough to create an anonymous Github user account to post the code to for review purposes. \\n\\t\\nOur plan is to have this ready for your review by 11/11 or 11/12. That will give you a few days to ask for further clarifications. \\n\\n==> Question: does this plan work for you? Do you think it's enough time to address the issues?\\n\\n\\n\\\"... With regard to evaluation, I think another ablation should be run, where the action selection network is trained for only feasibility. This would be similar to CSP-No Curiosity, but with a reward of 0 for infeasible actions, and a small fixed positive reward for feasible ones. This would more clearly answer the question of how much it matters to include the curiosity module.\\\"\\n\\nJust a point of clarification, removing the curiosity feedback into the action selection networks is not the same as CSP-No Curiosity. In fact, most of the benefit of curiosity comes from selecting which nodes to add to the search tree and the frequency with which to sample those nodes. This operation is independent of the action-selection networks. A more informative ablation might be to run CSP without feasibility feedback. Comparing this result with CSP-No Curiosity would isolate the contribution of the curiosity feedback signal independent of feasibility. 
Is this an ablation you would be interested in seeing?\\n\\n\\n\\\"... Finally, I'm not convinced that this approach works well for transfer, and the evaluations seem inconclusive as well. I'm surprised that even for inter-task transfer, agents trained with action selection transfer and full transfer don't just learn to solve the task immediately. Am I missing something here about how the task is instantiated?\\\"\\n\\nTransfer is tested on different initial instantiations of the general problem statement. So for example in 3-stack, the blocks are placed in different starting positions. It's important to note that we are not training a policy to solve these tasks, since a valid and general policy may not be learnable from a single example. The reason for showing between-task transfer was only to demonstrate that there is some increase in efficiency from having solved the tasks before. I'm also curious to know how much the log scaling on the y-axis contributed to your interpretation of the results as being inconclusive. (In some cases the efficiency gain was substantial even though the log scale made it look small.)\\n\\n\\\"Reproducibility questions: [LIST OF SPECIFICS]\\\"\\n\\nOk, we'll make sure to add information about all these things in either the revised main text or the supplement. Once we post these, we'll write another comment pointing you to where the answers have been added. \\n\\n\\n\\\"Minor comments / typos:... [LIST OF SPECIFICS]\\\"\\n\\nThanks for catching these. All typos have now been fixed and will be uploaded along with the other revisions.\"}",
"{\"title\": \"Additional Task and Motion Planning References\", \"comment\": \"There are many planning algorithms for multi-step manipulation problems that do not operate on a discrete abstraction of the domain (what you mean when you say action models with preconditions & effects). Instead, these algorithms search directly in the hybrid state space composed of discrete and continuous variables.\\n\\nThe approach of Toussaint, M. et al. is particularly relevant as they also consider tool-use problems. The code from their paper is publicly available: https://github.com/MarcToussaint/18-RSS-PhysicalManipulation.\\n\\nFrom a satisficing planning point-of-view, the ultimate metric we would like to minimize is runtime; however, your paper only reports the number of samples required. Many existing TAMP algorithms can solve similar problem instances in only a few minutes. How long does planning typically take for the problems you consider?\\n\\n--------------------------------------------------\\n\\nHauser, K. and Ng-Thow-Hing, V. (2011) \\u201cRandomized multi-modal motion planning for a humanoid robot manipulation task,\\u201d International Journal of Robotics Research (IJRR). Springer, 30(6), pp. 676\\u2013698. Available at: http://journals.sagepub.com/doi/abs/10.1177/0278364910386985.\\n\\nBarry, J., Kaelbling, L. P. and Lozano-P\\u00e9rez, T. (2013) \\u201cA hierarchical approach to manipulation with diverse actions,\\u201d in Robotics and Automation (ICRA), 2013 IEEE International Conference on, pp. 1799\\u20131806. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.365.1060.\\n\\nVega-Brown, W. and Roy, N. (2016) \\u201cAsymptotically optimal planning under piecewise-analytic constraints,\\u201d in Workshop on the Algorithmic Foundations of Robotics (WAFR). Available at: http://www.wafr.org/papers/WAFR_2016_paper_11.pdf.\\n\\nGarrett, C. R., Lozano-P\\u00e9rez, T. and Kaelbling, L. P. L. P. (2017) \\u201cSampling-based methods for factored task and motion planning,\\u201d in The International Journal of Robotics Research. doi: 10.1177/0278364918802962.\\n\\nToussaint, M. et al. (2018) \\u201cDifferentiable physics and stable modes for tool-use and manipulation planning,\\u201d Proc. of Robotics Science & Systems.\"}",
"{\"title\": \"What are your suggestions for planning algorithms for us to compare to?\", \"comment\": \"Hi Reviewer 2!\\n\\n\\nWe will get into details of replying to all your comments very soon. But, one thing we\\u2019re hoping to get your thoughts on as soon as possible is about the comment in the last paragraph of your review.\", \"you_say\": \"\\u201cPlanning algorithms that are tailored towards huge spaces should be used as base lines instead of deep RL methods.\\u201d\\n\\nCould you make a suggestion of what specific comparisons you think we should be using as baselines?\\n\\nHere\\u2019s the main cause of uncertainty we have in addressing your comment: Are you asking us to compare to standard TAMP solutions? There\\u2019s a fundamental reason we didn\\u2019t do this. We didn\\u2019t end up making any comparisons with TAMP algorithms since all the ones we know about require the user to pre-define macro-action effects, preconditions, and predicates. This makes them quite challenging to apply in the context of the highly flexible setup we are working in. That's because to figure out the effects and preconditions in a way that can be characterized with formal logic (a basic requirement of TAMP), you kind of have to do it laboriously and specifically for each new robotic setup and macro-action set, and tailor it to the specific objects / structures in the environment. \\n\\nThis lack of flexibility in TAMP is the whole raison d\\u2019etre of our paper. (That\\u2019s why we call it \\u201cflexible\\u201d.) If it were easily possible to apply TAMP directly as a baseline comparison, that would have obviated the point of our work. It was essentially that we couldn\\u2019t really flexibly do this that inspired our work. We\\u2019re definitely open to suggestions about how to make this direct comparison, but just not sure how to make sense of it at the moment, given the limitations of existing TAMP solutions that we know about.\\n\\nMaybe part of the problem here is that we weren't successful in communicating to you our main contribution. The main technical problem motivating our work is that standard TAMP (and other planning) solutions are efficient, but hard to apply easily in a flexible context, because they typically require the pre-specification of conditions and actions. You can view our main contribution as using a specific targeted approach to intrinsically-motived learning to find an efficient but still flexible route around this problem. Our work shows how one can *still* benefit from the advantages of ideas from planning to achieve efficiency (in a way that standard Deep RL does not).\", \"an_alternative_possibility\": \"are you referring to other planning solutions like RRT* or KPIECE that don't require the pre-definition of action effects and preconditions? There are many motion planning algorithms like these, but these algorithms have been shown to utterly fail in real-world environments which have many complex differential constraints. (This is why most TAMP papers don't even compare to pure motion planning anymore. 
So we didn't think rehashing this well-known point was super-helpful.)\\n\\nBottom line, though, is that we'd be very happy to try something out as a new baseline if you make a specific suggestion, especially if the system you suggest has available code we could run in our setting -- but we figure we'd better ask right now to make sure we have the time to do this.\\n\\nOr if what you're saying is that we need to explain the thoughts above better in the paper, we'd be happy to do that. What do you think?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper tackles the problem of enabling robots to learn long-horizon, sparse-reward tasks. The proposed approach, the Curious Sample Planner (CSP), builds on insights in task and motion planning (TAMP), which is a standard approach for tackling these kinds of tasks. TAMP constructs a plan in the space of macro-actions (e.g., move object 1 to location (x,y)), and uses a motion planner to execute each macro-action. However, TAMP typically requires being able to describe macro-action effects and preconditions with logical predicates, which can be impossible in real-world environments, due to complex dynamics and interactions. CSP overcomes this limitation by planning in the space of macro-actions in a way that is biased toward novelty.\\n\\nThe core approach of CSP is to train a (macro-)action selection network to generate macro-actions that are both feasible and novel. The curiosity module is used in two ways: (1) to give reward to the action selection network for producing novel macro-actions, and (2) to expand states that are considered novel. Three state-of-the-art ways of computing novelty are compared -- state estimation (SE), forward dynamics (FD), and random network distillation (RND).\\n\\nCSP is evaluated on a suite of simulated robotics tasks that require the robot to build simple machines from the objects in its environment, in order to achieve the specified objective. The experiments compare agents trained with CSP versus with deep RL (specifically, A2C, PPO, and PPO + RND). There is an ablation study, that compares against planning with uniform selection of macro-actions and uniform selection of states to expand.\\n\\nOverall, this paper makes a significant contribution to improving robot learning of long-horizon, sparse reward tasks. The paper is clearly written and well-motivated, and the evaluations are thorough. The major downside is that CSP inherently requires knowing the dynamics of the environment (i.e., having a simulator), which means it cannot be directly run applied to real-world robotic systems. But it is a step in the right direction, and clearly outperforms vanilla deep RL.\\n\\nI'm leaning toward accept, but I have a few concerns / questions about the paper. First, there are not enough details included for reproducibility (see list below). With regard to evaluation, I think another ablation should be run, where the action selection network is trained for only feasibility. This would be similar to CSP-No Curiosity, but with a reward of 0 for infeasible actions, and a small fixed positive reward for feasible ones. This would more clearly answer the question of how much it matters to include the curiosity module. Finally, I'm not convinced that this approach works well for transfer, and the evaluations seem inconclusive as well. I'm surprised that even for inter-task transfer, agents trained with action selection transfer and full transfer don't just learn to solve the task immediately. Am I missing something here about how the task is instantiated?\", \"reproducibility_questions\": [\"In Algorithm 1, what are the inputs to the novelty metric that is used to compute L_\\\\phi? 
Is it the batch of next states, S'?\", \"What is the form of the output of the action-selection network? And what exactly is the space of macro-actions? For instance, the number of possible RemoveConstraint macro-actions depends on how many objects are connected in the environment. But the dimension of the action-selection network's output must be fixed.\", \"What does the state vector input for FD and RND contain? Along these lines, why not also use image inputs for FD and RND, as is done for SE?\", \"What are the learning hyperparameters used to train the networks? (e.g., learning rate)\", \"How many perspectives (i.e., n_p) are used for the SE curiosity module?\", \"Minor comments / typos:\", \"Avoid using the same variable with different meanings, e.g. using \\\\phi to indicate both the curiosity module and the parameters of the value network.\", \"Page 3: \\\"flexible\\\", \\\"a flexible\\\"\", \"Page 5: \\\"learnabe\\\" --> \\\"learnable\\\"\", \"Page 8: \\\"in which\\\" --> \\\"in which the\\\"\", \"Page 9: \\\"illustrate\\\" --> \\\"illustrated\\\"\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The idea of the paper is to augment a planner with a curiosity module to reduce the number of traversed paths, resulting in a speedup. The paper presents experiments where it is shown that deep RL methods are outperformed in that question.\\n\\nI recommend to reject the paper.\\n\\nThe reason for this are threefold.\\n- The paper is crowded with text and ends up to be hard to follow. The actual contribution is hard to distill.\\n- The method compares to deep RL baselines. But these are *not* planning algorithms, instead these are RL methods. The paper does not compare to planners, which are tailored to solve the problem the paper adresses.\\n- The reader is left alone to place the work within the literature. The abstract and introduction do not have a single cite; terms like TAMP and multi-step planning are mentioned and certain properties of them are stated without resorting to where a reader could look those up. It is not the readers task to use a search engine to reverse engineer the authors writing!\\n\\nI think that the authors need to reformulate what the contribution of their work is. That needs to be presented in a more abstract way, without focusing on the experimental setting prematurely. The experimental setup is ok if the method is restricted to robotic tasks, but too thin for the general setting of efficient planning with sparse costs. Planning algorithms that are tailored towards huge spaces shou;d be used as base lines instead of deep RL methods.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"===== Summary =====\\nThe paper introduces Curious Sample Planner (CSP) a long-horizon motion planning method that combines task and motion planning with deep reinforcement learning in order to solve simulated robotic tasks with sparse rewards. The CSP algorithm considers two different hierarchies of actions: primitive actions, which control the rotation of several joints in a robotic arm, and macro-actions corresponding to complex behaviours such as moving from one position to another or linking two objects together. Macro-actions are selected using the actor-critic architecture PPO and then turned into primitive actions using geometric motion planning and inverse kinematics. Specifically, RRT-Connect is used for motion planning with the recursive Newton-Euler algorithm for inverse kinematics on a perfect model of the environment to determine the specific sequence of primitive actions necessary to execute the macro-action. As CSP is interacting with the environment, it also builds a tree of states in the environment connected by the macro-actions leading to each of them. Each vertex of the tree is assigned a curiosity score, which is used as an exploration bonus for PPO and to determine the probability with which each vertex is sampled from the tree for future exploration. The whole process is repeated until a feasible path from the initial state to the goal state is found. The paper provides empirical evaluations in four different tasks where it compares the performance of CSP with three different curiosity measures to the performance of PPO and A2C. The results show that CSP accomplishes each task while using significantly less samples. Moreover, a second set of experiments is presented that show the potential for transfer learning across tasks using CSP.\", \"contributions\": \"1. The paper introduces CSP, a successful combination of task and motion planning and deep reinforcement learning that can discover temporally extended plans. \\n2. The paper demonstrates a statistically significant improvement in performance over PPO and A2C in the four robotic tasks that the paper studies. \\n3. The paper shows evidence that CSP might facilitate transfer learning across similar tasks. \\n\\n===== Decision ===== \\nThe paper represents a significant contribution to the reinforcement learning and task and motion planning literature. The main algorithm is well motivated from previous literature and demonstrate a significant improvement over previously proposed deep reinforcement learning methods. Moreover, the ideas are presented clearly and logically throughout the paper and the empirical evaluations clearly support the claims about the performance of CSP. However, I have concerns about the reproducibility of the results because of the little amount of details provided about the hyper-parameter selection and settings and about the network architectures and loss functions. Thus, I consider that the paper should be rejected, but I am willing to increase my score if my comments are properly addressed.\\n\\n===== Questions and Comments =====\\n\\n1. Although the ideas in the paper are presented clearly, the algorithms and methods are presented mostly at a very high level. 
There are no details about the losses used in lines 11 and 12 of Algorithm 1 and there are no specifications about the network architectures and the hyperparameter settings and selection for each algorithm. This raises two concerns. First, this hinders reproducibility and future work by other authors that might be interested in building upon the ideas presented in the paper; this would also decrease the impact of the paper. Two, it is difficult to determine if the comparisons against A2C and PPO were fair without any information about the hyper-parameter selection. Thus, I consider these details should be included and I would consider increasing my score to accept if this was properly addressed. Specifically, I think the paper should include this:\\n- The hyper-parameter settings for each different algorithm and an explanation about how they were selected. \\n- A detailed description of the network architectures used in the experiments.\\n- A definition for the loss functions used for the policy network, the value network, and the curiosity module. \\n\\n2. As mentioned in the Decision section above, the paper clearly demonstrates an improvement over previously proposed deep reinforcement learning algorithm. However, there are no comparisons to any previously proposed Task and Motion Planning Methods. What motivated this decision?\\n\\n3. An alternative to using separate networks for the policy and the value function, it could be possible to use a two-headed network with one head for the policy and another one for the value. What was the reason for using two separate networks over this alternative? \\n\\n4. Were any other alternatives tested for the activation functions of the network?\\n\\n5. CSP was tested with three different curiosity and novelty metrics, none of which dominated over all the other ones. However, PPO was only tested with one of the measures and A2C had no curiosity measure added to it. Where there any preliminary results that justified this decision? In terms of computation and time, how difficult would it be to include this in the paper?\\n\\n6. The paper already hinted at this, but macro-actions could be framed within the option framework from Sutton, Precup, & Singh (1999). This would open up the opportunity to apply some of the already proposed methods for option discovery such as the option-critic architecture from Bacon, Harb, & Precup (2016) or the Laplacian framework for option discovery from Machado, Bellemare, & Bowling (2017), which is cited on the paper. Could the authors provide more comments about this line of future work? \\n\\n===== References ===== \\nBacon, P., Harb, J., & Precup, D. (2016). The Option-Critic Architecture. Retrieved 17 October 2019, from https://arxiv.org/abs/1609.05140\\n\\nMarlos C. Machado, Marc G. Bellemare, and Michael H. Bowling. A laplacian framework for option discovery in reinforcement learning. CoRR, abs/1703.00956, 2017. URL http://arxiv.org/abs/1703.00956.\\n\\nSutton, R., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2), 181-211. doi: 10.1016/s0004-3702(99)00052-1\"}"
]
} |
BJxiqxSYPB | Learning to Prove Theorems by Learning to Generate Theorems | [
"Mingzhe Wang",
"Jia Deng"
] | We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach significantly improves the theorem prover and advances the state of the art of automated theorem proving in Metamath. | [
"theorems",
"theorem prover",
"task",
"automated theorem proving",
"key ai task",
"deep learning",
"promise",
"theorem provers",
"limited",
"proofs available"
] | Reject | https://openreview.net/pdf?id=BJxiqxSYPB | https://openreview.net/forum?id=BJxiqxSYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"CuwaHzEcx1",
"n2UryrKFWZ",
"HJgDu05noH",
"S1xrBR9hjS",
"HkePM05hiS",
"SygFxR53jB",
"Syl0Rpq2sH",
"ryeWi65niH",
"BJe7FtbscH",
"rylsGeGYcH",
"SyxzOra6FS"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1579200540420,
1576798750095,
1573854831343,
1573854780990,
1573854735126,
1573854704828,
1573854677744,
1573854616621,
1572702586763,
1572573203093,
1571833193687
],
"note_signatures": [
[
"~David_A_Wheeler1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2483/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2483/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2483/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2483/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Interesting work! Should be published.\", \"comment\": \"Note: I am not on the conference program committee, I am instead an interested bystander. I do have some connections with this paper that I believe I must make clear. I am a co-author of the cited book on Metamath, and I also have a long-standing background in AI. In any case, I hope my comments are helpful.\\n\\nThis paper should be published. The idea of using generators to improve machine learning is absolutely not new.\\nHowever, performing an experiment to actually applying this approach in the area of completely general-purpose unconstrained theorem provers *is* new.\\n\\nI disagree with the ICLR 2020 Conference Program Chairs' decision. The paper *is* tailored to one specific formal system (Metamath), but that is completely *necessary* today. Different formal systems are quite different, and it is unreasonable to expect any researchers to re-implement multiple massive systems to perform a single experiment. There is ongoing work to try to bridge these systems to allow interoperation, but until those efforts are ready (if they ever are), performing experiments using specific formal systems is the only way to make progress in this area given current research funding levels.\\n\\nTheir results are interesting. Simply re-executing Holophrasm with better hardware shows a remarkable improvement\\n(from 388 to 539). Their extensive work here produced a surprisingly modest additional gain, from 539 to 574 in the best case. That said, it is still a gain in a hard area, and it also demonstrates the challenges of the approach they've taken. Science needs not just papers that document spectacular improvements; it also needs to report how \\\"obvious\\\" approaches provide more modest improvements or even make things worse (especially if it appears plausible that the results would have been spectacular). This paper provides an important data point for those trying to improve ML-based systems to prove mathematical theorems.\\n\\nHere are my more specific comments.\", \"references\": \"the reference to \\\"Metamath: A Computer Language for Mathematical Proofs\\\" of 2019 lists Norman Megill's name, but it omits \\\"David A. Wheeler\\\" (the co-author). I hope you'll correct that :-).\", \"abstract\": \"Change \\\"we propose to learn\\\" to \\\"we propose to train\\\". Also, I would remove \\\"significantly\\\"; it's a modest improvement, but it's a modest improvement in a hard area and that is nothing to be ashamed of.\", \"page_3\": \"The definition of \\\"theorem\\\" here is different from the way it is used in the Metamath community (where it only refers to provable assertions). The term is used consistently in the paper, so I wouldn't change it, but it might be worth noting that difference here to reduce confusion.\", \"section_5\": \"The setup section says that once axioms were removed there were 21788 training thoerems, 2712 validation theorems, and 2720 training theorems. Those are exactly the same numbers as Holophrasm. Can I assume that you used exactly the same version of the set.mm database? If so, that should be clearly stated, as that makes it much clearer that you are keeping things constant in your experiment (which is good!).\\n\\nIt is my sincere hope that the code will soon be released with an open source software (OSS) license so others can replicate and build on this work. After all, this work builds on Holophrasm, which was released on GitHub as open source software. 
I searched on GitHub and https://paperswithcode.com/paper/learning-to-prove-theorems-by-learning-to but didn't find it. Please do so!\", \"here_are_several_easily_fixed_nits\": \"\", \"page_2\": \"Remove the first duplicate \\\"only\\\" in \\\"a prover only collects rewards only\\\".\", \"page_4\": \"Change \\\"Then sample\\\" to \\\"Then we sample\\\"\", \"page_7\": \"Change \\\"For other two experiments\\\" to \\\"For the other two experiments\\\"\", \"page_9\": \"Change \\\"It means even MetaGen-RL\\\" to \\\"It means that even if MetaGen-RL\\\"\", \"page_10\": \"Change \\\"It also find\\\" to \\\"It also finds\\\"\\n\\nPerhaps an editor could quickly check for missing articles (a/an/the), singular/plural agreement, and verb conjugation throughout the paper. These are easily fixed, and they are common problems (especially for non-native speakers). They should be fixed for clarity and so that these nits don't detract from the work here.\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to augment training data for theorem provers by learning a deep neural generator that generates data to train a prover, resulting in an improvement over the Holophrasm baseline prover. The results were restricted to one particular mathematical formalism -- MetaMath, a limitation raised one by reviewer.\\n\\nAll reviewers agree that it's an interesting method for addressing an important problem. However there were some concerns about the strength of the experimental results from R4 and R1. R4 in particular wanted to see results on more datasets, an assessment with which I agree. Although the authors argued vigorously against using other datasets, I am not convinced. For instance, they claim that other datasets do not afford the opportunity to generate new theorems, or the human proofs provided cannot be understood by an automatic prover. In their words, \\n\\n\\\"The idea of theorem generation can be applied to other systems beyond Metamath, but realizing it on another system is highly nontrivial. It can even involve new research challenges. In particular, due to large differences in logic foundations, grammar, inference rules, and benchmarking environments, the generation process, which is a key component of our approach, would be almost completely different for a new system. And the entire pipeline essentially needs to be re-designed and re-coded from scratch for a new formal system, which can require an unreasonable amount of engineering.\\\" \\n\\nIt sounds like they've essentially tailored their approach for this one dataset, which limits the generality of their approach, a limitation that was not discussed in the paper. \\n\\nThere is also only one baseline considered, which renders their experimental findings rather weak. For these reasons, I think this work is not quite ready for publication at ICLR 2020, although future versions with stronger baselines and experiments could be quite impactful.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary of our revision\", \"comment\": \"We thank all reviewers for their helpful comments! We revised our paper accordingly as follows.\\n\\n1. We added examples of the generated theorems in table 4 and corresponding discussion in the last two paragraphs of section 5.2.\\n\\n2. We added a paragraph to clarify the training of the generative model in the third paragraph of section 5.1.\\n\\n3. We added explanations on how we limit the number of candidate nodes for relevance networks of the generator in the last fourth paragraph of section 4.2.1.\\n\\n4. We updated the section 4.1 and 4.2.1 to clarify the problem setting and the construction of the theorem graph.\"}",
"{\"title\": \"Response to reviewer#3\", \"comment\": \"Thank you for your comments and your time for reviewing our submission. We address your questions below.\", \"q1\": \"What theory is formalized by set.mm? Set theory?\", \"a\": \"Thanks! We have addressed them in our revision.\", \"q2\": \"Among the proofs of 29337 theorems, which ones are used during the training of the generative model?\", \"q3\": \"minor comments\"}",
"{\"title\": \"Response to reviewer#1\", \"comment\": \"Thank you for your comments and your time for reviewing our submission. We address your questions below.\", \"q1\": \"Maybe it's better if you can shorten section 3 and explain more about the problem setting (such as how to fit this problem in a graph?).\", \"a\": \"To the best of our knowledge, MetaGen is the first generative model for theorems, so we are not aware of alternative models for comparison. Generative models developed for other domains such as images or texts are not directly applicable because theorem generation must comply with strict symbolic rules that generative models of images or natural texts do not need to handle.\", \"q2\": \"Can you show some examples of generated theorems?\", \"assertion\": \"( G \\\\in Group ) /\\\\ ( X \\\\in FiniteSet ) /\\\\ ( P \\\\in PrimeNumber) /\\\\ ( H \\\\in Sylow P-subgroup(G, p) ) -> ( H \\\\in SubGroup(G) )\", \"hypothesis\": \"X \\\\in Base(G) // X is a base extractor of G.\", \"q3\": \"You showed the prover has better performance with more synthetic data, but why is your model (generator) better? Can other generative models generate better proofs?\"}",
"{\"title\": \"Response to reviewer#4\", \"comment\": \"Q6: The paper claims that all theorems from set.mm are used as background theorems in algorithm 1, including the test ones -- this potentially sounds like training on the test set, or even worse, having access to the test theorems as \\\"proven background knowledge\\\" at test time.\", \"a\": \"The Holophrasm baseline is trained on human proofs by imitation learning the same as prior work [8]. We have added this information in our revision.\\n\\n[1] http://www.cs.ru.nl/~freek/100/ \\n[2] Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An environment for machine learning of higher order logic theorem proving. In International Conference on Machine Learning, 2019a. \\n[3] Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In International Conference on Machine Learning, 2019. \\n[4] Geoffrey Irving, Christian Szegedy, Alexander A Alemi, Niklas Ee \\u0301n, Franc \\u0327ois Chollet, and Josef Ur- ban. Deepmath-deep sequence models for premise selection. In Advances in Neural Information Processing Systems, 2016. \\n[5] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. arXiv preprint arXiv:1310.2805\\n[6] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search. arXiv preprint arXiv:1701.06972, 2017. \\n[7] Paulsson, Lawrence C., and Jasmin C. Blanchette. Three years of experience with Sledgehammer, a practical link between automatic and interactive theorem provers. \\n[8] Whalen, Daniel. \\\"Holophrasm: a neural automated theorem prover for higher-order logic.\\\" arXiv preprint arXiv:1608.02644(2016).\", \"q7\": \"Please include some more details about the training of the Holophrasm baseline. Does it simply do RL on the human theorems, or does it also do IL on human proofs?\"}",
"{\"title\": \"Response to reviewer#4\", \"comment\": \"Q3: There is also no comparison against non-neural approaches, such as Z3, Vampire, or similar theorem provers.\", \"a\": \"Our largest graph G has about 460K nodes from all human proofs and another 1M nodes from synthetic proofs. For relevance policy of the generator, the number of candidate nodes is limited to 2000. It means we sample 2000 nodes randomly if there are too many nodes fitting the current hypothesis. Therefore we can generate all 1M synthetic theorems in one graph. In the revision, we have added more details about how we limit the number of candidates for the relevance policy of the generator in the last fourth paragraph of section 4.2.1.\", \"q4\": \"Due to the 10-1-1 train-validation-test split, the neural agents are likely shown relatively similar problems during training as at test time, including potentially stronger versions of the same theorems.\", \"q5\": \"How big does the theorem graph G get? Since the relevance policy is over all nodes of the graph, this could lead to a very large neural network that would be difficult to fit into memory. Certainly not all 1M synthetic theorems could be generated in one graph.\"}",
"{\"title\": \"Response to Reviewer#4\", \"comment\": \"Thank you for your comments and your time for reviewing our submission. We address your individual points below in a QA format.\", \"q1\": \"The main result of the paper is that an extra 35/2720 (1.2%) of the test theorems are proven, a 6% improvement over the Holophrasm baseline of 539. It is difficult to judge how relevant of an improvement this is, and there is no analysis of the difficulty of the MetaMath problem set.\", \"a\": \"Set.mm in Metamath is a good benchmark for automated theorem proving. Mathmath only relies on substitution, the most general and fundamental inference rule of deductive reasoning, and therefore can serve as a meta-language to implement different logics, like first-order logic, higher-order logic, and set theory, while other systems are usually built on a particular logical foundation. Such simplicity and generality offer a unique advantage for developing ML provers, because we can generate all potential theorems by handling substitution only.\\n\\nSet.mm is the largest corpus of math theorems in Metamath. It contains 29,337 theorems and almost 1.5M proof steps. It implements the Tarski-Grothendieck set theory and covers various math topics, including but not limited to first-order logic, real and complex analysis, linear algebra, graph theory, elementary geometry and topology. It formalizes 71 of the \\u201ctop 100\\u201d math theorems, only behind HOL Light and Isabelle/HOL among all formal math databases [1] , and its coverage is still actively growing. This makes set.mm a good benchmark to train and evaluate learning-based theorem provers. \\n\\nThe idea of theorem generation can be applied to other systems beyond Metamath, but realizing it on another system is highly nontrivial. It can even involve new research challenges. In particular, due to large differences in logic foundations, grammar, inference rules, and benchmarking environments, the generation process, which is a key component of our approach, would be almost completely different for a new system. And the entire pipeline essentially needs to be re-designed and re-coded from scratch for a new formal system, which can require an unreasonable amount of engineering. Because of this, it is a standard practice in prior work to target a specific formal system and experiment only in this system [2,3,4,5,6,7,8]. \\n\\nIn addition, existing benchmarking environments for other systems have limitations that make it infeasible to implement our method. HOList [2] and CoqGym [3] are built on tactic-based theorem provers. Their environments only provide interfaces to call tactics implemented in backend provers. Most tactics execute backward reasoning. To generate new theorems, we need to be able to execute the corresponding reverse tactics, but this functionality is not provided in the current version of HOList and CoqGym. \\t\\n\\nOur approach cannot be directly applied to Mizar, because it does not provide human proofs in a format that can be understood by an automatic prover like the E prover (see [5]). Prior works have used machine learning to improve the E prover [4,5,6] on Mizar, but they have only trained on proofs automatically found by the E prover, not those written by humans. E expresses theorems as CNFs and proves by refutation at the level of CNF clauses. The CNF representation of theorems and proofs are incomprehensible to humans. 
Thus it is an open research question how to do forward reasoning to generate synthetic theorems in the CNF form that are similar to human theorems.\", \"q2\": \"The same method could be applied to datasets such as HOList, Mizar, and CoqGym which have received more attention recently than Metamath.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper focuses on the problem of developing deep learning systems that can prove theorems in a mathematical formalism -- in this case, MetaMath. This has been a rapidly growing topic in the past few years, as evidenced by the numerous cited works. What sets this work apart from others is its focus on the instrumental task of generating data to train a prover, rather than directly training the prover on human theorems (via reinforcement learning) or human proofs (via imitation learning).\\n\\nThe paper proposer two approaches to generating theorems imitation learning (IL) and reinforcement learning (RL). The IL approach trains a neural policy to imitate the same steps taken in human proofs. The RL approach first trains a language model on human theorems (not proofs), and uses the likelihood under the model as a reward function for an RL agent which must take forward proof steps.\\n\\nBoth approaches result in a policy that can be used to take proof steps, with the goal of producing new theorems which are similar to the human ones. Since the proof steps are known for the generated theorems, a prover agent (which operates in backwards mode, working from the goal back to the hypotheses) can be trained to imitate the steps taken in the synthetic proofs (along with the human ones, if any are present).\\n\\nAt test time, the learned prover imitation policy is then used to guide an MCTS agent, as described in the Holophrasm paper. It is compared against the original Holophrasm algorithm, rerun on modern hardware.\\n\\nThis is to my knowledge a novel approach in the neural theorem proving domain, and in my opinion one that offers a potentially significant advantage over the existing fixed-dataset appraoches.\\n\\nThe main result of the paper is that an extra 35/2720 (1.2%) of the test theorems are proven, a 6% improvement over the Holophrasm baseline of 539. It is difficult to judge how relevant of an improvement this is, and there is no analysis of the difficulty of the MetaMath problem set. In addition, due to the 10-1-1 train-validation-test split, the neural agents are likely shown relatively similar problems during training as at test time, including potentially stronger versions of the same theorems. There is also no comparison against non-neural approaches, such as Z3, Vampire, or similar theorem provers. \\n\\nTo accept this paper, I would like to see stronger evidence that the introduced method produces significant improvements in prover ability. For example, the same method could be applied to datasets such as HOList, Mizar, and CoqGym which have received more attention recently than MetaMath.\", \"some_additional_questions_and_comments\": \"1. How big does the theorem graph G get? Since the relevance policy is over all nodes of the graph, this could lead to a very large neural network that would be difficult to fit into memory. Certainly not all 1M synthetic theorems could be generated in one graph.\\n2. The paper claims that all theorems from set.mm are used as background theorems in algorithm 1, including the test ones -- this potentially sounds like training on the test set, or even worse, having access to the test theorems as \\\"proven background knowledge\\\" at test time.\\n3. 
Please include some more details about the training of the Holophrasm baseline. Does it simply do RL on the human theorems, or does it also do IL on human proofs?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper focuses on the task of automated theorem proving. To address the low availability of human-written data and low sample efficiency in reinforcement learning, the authors propose to augment data by generating synthetic theorem data with a deep neural network-based model. Experimental results show the usefulness of the generated synthetic theorem.\\n\\nThis paper is well-motivated and the proposed method is quite novel for automated theorem proving. The paper is well-supported by theorems, however, the experimental analysis is a little weak. For the above reasons, I tend to accept this paper but wouldn't mind rejecting it.\", \"questions\": \"1. Maybe it's better if you can shorten section 3 and explain more about the problem setting (such as how to fit this problem in a graph?).\\n2. Can you show some examples of generated theorems?\\n3. You showed the prover has better performance with more synthetic data, but why is your model (generator) better? Can other generative models generate better proofs?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes a generative model for proofs in Metamath, a language for formalizing mathematics. The model includes neural networks, which provide guidance about which fact to try to prove next and how to prove the fact from the facts derived so far. The parameters of these networks are learned from existing proofs or theorem statements. The main purpose of this model is to generate synthetic theorems and proofs that can be used to train the neural networks of a data-driven search-based theorem prover. The experiments with the Metamath set.mm knowledge base show the benefits of the synthetically generated proofs for building a data-driven theorem prover.\", \"I think that the paper studies an important problem and contains interesting ideas. The idea of using a language model for theorem statements (so that a generated theorem can be meaningfully compared with a given theorem even when they are not the same) looks sensible. Also, the conjecture that a good proof generator is likely to lead to a good theorem prover sounds plausible.\", \"I find the description of the training of the generative model in the experiments slightly confusing. Adding some clarification may help some readers. More specifically, here are some questions that I couldn't answer for myself. What theory is formalized by set.mm? Set theory? Among the proofs of 29337 theorems, which ones are used during the training of the generative model?\", \"Here are some minor comments.\", \"p1: positive awards ===> positive rewards\", \"p2: A citation is missing in the first sentence of Section 2.\", \"AddNode, Algorithm1, p5: Merge h_q to h' ===> Merge h_q to h\", \"p6: uses a_v as a precondition ===> uses a_u as a precondition\", \"p6: and has been ===> has been\", \"p7: which demonstrate ===> which demonstrates\", \"p7: from these the relevance ===> from the relevance\", \"p7: wiht ===> with\", \"p9: languagee ===> language\"]}"
]
} |
BJe55gBtvH | Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem | [
"Vaggos Chatziafratis",
"Sai Ganesh Nagarajan",
"Ioannis Panageas",
"Xiao Wang"
] | Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory. In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error. Even though Telgarsky’s work reveals the limitations of shallow neural networks, it doesn’t inform us on why these functions are difficult to represent and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths.
In this work, we point to a new connection between DNN expressivity and Sharkovsky’s Theorem from dynamical systems, which enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1). Motivated by our observation that the triangle waves used in Telgarsky’s work contain points of period 3 – a period that is special in that it implies chaotic behaviour based on the celebrated result by Li-Yorke – we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth. Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions. | [
"Depth-Width trade-offs",
"ReLU networks",
"chaos theory",
"Sharkovsky Theorem",
"dynamical systems"
] | Accept (Spotlight) | https://openreview.net/pdf?id=BJe55gBtvH | https://openreview.net/forum?id=BJe55gBtvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"pWo2QQ3uy-",
"rJloHCKhjr",
"H1g-XB1hsB",
"BJegW4y3iH",
"HJgnwZWW5H",
"ryl5620g5S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1576798750065,
1573850690927,
1573807385475,
1573807095952,
1572045155573,
1572035777594
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2482/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2482/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2482/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2482/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The article is concerned with depth width tradeoffs in the representation of functions with neural networks. The article presents connections between expressivity of neural networks and dynamical systems, and obtains lower bounds on the width to represent periodic functions as a function of the depth. These are relevant advances and new perspectives for the theoretical study of neural networks. The reviewers were very positive about this article. The authors' responses also addressed comments from the initial reviews.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to author's comment\", \"comment\": \"I greatly appreciate the author's thorough response! I also appreciate the inclusion of the synthetic dataset! Unfortunately, it isn't possible for me to raise my score any higher (because it is already at the maximum).\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"First we thank Reviewer 2 for their time, positive feedback and valuable comments.\\n\\nFollowing the reviewer\\u2019s suggestion, we have restructured the paper in the newer version in such a way so that the main results and the contribution to ML come first, before the technical details based on dynamical systems theory. Of course, feel free to suggest any other change you think would improve the current write-up. We also provided an example for Definition 2 and Definition 3 to make the presentation cleaner. We also added a discussion section in the Appendix with several additions/comments.\", \"regarding_the_question_of_usefulness\": \"In terms of theoretical advantages, our paper in a nutshell gives a *natural* property of a function (periodic points of certain periods) and then derives depth-width trade-offs based on it. This addresses some questions raised in Telgarsky\\u2019s work, but also in the paper \\u201cExponential expressivity in deep neural networks through transient chaos\\u201d (https://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos) that seeks to provide a natural, general measure of functional complexity helping us understand the benefits of depth. On the contrary, many of previous depth separation results take a worst case approach for the representation question (showing that there exist functions implemented by deep networks that are hard to approximate with a shallow net). However, it is not clear whether such analysis applies to the typical instances arising in practice of neural-networks. We believe that our work together with Telgasky\\u2019s and the paper \\u201cThe power of depth for feedforward neural networks\\u201d (by Eldan/Shamir) show a depth separation argument for very natural functions, like the triangle waves or the indicator function of the unit ball. \\n\\nContinuing with the question of given a specific prediction task, how could one assess the period, we agree that this would be extremely useful in practice but this is indeed a very difficult question that seems to be outside the reach of current techniques in the literature. Previous works and our work so far are able to present depth separation for representing certain functions. \\n\\nWe would like to point out that, intuitively, our characterization result consists of a certificate informing us qualitatively and quantitatively about which functions have complicated compositions and which not. Similar to computational problems in class NP, if one is given the certificate (the points x_1,...,x_p), then one can easily verify if the given function has a p-periodic cycle with points x_1,...,x_p (given oracle access to the function). Nevertheless, we believe that finding the certificate for arbitrary continuous functions is not a straightforward problem except maybe for restricted classes of functions. Having said that, we want to emphasize that in many prediction problems that are inspired by physics, one may a priori expect to have complicated dynamics behaviour and hence requiring deeper networks for better performance. 
Such examples include efforts to solve the notorious 3-body problem or turbulent flows showing empirical evidence that complex physical processes require deep networks (see for instance, \\u201cReynolds averaged turbulence modelling using deep neural networks with embedded invariance\\u201d (2016) and \\u201cNewton vs the machine: solving the chaotic three-body problem using deep neural networks\\u201d (2019) (https://arxiv.org/pdf/1910.07291.pdf) where they use a 10 layered neural network.)\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"First we thank Reviewer 1 for their time, positive feedback and valuable comments. Both questions the reviewer asked are very interesting and important and we answer them below.\", \"addressing_the_first_question_regarding_the_bias_term\": \"If the reviewer asks about adding a bias term in the ReLU activation unit, e.g., use \\\\max(v,\\\\epsilon) instead of \\\\max(v,0) for the activation gates, where \\\\epsilon is a small number (positive or negative), then our results do not change; in particular our tradeoff in Theorem 4.1 still holds (since the Lemma 2.2 from Telgarsky is for general sawtooth functions). If the reviewer asks about what happens if one adds the bias term to the function f itself, then things get more interesting indeed: Suppose f has some period p where p is not a power of two; due to bifurcation phenomena (i.e., phenomena arising because we are at critical regimes of parameters like the \\\\mu parameter in our generalized triangle wave function), then the compositions of the function (f+bias term) with itself may give rise to different behaviours qualitatively compared to f. In particular, the function (f+bias term) might not have period p anymore. Intuitively you can think that the small bias term is amplified after many compositions and is not negligible anymore.\\n\\nSuch a brittle example is the triangle function f(x) = \\\\phi * x for 0<=x<=\\u00bd and \\\\phi(1-x) for 1/2<=x<=1 where \\\\phi = (1+\\\\sqrt{5})/2 is the golden ratio. This function is easy to see that has period 3 (we include illustrative figures in the newer version of the paper). However, if we consider the function g(x) = (\\\\phi-\\\\epsilon)x for 0<=x<=\\u00bd and (\\\\phi-\\\\epsilon)(1-x) for 1/2<=x<=1 with \\\\epsilon>0 (arbitrarily small positive) then g does not have period 3. In this sense, period as a property can be brittle to numerical changes if we are at the critical point. We updated the newer version as well with this brittle example.\", \"addressing_the_second_question_regarding_empirical_intuition_and_performance_for_the_classification_error\": \"Here we performed experiments on the synthetic dataset generated by the triangle functions and trained neural networks with different depths (both at regimes where representation is possible and at regimes where it is impossible). We plotted the classification error as a function of the depth as the reviewer suggested and we include the figures in the newer version of the paper. Some details for the experimental setup are included as well. The code together with the final figure are added to the google doc containing our code for the ICLR submission (we added a Python Script for the Neural Network experiment (To be run in ipython 3 enivronment)).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In tackling a curious construction by Telgarsky regarding a certain class of functions that can be represented by deep networks (but not shallow networks (unless those shallow networks have exponentially many units)), the authors derive depth-width tradeoff conditions for when relu networks are able to represent periodic functions using dynamical systems analysis.\\n\\nThis paper was a delight to read. I particularly enjoyed the motivating examples, and the clean exposition of Sharkovsky's theorem. This result seems to cleanly answer the open question originally posed by Telgarsky, and the proofs are cleanly written, and correct to my (admittedly not perfect) knowledge. I strongly suggest acceptance.\\n\\nQuestions/comments:\\n\\n1. Could the author speculate on how the introduction of a bias term might affect their lower bound? Presumably, this breaks the cleanness of the characteristic polynomial for $A$, but perhaps there are limits where it's still tractable? This analysis certainly isn't necessary for publishing--I'm simply curious.\\n\\n2. Could the authors provide some guiding intuition for the sharpness of their lower bound? (possibly on a synthetic dataset?) . I'm particularly imagining a plot that literally shows \\\"classification error\\\" versus \\\"depth\\\" for some fixed task. While this is certainly a strong theoretical result, it would be nice to be able to contextualize how this result actually shines for a \\\"real\\\" model (and would help me believe the result \\\"in my gut\\\" so to speak).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper studies how the expressive power of NN depends on its depth and width. Sharkovsky's theorem is leveraged to characterize the depth-width tradeoff in the ability of ReLU networks to represent functions with periodic points. A lower bound on the depth necessary to represent periodic functions is also provided. All in all, the paper furthers the understanding on the benefit of deep nets for representing certain function classes.\\n\\nI found this to be a serious and well-written paper. The application of Sharkovsky's results is clever and well in place. My main criticism has to do with the structure, which I think overloads with general theory before getting to the main point the paper is making. I suggest stating Theorem 4.1 earlier, even as soon as Section 1.3, and use the discussion therein as an interpretation of the result. All the technical details, such as definition, Sharkovsky's Thm and proofs, can follows after than. The theoretical background is very interesting, but it would be better to start from the contribution to ML and get into the math later on. \\n\\nThe period dependent depth lower bound is nice but not very useful. Given a certain classification task, how could one assess/bound/approximate the period? This is general issue with this type of theory -- while it broadens our understanding it is hard to put it into actual use.\", \"another_small_comment\": \"it would be useful to provide intuition for some of the definitions in the paper. For example Def. 3 lacks such.\"}"
]
} |
HyeKcgHFvS | Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces | [
"Alexander Gepperth",
"Benedikt Pfülb"
] | We present an approach for efficiently training Gaussian Mixture Models (GMMs) with Stochastic Gradient Descent (SGD) on large amounts of high-dimensional data (e.g., images). In such a scenario, SGD is strongly superior in terms of execution time and memory usage, although it is conceptually more complex than the traditional Expectation-Maximization (EM) algorithm.
For enabling SGD training, we propose three novel ideas:
First, we show that minimizing an upper bound to the GMM log likelihood instead of the full one is feasible and numerically much more stable in high-dimensional spaces.
Secondly, we propose a new regularizer that prevents SGD from converging to pathological local minima.
And lastly, we present a simple method for enforcing the constraints inherent to GMM training when using SGD.
We also propose an SGD-compatible simplification to the full GMM model based on local principal directions, which avoids excessive memory use in high-dimensional spaces due to quadratic growth of covariance matrices.
Experiments on several standard image datasets show the validity of our approach, and we provide a publicly available TensorFlow implementation. | [
"GMM",
"SGD"
] | Reject | https://openreview.net/pdf?id=HyeKcgHFvS | https://openreview.net/forum?id=HyeKcgHFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Kb089OOeb",
"r1e28s73iB",
"rkevwcX3iS",
"BkeHHcXnoB",
"r1gV3D7njr",
"rkgY7q_4cS",
"HkeSb3jgqS",
"ByxznYJAFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798750035,
1573825364502,
1573825119429,
1573825085250,
1573824427608,
1572272672963,
1572023292613,
1571842473745
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2480/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2480/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2480/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2480/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": [\"The paper presents an SGD-based learning of a Gaussian mixture model, designed to match a data streaming setting.\", \"The reviews state that the paper contains some quite good points, such as\", \"the simplicity and scalability of the method, and its robustness w.r.t. the initialization of the approach;\", \"the SOM-like approach used to avoid degenerated solutions;\", \"Among the weaknesses are\", \"an insufficient discussion wrt the state of the art, e.g. for online EM;\", \"the description of the approach seems yet not mature (e.g., the constraint enforcement boils down to considering that the $\\\\pi_k$ are obtained using softmax; the discussion about the diagonal covariance matrix vs the use of local principal directions is not crystal clear);\", \"the fact that experiments need be strengthened.\", \"I thus encourage the authors to rewrite and polish the paper, simplifying the description of the approach and better positioning it w.r.t. the state of the art (in particular, mentioning the data streaming motivation from the start). Also, more evidence, and a more thorough analysis thereof, must be provided to back up the approach and understand its limitations.\"], \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to reviewers\", \"comment\": \"Thank you for the comprehensive review, it is the first time we publish on GMMs so this feedback is invaluable. Here our responses, we could not incorporate all the changes yet but they will come!\\n\\n- max-component: the use of the log(max p_k) instead of log(sum_k p_k) is sufficiently motivated to avoid well-known numerical problems that arise from directly computing the p_k rather than in log space. However, it is also a standard trick (e.g., in logsoftmax computations) to compute log(sum_k p_k) from log-probabilities using (for k* in argmax_k p_k): log(sum_k p_k) = log(p_k*) + log(sum_k exp(log p_k - log p_k*)) which is numerically more stable. Maybe I'm missing something, but I do not understand the advantage of the max-component compared to this, so it seems to me that the max-component trick is a fairly limited contribution in itself.\\n**RESPONSE** We are aware of this trick. It is ok for EM where it cures numerical instabilities by normalizing by the max. If we do this for the full gradient of the log-likelihood (which is different from the simplified EM form), the numerical instabilities are not cured.\\n\\n- the main novelty seems to be the regularizer. The authors present experiments to show the effectiveness of it to avoid single-component solutions that may arise from random initialization, which is an interesting point of the paper. The motivation for smoothing on the 2D grid is still somewhat mysterious to me, even though the relationship with SOMs of Appendix B is interesting.\\n**RESPONSE** the motivation comes actually from the SOM domain. Maybe the term \\\"regularizer\\\" is misleading, it is actually more akin to annealing, with the same goal of avoiding spurious local minima.\\n\\n- the paper falls a bit short in showing any improvement compared to other baselines. The experiments describe some ablations, but does not really answer some questions: is regularization important if one uses K-means initialization? are there any practical advantages (e.g. learning speed, hyper-parameter tuning) compared to standard EM? The authors say that the selection of the starting point in EM is important. This is a fair point, but it also seems solved by K-means.\\n**RESPONSE** we wish to use these SGD-based GMMs in streaming settings, where the entirety of data is not known in advance. We will clarify this.\\n\\nWhile the authors describe a \\\"tutorial\\\" for choosing the hyper parameters of their method, it still seems fairly manual. So the practical advantage of the method (which probably exists) would benefit from more comparisons.\\n\\n- one of the difficulties in training GMMs comes from learning covariance matrices. While the authors discuss some way to train low-rank models, the only successful results seem to be with diagonal covariance matrices, which seems much easier. For instance, at least a toy example in which the low-rank version is useful would be interesting.\\n**RESPONSE** we will try to add one. The point of the principal directions approach is that one effectively no longer uses a diagonal covariance matrix, but an approximation to the full one since the local directions of maximal covariance are learned from data.\\n\\n Overall, it seems to me that the work is serious, and describes possibly interesting how-tos for training GMMs. The main contribution is to describe a simple method to learn GMMs from random initialization. The main technical novelty seems the regularizer, which seems to work. 
The method has the advantage of simplicity, but successful results have only been shown with diagonal covariance matrices and it is unclear exactly what is gained over EM+K-means initialization.\\n**RESPONSE** the main point (that wasn't maybe stated as clearly as it could have been) is that we wish to perform SGD in order to use GMMs in an online, incremental setting where future data are not available, or might be subject to changes in statistics. So, initializing with K-means is not an option because in this case we would need to see a large chunk of data from the future. And even if we could, the initialization this would give us could be a harmful one when data statistics suddenly change.\", \"other_comments\": \"- negative log-likelihood is used in the results section. It would be good to clarify it somewhere since the paper only mentions \\\"log-likelihood\\\" but report a loss that should be minimized - Section 4.4 \\\"Differently from the other experiments, we choose a learning rate of = 0.02 since the original value does not lead to convergent learning. The alternative is to double the training time which works as well.\\\" -> in the first sentence, I suppose it is more a matter of \\\"slow learning\\\" than \\\"non-convergent learning\\\"\\n**RESPONSE** we fully agree and will clarify this!\"}",
"{\"title\": \"Response to reviewer, part II\", \"comment\": \"Section 3.3: The first sentence is wrong - EM can also suffer from the problem of local minima.\\n**RESPONSE** agreed! This will be clarified.\\n\\nAlso, the single-component solution doesn't seem like a local minimum - but rather the log-likelihood is unbounded here since you can put another component on one of the data points with infinitely small variance. The degenerate solution where all Gaussians have equal weights, mean and variance does not seem like a local minimum of the log-likelihood. Say the data really comes from 2 distinct Gaussians - then separating the two Gaussians means a bit would increase the log-likelihood. is not a local minimum. I'm not even sure if the gradient is zero at this point - the authors should show this. Maybe the authors mean their modified loss L_MC - this should be stated clearly.\\n**RESPONSE** We updated the corresponsing section of the paper, giving the gradient explicitly and showing that both for degenerate and single-component solutions the gradient is zero. Which does not necessarily indicate a local minimum, but even saddle points are usually hard to get out of. \\n\\nThe change of regularization during the training process seems like a heuristic that worked well for the authors, but it is thus unclear what optimization problem is the optimization solving. The regularization is thus included for optimization reasons, and not in the usual sense of regularization.\\n**RESPONSE** you are right. Maybe \\u201cregularization\\u201d is the wrong term here: what we do is much more akin to annealing, starting with a high \\u201ctemperature\\u201d (radius) and reducing it over time. We will adapt this in the whole paper.\\n\\nThe expression for \\\\tau at the end of Section 3.3 seems wrong to me. I don't see how plugging it into eq. (7) gives a continuous \\\\sigma(t) function.\\n**RESPONSE** We respectfully disagree: if you plug \\\\tau into the expression \\\\sigma_0 \\\\exp(-t/\\\\tau), you get an exponential that is \\\\sigma_0 at t=t_0 and \\\\sigma_\\\\infty at t=t_\\\\infty\\n\\nSection 3.4: What is \\\\mu^i? I didn't see a definition I don't understand the local principal directions covariance structure. The authors write 'a diagonal covariance matrix of S < D entries). But what about the other D-S coordinates? are they all zero? or can have any values? The parameters count lists (S+1)*D+1 for each Gaussian so I'm assuming S*D parameters are used for the covariance matrix, but it is unclear how. Eq. (9) has the parameters vectors d_{ks} for the principal directions, together with the \\\\Sigma_ss scalar values - it would be good to relate them to the mean and variance of the Gaussians.\\n**RESPONSE** the \\\\mu^i are just ad-hoc numerical parameters that define the initialization range of the prototypes, we will clarify this.\\n**RESPONSE** concerning the principal directions: we still assume a diagonal covariance matrix, but instead of computing the covariances along the coordinate axes, we introduce a set of D principal directions per prototype, along which covariances are adapted. If we had D principal directions per prototype, this would mean K*D*D additional parameters. However, if we learn these principal directions using a PCA-like mechanism, we can actually ignore most of them and keep only the first S<<D of them, resulting in K*S*D parameters. The corresponding entries on the diagonal of the covariance matrix would be zero for the ignored directions. 
We will describe this better!\", \"section_4\": \"The paragraph that describes the experimental details at the beginning is repeated twice.\\n**RESPONSE** oops! Thanks for pointing this out!\\n\\n The experimental results are not very convincing. The images in Figure 2,4 were picked by an unknown (manual?) criterion. In the comparison to EM in Figure 3 there are missing details - which log-likelihood is used? L, L_MC? or different ones for different methods? is this test set log-likelihood? what fraction of the data was used? There is also no comparison of running time between the two methods.\\n**RESPONSE** we will add the required information and add more results in the appendix. Yes, we always use the test-set log-likelihood, and the log-likelihood is the real one (i.e., not L_MC), otherwise it would not be a fair comparison to EM.\"}",
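The annealing schedule debated above can be sketched concretely. We assume the form σ(t) = σ₀·exp(−(t−t₀)/τ) with τ chosen so the radius passes through σ₀ at t₀ and σ∞ at t∞ — a plausible reading of the response, not necessarily the paper's exact eq. (7).

```python
import numpy as np

# Sketch of the exponential annealing ("temperature") schedule discussed above,
# assuming sigma(t) = sigma0 * exp(-(t - t0) / tau); tau is chosen so that the
# radius equals sigma0 at t0 and sigma_inf at t_inf. Names are illustrative.
def annealing_schedule(t, t0, t_inf, sigma0, sigma_inf):
    tau = (t_inf - t0) / np.log(sigma0 / sigma_inf)
    return sigma0 * np.exp(-(t - t0) / tau)

# Endpoint check: starts at the high "temperature", ends at the low one.
assert np.isclose(annealing_schedule(0, 0, 1000, 2.0, 0.01), 2.0)
assert np.isclose(annealing_schedule(1000, 0, 1000, 2.0, 0.01), 0.01)
```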
"{\"title\": \"Response to reviewers\", \"comment\": \"Thank you for the comprehensive review, it is the first time we publish on GMMs so this feedback is invaluable. Here our responses, we could not incorporate all the changes yet but they will come!\\n\\nThere are no guarantees for convergence/other performance measures for the new method, in contrast to recent methods based on moment matching (e.g Ge, Huand and Kakade, 2015). Therefore, the new method and paper should provide excellent empirical results to be worthy of publication.\\n**RESPONSE** The guarantees for convergence come, to our mind, simply from the fact that we minimize an energy function (bounded from below) by gradient descent. If the step size is small enough, we are sure to reach at leats a local minimum and stay there, ensuring stability. It is not the same case as for EM, we one needs to explicitly show that E and M steps cannot increase the log-likelihood. The paper you mentioned is interesting, but requires ahead-of-time knowledge (and computation) of at least the fourth moments of the data, which requires batch processing. Our approach focusses on online GMM where samples arrive one by one and future samples are unknown until they arrive. \\n\\nThe authors write in the 'related work' section that GMM with regularization was proposed by [Verbeek et al. 2005], but it is an older idea - for example [Ormoneit&Tresp 1998]\\n\\u2192 will be corrected.\\n\\nIn Section 3.1, the motivation for the maximization in eq. (2) is unclear. Why is it easier to compute the gradient this way rather than keep the original likelihood? Moreover, the max operation is non-smooth and can cause problems with the definition of the gradient at some points.\\n**RESPONSE** the gradient is much easier to compute and numerically more stable because the log of the maximal likelihood is a a very simple function of the model parameters, namely the argument of the exponential\\n**RESPONSE** we beg to differ concerning the max-operation: it is smooth everywhere but potentially non-differentiable at a set of points of measure zero. If it is just a zero-measure set of points where the derivative is undefined, we can handle this case: the same is done for ReLU (smooth everywhere but non-differentiable at x=0)\\n\\n The authors point to a problem of underflow/overflow when evaluating the gradient of the full log-likelihood because the densities p(x | k) can be very small - but it is standard practice in EM to keep all probabilities multiplied by say their maximum p_max and keep log(p_max) separately, to avoid underflow problems.\\n**RESPONSE** We are very much aware of this trick. It is ok for EM where it cures numerical instabilities by normalizing by the max. If we do this for the full gradient of the log-likelihood (which is different from the simplified EM form), the numerical instabilities are not cured at all.\\n\\nSection 3.2: I don't understand the requirement \\\\Sigma_{i,j} >=0. Is it a requirement for each entry of the covariance matrix? (which covariance matrix? there are K such matrices). The requirement should be that each matrix is positive-definite, not the entries.\\n**RESPONSE** true. But since all we treat here are diagonal covariance matrices anyway, their entries must be positive for positive-definiteness. We will clarify this.\"}",
"{\"title\": \"Response to reviewer\", \"comment\": \"Thank you for the effort of reviewing our paper! Here out comments to your remarks:\\n\\n I feel that this work is largely incremental, but more importantly indicating the authors' lack of understanding of the very long history of (online) EM. While the authors do acknowledge some of the online EM work, they go on to develop what is a rather ad-hoc approach to online EM.\\n**RESPONSE** to put this very clearly: we do not address online EM here. We address Stochastic Gradient Descent, which is an, altogether different thing. This is why we mention only a few and recent works on online EM because they are somewhat related concerning their intent, but not at all in their way of achieving this.\\n\\nThe max-component approximation in Sec. 3.1 is claimed to address the issue of numerical stability. The authors do not appear to resort to the log-sum-exp \\\"trick\\\", which tackles such problems. (In fact, their max approx is of this type.)\\n**RESPONSE** We are very much aware of this trick. It is ok for EM where it cures numerical instabilities by normalizing by the max. If we do this for the full gradient of the log-likelihood (which is different from the simplified EM form), the numerical instabilities are not cured at all.\\n\\nSec. 3.2 uses a very standard representation of multinouli in terms of its natural parameters, which the authors again do not refer to.\\n**RESPONSE** could you clarify this? Is there a specific reference you feel we should cite?\\n\\nThe \\\"smoothing\\\" in Sec. 3.3 is hard to justify and difficult to understand, esp. the gridding approach. Why not use hierarchical priors instead?\\n**RESPONSE** the gridding approach is borrowed from SOMs where it is very effective for ensuring convergence (even in the absence of an energy function). We should maybe better call this procedure \\\"annealing\\\" rather than \\\"regularization\\\" since that's what happens: we start training at a high \\\"temperature\\\" (radius), where all prototypes/centroids are forced to be similar, and later relax the temperature so differences can \\\"crystallize out\\\". Hierarchical priors might be useful for initially allowing large values for the variances and then over time to reduce these. Not possible to do for this article, but this will be picked up in subsequent work.\\n\\nIn Sec. 3.4, additional smoothing is accomplished using a subspace approach, which requires QR decomposition. How will this affect computational efficiency, if the subspace needs to be recomputed?\\n**RESPONSE** a fair point. We plan to replace the QR decomposition by an orthonormalization term in the gradient. This works well as long as the different principal directions are orthonormal in the beginning, which is simple to achieve.\\n\\nFinally, I have strong concerns about the experimental evaluation. The authors choose datasets where the sample ambient space is at most 28x28, which is not exactly (very) high-dimensional. I am mostly concerned about the evaluation.\\n**RESPONSE** We chose SVHN as well, which is 32x32x3 so 3000 dimensions. Always difficult to define what high-dimensional means, but when considering what GMMs are normally applied to (2-50 dimensions at most) we believe this qualifies as high-dimensional.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper tackles the problem of online learning for GMMs, in the context of high-dimensional data. Specifically, the authors limit the scope to SGD-like approaches and EM-like optimization. They also offer a TF implementation.\\n\\nI feel that this work is largely incremental, but more importantly indicating the authors' lack of understanding of the very long history of (online) EM. While the authors do acknowledge some of the online EM work, they go on to develop what is a rather ad-hoc approach to online EM.\\n\\nThe max-component approximation in Sec. 3.1 is claimed to address the issue of numerical stability. The authors do not appear to resort to the log-sum-exp \\\"trick\\\", which tackles such problems. (In fact, their max approx is of this type.)\\n\\nSec. 3.2 uses a very standard representation of multinouli in terms of its natural parameters, which the authors again do not refer to.\\n\\nThe \\\"smoothing\\\" in Sec. 3.3 is hard to justify and difficult to understand, esp. the gridding approach. Why not use hierarchical priors instead?\\n\\nIn Sec. 3.4, additional smoothing is accomplished using a subspace approach, which requires QR decomposition. How will this affect computational efficiency, if the subspace needs to be recomputed?\\n\\nFinally, I have strong concerns about the experimental evaluation. The authors choose datasets where the sample ambient space is at most 28x28, which is not exactly (very) high-dimensional. \\n\\nI am mostly concerned about the evaluation.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a new method based on Stochastic Gradient Descent to train Gaussian Mixture Models, and studies this method especially in the context of high dimension.\\nThe method is based on optimizing max-lower bound to the log-likelihood, together with a regularization term, using stochastic gradient descent. \\nThe method is applied to two image datasets, and seem to produce sensible results. \\n\\nHowever, in my opinion, the results and presentation do not seem at the level suitable for publication.\\n\\nThere are no guarantees for convergence/other performance measures for the new method, in contrast to recent methods based on moment matching (e.g Ge, Huand and Kakade, 2015). Therefore, the new method and paper should provide excellent empirical results to be worthy of publication. \\n\\nThe authors write in the 'related work' section that GMM with regularization was proposed by [Verbeek et al. 2005], but it is an older idea - for example [Ormoneit&Tresp 1998] \\n\\nIn Section 3.1, the motivation for the maximization in eq. (2) is unclear. \\nWhy is it easier to compute the gradient this way rather than keep the original likelihood?\\nMoreover, the max operation is non-smooth and can cause problems with the definition of the gradient at some points. \\nThe authors point to a problem of underflow/overflow when evaluating the gradient of the \\nfull log-likelihood because the densities p(x | k) can be very small - but it is standard practice in EM to keep all probabilities multiplied by say their maximum p_max and keep log(p_max) separately, to avoid underflow problems. \\n\\nSection 3.2: I don't understand the requirement \\\\Sigma_{i,j} >=0. Is it a requirement for each entry of the covariance matrix? (which covariance matrix? there are K such matrices). The requirement should be that each matrix is positive-definite, not the entries.\\n\\nSection 3.3: The first sentence is wrong - EM can also suffer from the problem of local minima. \\nAlso, the single-component solution doesn't seem like a local minimum - but rather the log-likelihood is unbounded here\\nsince you can put another component on one of the data points with infinitely small variance. \\nThe degenerate solution where all Gaussians have equal weights, mean and variance does not seem like a local minimum\\nof the log-likelihood. Say the data really comes from 2 distinct Gaussians - then separating the two Gaussians means a bit would increase the log-likelihood. is not a local minimum. I'm not even sure if the gradient is zero at this point - the authors should show this. Maybe the authors mean their modified loss L_MC - this should be stated clearly. \\n\\nThe change of regularization during the training process seems like a heuristic that worked well for the authors, but it is thus unclear what optimization problem is the optimization solving. The regularization is thus included for optimization reasons, and not in the usual sense of regularization. \\n\\n\\nThe expression for \\\\tau at the end of Section 3.3 seems wrong to me. I don't see how plugging it into eq. (7) gives a continuous \\\\sigma(t) function. \\n\\nSection 3.4: What is \\\\mu^i? 
I didn't see a definition. \\n\\nI don't understand the local principal directions covariance structure. The authors write 'a diagonal covariance matrix of S < D entries'. But what about the other D-S coordinates? are they all zero? or can they have any values? \\nThe parameters count lists (S+1)*D+1 for each Gaussian so I'm assuming S*D parameters are used for the covariance matrix, but it is unclear how. Eq. (9) has the parameters vectors d_{ks} for the principal directions, together with the \\\\Sigma_ss scalar values - it would be good to relate them to the mean and variance of the Gaussians.\", \"section_4\": \"The paragraph that describes the experimental details at the beginning is repeated twice.\\n\\nThe experimental results are not very convincing. The images in Figure 2,4 were picked by an unknown (manual?) criterion. \\n\\nIn the comparison to EM in Figure 3 there are missing details - which log-likelihood is used? L, L_MC? or different ones for different methods? is this test set log-likelihood? what fraction of the data was used? \\nThere is also no comparison of running time between the two methods.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper describes in detail a proper implementation of SGD for learning GMMs. GMMs are admittedly one of the basic models in unsupervised learning, so the topic is relevant to ICLR, even though they are not particularly a hot topic.\\n\\nThe paper is overall clear and well-written. The main contributions are an effective learning of GMMs from random initialization that is competitive (in terms of final loss) to EM training with K-means initialization. The authors discuss a max-component loss instead of standard likelihood for numerical stability, using a softmax reparametrization for component weights to ensure they are in the simplex, and an annealed smoothing of the max-likelihood based on arbitrarily embedding components indexes on a 2D regular grid. Experiments are shown on MNIST and SVHN comparing different hyper parameter settings to a baseline EM implementation from scikit-learn.\\n\\n- max-component: the use of the log(max p_k) instead of log(sum_k p_k) is sufficiently motivated to avoid well-known numerical problems that arise from directly computing the p_k rather than in log space. However, it is also a standard trick (e.g., in logsoftmax computations) to compute log(sum_k p_k) from log-probabilities using (for k* in argmax_k p_k):\\n\\nlog(sum_k p_k) = log(p_k*) + log(sum_k exp(log p_k - log p_k*))\\n\\nwhich is numerically more stable. Maybe I'm missing something, but I do not understand the advantage of the max-component compared to this, so it seems to me that the max-component trick is a fairly limited contribution in itself.\\n\\n- the main novelty seems to be the regularizer. The authors present experiments to show the effectiveness of it to avoid single-component solutions that may arise from random initialization, which is an interesting point of the paper. The motivation for smoothing on the 2D grid is still somewhat mysterious to me, even though the relationship with SOMs of Appendix B is interesting.\\n\\n- the paper falls a bit short in showing any improvement compared to other baselines. The experiments describe some ablations, but does not really answer some questions: is regularization important if one uses K-means initialization? are there any practical advantages (e.g. learning speed, hyperparamter tuning) compared to standard EM? The authors say that the selection of the starting point in EM is important. This is a fair point, but it also seems solved by K-means. While the authors describe a \\\"tutorial\\\" for choosing the hyper parameters of their method, it still seems fairly manual. So the practical advantage of the method (which probably exists) would benefit from more comparisons. \\n\\n- one of the difficulties in training GMMs comes from learning covariance matrices. While the authors discuss some way to train low-rank models, the only successful results seem to be with diagonal covariance matrices, which seems much easier. For instance, at least a toy example in which the low-rank version is useful would be interesting.\\n\\nOverall, it seems to me that the work is serious, and describes possibly interesting how-tos for training GMMs. The main contribution is to describe a simple method to learn GMMs from random initialization. 
The main technical novelty seems to be the regularizer, which seems to work. The method has the advantage of simplicity, but successful results have only been shown with diagonal covariance matrices and it is unclear exactly what is gained over EM+K-means initialization.\", \"other_comments\": [\"negative log-likelihood is used in the results section. It would be good to clarify it somewhere since the paper only mentions \\\"log-likelihood\\\" but reports a loss that should be minimized\", \"Section 4.4 \\\"Differently from the other experiments, we choose a learning rate of \\u03b5 = 0.02 since the original value does not lead to convergent learning. The alternative is to double the training time which works as well.\\\" -> in the first sentence, I suppose it is more a matter of \\\"slow learning\\\" than \\\"non-convergent learning\\\"\"]}"
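The log-sum-exp identity this review quotes is easy to demonstrate numerically; a minimal sketch:

```python
import numpy as np

# The standard log-sum-exp trick the review refers to: compute
# log(sum_k p_k) from log-probabilities without under/overflow.
def logsumexp(log_p):
    m = np.max(log_p)                     # log p_{k*} for k* = argmax
    return m + np.log(np.sum(np.exp(log_p - m)))

log_p = np.array([-1000.0, -1001.0, -1002.0])  # naive exp() underflows to 0
print(logsumexp(log_p))                        # finite: approx. -999.59
```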
]
} |
S1gtclSFvr | Neural Phrase-to-Phrase Machine Translation | [
"Jiangtao",
"Feng",
"Lingpeng Kong",
"Po-sen Huang",
"Chong",
"Wang",
"Da",
"Huang Jiayuan",
"Mao",
"Kan",
"Qiao",
"Dengyong",
"Zhou"
] | We present Neural Phrase-to-Phrase Machine Translation (NP2MT), a phrase-based translation model that uses a novel phrase-attention mechanism to discover relevant input (source) segments to generate output (target) phrases. We propose an efficient dynamic programming algorithm to marginalize over all possible segments at training time and use a greedy algorithm or beam search for decoding. We also show how to incorporate a memory module derived from an external phrase dictionary into NP2MT to improve decoding. Experiment results demonstrate that NP2MT outperforms the best neural phrase-based translation model \citep{huang2018towards} both in terms of model performance and speed, and is comparable to a state-of-the-art Transformer-based machine translation system \citep{vaswani2017attention}. | [
"machine translation",
"neural",
"translation model",
"machine translation neural",
"present neural",
"novel",
"mechanism",
"relevant input",
"source"
] | Reject | https://openreview.net/pdf?id=S1gtclSFvr | https://openreview.net/forum?id=S1gtclSFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"hTPnoV1iuk",
"HJx5Mz-xiH",
"BJgZqngyqB",
"SJlq8XkRFB",
"HkxaB1i7uS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798750005,
1573028370474,
1571912841332,
1571840849578,
1570119492662
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2479/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2479/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2479/AnonReviewer3"
],
[
"~Chong_Ruan1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper describes how they extend a previous phrase-based neural machine translation model to incorporate external dictionaries. The reviewers mention the small scale of the experiments, and the lack of clarity in the writing, and missing discussion on computational complexity. Even though the method seems to have the potential to impact the field, the paper is currently not strong enough for publication. The authors have not engaged in the discussion at all.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed an end-to-end phrase-to-phrase NMT model (NP2MT). I think the contribution of this paper is incremental and the idea is of less novelty. In general, the model is largely based on the NPMT model, where modification is the introduce of phrases in the source sentences. Then the author proposed the memory module strategy. In the experiments, the performance improves significant when using out of domain dictionary, but less significant for in-domain dictionary. I also have a concerns about the experiments. The dataset used in this paper seems not convincing to me. By my own experience, the performance on small dataset for either LSTM or Transformer is not stable. The authors just tested the model performance on WMT test set. I think at least the WMT training data should be used for training as well. Another question is the details about the training time and decoding time, since the dynamic programming is used.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This submission belongs to the field of machine translation. In particular, it looks at the problem of phrase-to-phrase translation (previously used state of the art approach) using neural network approaches. The main idea behind this paper is to use a segmental (analogue of phrases) form of neural networks on the source side and attention mechanism to align those segments with segments generated on the target size. This submission additionally describes how an external dictionary can be incorporated using a heuristic approach. I believe this submission could be of wide interest to the machine learning community. I find experimental validation to be satisfactory whilst presentation unsatisfactory for the following reasons:\\n\\n1) notation\\n\\nGiven that you are dealing with two sets of sequences (source, target side), segments on both sides, attention linking both sides I find it strange that the notation used is not carefully introduced and clearly explained. What is ${\\\\bf g}_{{\\\\bf z}<z_k}$ using precise mathematical language? How it is different from ${\\\\bf g}_{{\\\\bf z}<k}$ and what that is? The same for ${\\\\bf y}_{<t}^{z_k}$, a*, d* and all other variables. In order to help the reader understand your approach it is fundamental to be precise and not ambiguous about each (!) symbol you are using. \\n\\n2) Algorithm 1, 2 and Figure 1\\n\\nThe algorithmic description was meant to help the reader to understand the process. Unfortunately I have to disagree that is has accomplished this purpose. Please make sure you have unambiguously explained every single term, explicitly say what you are running argmax over, etc. Please also make sure you are discussing/describing the algorithms/figures in your submission. Given the non-trivial nature, lack of proper introduction into the notation used, you cannot simply point the reader to it and not discuss it.\", \"minor_comments\": \"Please refrain from using \\\"vanilla model\\\" unless you can cite a publication defining exactly what that is.\\nPlease explain how did you derive dictionary.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a phrase-based encoder-decoder model for machine translation. The encoder considers all possible phrases (i.e. word sequences) up to a certain length and compute phrase representations using bidirectional LSTMs from contextual word embeddings computed with another bidirectional LSTM layer. The decoder also considers possible segmentations and computes contextual representations for the previously generated segments. Each word in the current segment is generated by a Transformer model by attending to all phrases in the source sentence. The authors present a dynamic programming method for considering all possible segmentations in decoding. They also present a method for incorporating a phrase-to-phrase dictionary built by Moses into the decoding process.\\n\\nI like the idea of phrase-to-phrase translation and the relatively simple architecture proposed in the paper. At the moment, however, I am not quite sure how practical their approach is. One reason is the experimental setting. Both of the datasets used in the experiments are quite small and it is not clear how the proposed model performs when several millions of sentence pairs are available for training. \\n\\nAnother reason is that the computational cost of the proposed model is not really clear. The authors state that it is much more efficient than NPMT but it is not clear how it compares to the standard Transformer approach. It seems to me that the computational cost of their model is highly dependent on the value of P (maximum length of phrases). \\n\\nAt first, I thought the decoder was implemented with LSTMs, but I realized that it was actually implemented with a Transformer by reading the appendix. I think this should be explained in the main body of the paper. I am also wondering how the authors\\u2019 model compares to a standard seq-to-seq model whose decoder is implemented with a Transformer.\\n\\nThe equation in section 2.2 seems to suggest that the model prefers segmentations with small numbers of segments. I am wondering if there is any negative effect on the translation quality.\", \"here_are_some_minor_comments\": \"p.2 valid of -> valid\\np.4 lookup -> look up?\\np.4 forr -> for\\np.4 indict -> indicate?\\np.5 Table 1 -> Table 1\"}",
"{\"comment\": \"Thanks for your nice work! Now transformers prevail, it is really surprising to see traditional seq2seq models have such impressive performance.\\nIs it possible to use a Transformer encoder instead? I guess the performance will further improve.\", \"title\": \"Is it possible to extend the encoder to Transformers?\"}"
]
} |
H1MOqeHYvB | At Your Fingertips: Automatic Piano Fingering Detection | [
"Amit Moryossef",
"Yanai Elazar",
"Yoav Goldberg"
] | Automatic Piano Fingering is a hard task which computers can learn using data. As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques. Running this process on 90 videos results in the largest dataset for piano fingering with more than 150K notes. We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results.
In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning it on out-of-domain augmentation proposed by a Generative Adversarial Network (GAN).
For demonstration, we anonymously release a visualization of the output of our process for a single video on https://youtu.be/Gfs1UWQhr5Q | [
"piano",
"fingering",
"dataset"
] | Reject | https://openreview.net/pdf?id=H1MOqeHYvB | https://openreview.net/forum?id=H1MOqeHYvB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ZkfLnoKAp",
"BJxPvjsJjr",
"S1lelcj1iS",
"BJevc4E2tS",
"ryeHZxXPYB",
"rkxDbgiWYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749976,
1573006175510,
1573005800513,
1571730574570,
1571397629506,
1571037182839
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2478/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2478/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2478/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2478/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2478/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper shows an automatic piano fingering algorithm. The idea is good. But the reviewers find that the novelty is limited and it is an incremental work. All the reivewers agree to reject.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Main reply to Review #1\", \"comment\": \"Thank you for your review.\", \"we_would_like_to_address_your_review\": \"1. We are not aware of any works using CycleGAN for pose estimation, or any works using sim2real in that manner (of fine-tuning the algorithms). It would be great to get some citations.\\nRegarding the novelty of our method, here is a list of what we think are the core novel ideas of our method:\\na. We show how it is possible to segment each key of the piano.\\nb. We show the failures of current pose detection models, and devise a new, general way to address such failures, that we show is robust - and for us, reduces 50% of the accumulative error.\\nc. We design a new way to align between MIDI and piano video recordings, which is accurate up to the video sampling rate.\\nd. We design an algorithm to assume \\\"collisions\\\" between the detected fingers in multiple frames, and the piano keys, from very noisy data.\\n\\n2. While we agree more baselines are better to have, in our experiments we aim to convince the reader that our data is indeed valuable, and not just noise, and we think it manages to show that. We do not aim to create a new method for automatic fingering in this work.\\nWhile indeed the proposed approach trained on the PIG dataset gets slightly worse results from the SOTA on the PIG dataset, the case we are trying to make is not that our method is better, it is that our data is good and valuable, and can be used for future developed methods.\\n\\n\\nAgain, we would like to thank you, and perhaps get your feedback on what should be improved in this paper? What experiments/analysis you would like to see?\"}",
"{\"title\": \"Main reply to Review #2\", \"comment\": \"Thank you for your review.\", \"we_would_like_to_address_your_review\": \"- **... the rest of the paper is somewhat incremental/engineering piece ...** - while true, that engineering was involved, we think that in most steps of our pipeline we introduce new, novel methods / usages to tackle different issues. (described further in the rest of this comment).\\n- **... which depends somehow on previous works (see Nakamura,2019)** - We don't see what you are referring to, as in their work they release a manually annotated dataset, while in this work we devise an unsupervised method to extract such annotations from videos and MIDI files.\\n- **I fail to see much novel scientific contribution to the area of research (apart from the dataset) and I\\u2019m not sure whether there are enough scientific technical advancements.** - I will now list the novel parts of this work, that were used in order to get a good dataset:\\n1. We show how it is possible to segment each key of the piano.\\n2. We show the failures of current pose detection models, and devise a new, general way to address such failures, that we show is robust - and for us, reduces 50% of the accumulative error.\\n3. We design a new way to align between MIDI and piano video recordings, which is accurate up to the video sampling rate.\\n4. We design an algorithm to assume \\\"collisions\\\" between the detected fingers in multiple frames, and the piano keys, from very noisy data.\\n\\n- **Furthermore, the experimental setting is somewhat limited, and it is not clear whether results are statistically significant.** - You are correct that we did not perform a statistical significance test, we will do such for the final version of the paper. We show 50% error reduction between previous SOTA and human agreement results. The experimental is not meant to devise a new method for piano fingering, instead, it is design to convince the reader that our data is indeed valuable, and not just noise, and we think it manages to show that.\\n\\nAgain, we would like to thank you, and perhaps get your feedback on what should be improved in this paper? What experiments/analysis you would like to see?\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors proposed an automatic piano fingering algorithm, that accepts YouTube videos and corresponding MIDI files and outputs fingering prediction for each note. The claimed contribution is two-fold: First, they proposed the algorithm, and second, they claim that the algorithm can be used to automatically generate large datasets for piano fingering problems. The motivation is clearly stated and convincing. The overall algorithm is mainly described.\\n\\nHowever, I would like to reject this paper. Major issues:\\n\\n* Some key information is missing in Section 3.6, which is the only section that shows technical details: What is X_{n_k}? How is that related to the estimated finger poses? What is the function f in the definition of function g? (Also, it would be helpful to label the equations for clarification.) Are you doing Bayesian inference? With the key information missing, it is hard to fully understand the remaining technical details in this section. \\n* Their experimental results cannot properly support their claims. In Section 4.2, the authors try to show the strength of their proposed piano fingering algorithm by comparing their automatically annotated dataset APFD with an existing manually annotated dataset PIG. The authors showed the evaluation results of models trained and fine-tuned with different datasets. However, this is not an acceptable comparison for me, due to several reasons.\\nFirst, in order to show the strength of automatic piano fingering prediction, it is much better to directly run the prediction algorithm on datasets with known labels. According to the related work section, there is at least one existing work by Takegawa et al. that uses videos and MIDI files to detect piano fingering. Can you compare your algorithm with theirs? \\nSecond, it is essentially unreliable to compare two datasets by comparing the performance of two prediction models, as there are too many implementation details that are almost impossible to control. \\nThird, it is not clear how we should compare the testing errors in Table 2. Yes, a model initially trained on PIG and fine-tuned on APFD may perform better than a model trained merely on APFD, but does that suggest anything (and the advantage is just 0.4%)? Similarly, the experimental result that an MLP model initially trained on APFD then fine-tuned with PIG works better than an HMM model that is trained with PIG data alone cannot prove anything. There are too many possible reasons that may lead to this experimental result. \\n* How is this method more attractable than the existing ones? There are neither experimental comparisons nor high-level justifications of why the existing algorithms are not applicable to the given scenario. In Section 2, although the authors described a good number of existing work on piano-fingering and their drawbacks, they failed to point out the strength of their paper as a comparison. As a result, the strength of this paper is still unclear after reading this section. How does this paper avoid the drawbacks of these previous papers? \\n* The writing of this paper needs to be greatly improved. It takes a lot of effort to literally understand this paper: There may be missing parts, misplaced clauses, and broken logic between sentences. 
I have listed several examples in the minor issues part.\", \"minor_issues\": [\"In the first paragraph of Section 1: The sentence before 'In practice ...' is incomplete.\", \"In the last paragraph of Section 1: Missing brackets for \\\\textsection 3.3 and \\\\textsection 3.4. Also, 'on A new dataset we introduce' should be 'on THE new dataset we introduce'.\", \"On page 3, the sentence 'In this work, we continue the transition of search-based methods that optimize a set of constraints with learning methods that ...' is not making sense to me. Do you mean that your work is an extension of search-based methods, or do you mean that your work is not a search-based method? Also, are you optimizing a set of constraints, or optimizing with a set of constraints?\", \"On page 3, the last sentence in Section 2: '... and adapt their model to compare TO our ...' should be '... and adapt their model to compare WITH our ...'. The last part of this sentence is also a bit confusing: How do you compare a model with a dataset?\", \"On page 4, the paragraph starting with 'MIDI files': The first two sentences are almost the same; the period between them is missing. I guess one of them should be deleted. The following sentences in this paragraph are also subject to grammatical errors. For example, the sentence 'It consists of a sequence of events ... to carry out, WHEN, and allows for ...' is not a complete sentence. 'We only use videos that come along with a MIDI file' -> 'We only use videos that come along with MIDI files'.\", \"On page 5, last paragraph in Section 3.3: 'highest probability defections' -> 'highest probability detections'.\", \"The last paragraph on Page 5: 'Using off-the-shelve ...' -> 'Use off-the-shelf ...'.\", \"In Section 4.2.1, the corresponding result is Table 2, instead of Table 1.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper is a nice piece of works which clearly articulates the objective and the subsequent discussion. The focus of the paper--i.e. disclose the difficulties of piano fingering data annotation and the proposal of automating this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision DNN-based algorithms \\u2014although not really mainstream, it does provide some practical insights using a couple of experimental settings (piano fingering model and prediction) to help the readers.\\n\\nI really enjoyed reading this paper. I think that it can be considered a relevant and interesting piece of work, very well written and clear. Furthermore, providing new benchmarks/datasets/competitions for the AI community is always refreshing. Also, the results seem believable and solid, and potentially useful. \\n\\nMy only concern is that, although the rationale and utility of the paper is clear, the rest of the paper is somewhat incremental/engineering piece which depends somehow on previous works (see Nakamura,2019). I fail to see much novel scientific contribution to the area of research (apart from the dataset) and I\\u2019m not sure whether there are enough scientific technical advancements. Furthermore, the experimental setting is somewhat limited, and it is not clear whether results are statistically significant.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2478\", \"review\": [\"<Strengths>\", \"This paper addresses an interesting and practically important problem: detection of piano fingering from videos and MIDI files. Fingering is a valuable source for piano learners and has to be manually annotated otherwise. The proposed approach is automatic and low-cost from playing videos.\", \"This paper collects a large-scale dataset for piano-fingering named APFD, including 90 finger-tagged pieces with 155K notes.\", \"The paper reads very well.\", \"<Weakness>\", \"1. One major weakness of this work is lack of technical novelty.\", \"As described in section 3 in detail, the proposed approach consists of a sequence of well-known techniques (e.g. Faster R-CNN for hand detection and CycleGAN for finger pose estimation) and is largely based on lots of heuristics in every step of the procedure.\", \"Thus, the method may be practically viable but bear little technical novelty.\", \"2. Experimental results are rather weak.\", \"Only a single baseline is used in the existing PIG dataset, while no baseline method is compared for the new APFD dataset, for which more baselines may need to be implemented and compared.\", \"The proposed approach (64.1) is slightly worse than the previous SOTA (64.5) in the PIG dataset, although it is improved by fine-tuning with the APFD data that are not available for the previous SOTA. Thus, no experimental evidence is presented in the paper to convince that the proposed approach is better than existing ones.\", \"<Conclusion>\", \"Although this work is practically promising, my initial decision is \\u2018reject\\u2019 mainly due to lack of technical novelty and limited experiments.\"]}"
]
} |
S1e_9xrFvS | Energy-based models for atomic-resolution protein conformations | [
"Yilun Du",
"Joshua Meier",
"Jerry Ma",
"Rob Fergus",
"Alexander Rives"
] | We propose an energy-based model (EBM) of protein conformations that operates at atomic scale. The model is trained solely on crystallized protein data. By contrast, existing approaches for scoring conformations use energy functions that incorporate knowledge of physical principles and features that are the complex product of several decades of research and tuning. To evaluate the model, we benchmark on the rotamer recovery task, the problem of predicting the conformation of a side chain from its context within a protein structure, which has been used to evaluate energy functions for protein design. The model achieves performance close to that of the Rosetta energy function, a state-of-the-art method widely used in protein structure prediction and design. An investigation of the model’s outputs and hidden representations finds that it captures physicochemical properties relevant to protein energy. | [
"energy-based model",
"transformer",
"energy function",
"protein conformation"
] | Accept (Spotlight) | https://openreview.net/pdf?id=S1e_9xrFvS | https://openreview.net/forum?id=S1e_9xrFvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SK9ARl-vUF",
"Bylp_tzhsr",
"ByxLpbGhjr",
"ByeQ4s-3oS",
"SJerAWZhiB",
"BJldk-Z2sS",
"Hkg4BP-N5H",
"SJxY2dsAFB",
"Skgupl9TYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749948,
1573820788889,
1573818814126,
1573817130700,
1573814733318,
1573814496145,
1572243259997,
1571891377008,
1571819712168
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2477/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2477/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2477/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2477/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2477/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2477/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2477/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2477/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper proposes a data-driven approach to learning atomic-resolution energy functions. Experiment results show that the proposed energy function is similar to the state-of-art method (Rosetta) based on physical principles and engineered features.\\n\\nThe paper addresses an interesting and challenging problem. The results are very promising. It is a good showcase of how ML can be applied to solve an important application problem. \\n\\nFor the final version, we suggest that the authors can tune down some claims in the paper to fairly reflect the contribution of the work.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for addressing my comments - as reflected in my score ( I'll stay by my original accept-score) I find this paper interesting and hope to see it at ICLR2020.\"}",
"{\"title\": \"Summary response\", \"comment\": \"We would like to express sincere appreciation to the reviewers for insightful and detailed comments and for the constructive nature of their feedback. We appreciate their interest in our work and the unanimously positive evaluation.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for constructive criticism and questions. This feedback has been very helpful and we've made a number of alterations to the text in response.\\n\\n>The paper several times seems to suggest that energy functions derived from physical knowledge are problematic (e.g. abstract)\\n\\nWe respectfully disagree with the reviewer on this point. The abstract states: \\u201cThe model is trained solely on crystallized protein data. By contrast, existing approaches for scoring conformations use energy functions that incorporate knowledge of physical principles and features that are the complex product of several decades of research and tuning.\\u201d To the best of our knowledge, this is an accurate statement about current energy functions. Our argument is not about the absolute performance of physical or learned approaches, but rather the ability of learning-based methods to approach the performance of knowledge-based methods with less development effort, thus indicating potential for this direction. \\n\\n>My main (slightly minor) concern about the paper is the obsession with discarding years of learned knowledge about handcrafted energy functions with fully learned functions. It seems to me that combining domain knowledge with a learned model should close the last small gap (and presumably surpass) the performance of e.g Rosetta.\\n\\nWe strongly agree that the possibility of combining energy functions and features learned by neural networks with existing energy functions that incorporate domain knowledge is a promising direction for future work. In the introduction we write \\u201cFurthermore, since energy functions are additive, terms learned by neural energy-based models can be naturally composed with those proposed by expert knowledge.\\u201d We note that this possibility parallels (and might be seen as an extension of) the long line of work combining statistical potential terms with physical energy terms.\", \"we_revise_the_above_statement_as_follows\": \"\\u201cSince energy functions are additive, terms learned by neural energy-based models can be naturally composed with those derived from physical knowledge.\\u201d\\n\\n>Given that the paper is motivated by fully learning an energy function from data without injecting any prior knowledge i would like the authors comment on the construction of the q(x|c) distribution. This is based on essentially a contingency table between Phi/Psi/Chi angles and to me seems like injecting prior knowledge about physically possible rotamer configurations into the learned procedure? \\n\\nThe rotamer library we use [1] has been fit from data. We note that this library is only used to sample rotamer configurations and that the energy function does not directly include terms or features analytically derived from rotamer probabilities.\\n\\nWe also noticed in the related work section where we discuss the work of Leaver-Fay et al. [2] we had stated \\u201cour method automatically learns complex features from data without prior knowledge.\\u201d We apologize if this contributes to an impression that a major concern of the paper is learning without prior knowledge. 
We revise this to \\u201cour method automatically learns complex features from data.\\u201d\\n\\n>However the experimental results are always slightly worse than the Rosetta energy function and I (strongly) suggest that the authors rephrase those statements to reflect that.\\n\\nWe thank the reviewer for pointing out this concern which we agree is valid, and we have revised the paper accordingly to state \\u201cour model achieves performance close to that of the Rosetta energy function\\u201d throughout. \\n\\n>Why are the only results for 16 amino acids in the table?\\n\\nAlthough we state in the main text that the numbers reported in Table 3 for Rosetta are from the paper of Leaver-Fay et al. [2], we overlooked mentioning this in the caption to the table. We've rectified this. We only present results for those 16 amino acids because they are the only ones included in the Leaver-Fay et al. paper.\\n\\n>Secondly, just naively counting, the Atom Transformer is better than Rosetta in 11 of 16 amino acids - this seems slightly at odds with the main result where the Atom Transformer is performing slightly worse than Rosetta?\\n\\nWe also agree it is surprising that our method performs better on many of the amino acids. We observe that our performance is worse on some of the more common amino acids.\\n\\n>The notation in section 3 and 4 is slightly confusing\\n\\nThe angular coordinates and Cartesian coordinates are interchangeable. Since the model takes as input Cartesian coordinates of the atoms, we find it more expressive and convenient to write the importance distribution in this form. We have revised the paper to clarify this.\\n\\n---\\n[1] Shapovalov and Dunbrack. \\\"A smoothed backbone-dependent rotamer library for proteins derived from adaptive kernel density estimates and regressions.\\\" (2011)\\n\\n[2] Andrew Leaver-Fay, et al. \\\"Scientific benchmarks for guiding macromolecular energy function improvement.\\\" (2013)\"}",
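The rotamer-recovery protocol this exchange describes can be sketched as follows: candidate side-chain conformations are drawn from a rotamer library (used only for sampling, per the response), each is scored by the learned energy function, and the lowest-energy candidate is the prediction. `energy_fn` and `sample_rotamers` are placeholders for the trained model and the library sampler, and the toy stand-ins below are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of rotamer recovery with an energy-based model: sample
# candidate chi-angle sets from a rotamer library, score each with the
# learned energy, and return the minimum-energy candidate.
def recover_rotamer(context_atoms, sample_rotamers, energy_fn, n=100):
    candidates = [sample_rotamers() for _ in range(n)]      # chi-angle sets
    energies = np.array([energy_fn(context_atoms, c) for c in candidates])
    return candidates[int(np.argmin(energies))]

# Dummy stand-ins so the sketch runs end to end:
rng = np.random.default_rng(0)
pred = recover_rotamer(
    context_atoms=None,
    sample_rotamers=lambda: rng.uniform(-np.pi, np.pi, size=4),  # 4 chi angles
    energy_fn=lambda ctx, chi: float(np.sum(chi ** 2)),          # toy energy
)
print(pred)
```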
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for thoughtful and detailed comments.\\n\\n>The study claims to provide an energy predictor in general while training is performed on rotamer recovery of a single amino acid. These claims should be downplayed and what was shown at most is that this predictor can be applied to predict energy for a restricted problem of the rotamer recovery task. \\n\\n>A reader gets a wrong impression at the beginning of the manuscript that the study tries to solve general and classical rotamer prediction for an entire protein. It becomes clear only at the very end that the study does not try to resolve all rotamer conformations in a protein, it can only predict one rotamer at the time given the atoms surrounding the target residue are correct. This should be explained in the beginning that the study does not attempt combinatorial side chain optimization for a fixed backbone;\\n\\nWe apologize that this was not clear -- we endeavored to make explicit that this paper only addresses the rotamer recovery problem and not the more general combinatorial side chain optimization and sequence recovery problems. For example, the abstract states: \\u201cTo evaluate our model, we benchmark on the rotamer recovery task, a restricted problem setting used to evaluate energy functions for protein design.\\u201d And in the introduction: \\u201cOur results serve as a gateway for progress on the more general problem settings of combinatorial side chain optimization for a fixed backbone (Tuffery et al., 1991; Holm & Sander, 1992) and the inverse folding problem (Pabo, 1983)\\u2026\\u201d.\\n\\nHowever, we acknowledge that the wording could be clearer and have rewritten them as follows. We hope that this helps.\\n\\n\\u201cTo evaluate our model, we benchmark on the rotamer recovery task, the problem of predicting the conformation of a side chain from its context within a protein structure, which has been used to evaluate energy functions for protein design.\\u201d\\n\\n\\u201cOur results open for future work the more general problem settings of combinatorial side chain optimization for a fixed backbone (Tuffery et al., 1991; Holm & Sander, 1992) and the inverse folding problem (Pabo, 1983)\\u2026\\u201d\\n\\n>In-depth description of the neural network is required at least in supplemental materials, so that the study could be replicated. Ideally, a working application and training protocol code would be available.\\n\\nWe intend to make the code available before the ICLR conference is held. We\\u2019ve also added an appendix with a description of the network, baselines, and training algorithm.\\n\\n>No techniques against overtraining are discussed. How was the model validated?\\n\\nWe added a brief description in the text. In short, we used a held-out 5% subset of the training data as a validation set.\\n\\n>Small issues\\n\\nWe appreciate very much the careful attention to proofing errors in the text and we have made the corrections identified.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the comments and helpful questions.\\n\\n>Are there newer baselines the authors can compare to?\\n\\nRosetta is considered to be state of the art for protein energy functions. Additionally we\\u2019ve updated the paper to include an additional baseline in the form of a graph neural network. Its performance is below that of the Transformer but above that of the other baselines.\\n\\n>For example, the related works section discusses some recent works\\n\\nThere are a number of possible architectures that might make sense for this problem, but to our knowledge none of them have been studied previously for rotamer recovery. We think it is a very interesting direction for future work to explore what inductive biases and alternative model architectures might improve performance and generalization.\\n\\n>I am having a hard time interpreting the numbers in the experimental results (Table 1 and Table 2). I understand that they are rotamer recovery rates and that the max would be 100%.\\n\\nWe\\u2019re not aware of an estimate of the theoretically attainable maximum performance for this task, but we expect it to be well below 100%. This is because the side chains are free to adopt a variety of conformations when they are less constrained - especially at the surface of the protein where there is not tight packing. We expect that the Rosetta numbers represent very good performance (since Rosetta has been used successfully to design new functional proteins).\\n\\n>I can\\u2019t tell what amount of difference is significant\\n\\nWe don\\u2019t have a principled way to determine error bars for the Rosetta results. We note that the gap in performance between score12 and ref2015 (the newer version of the energy function) is approximately 1-1.4% for rotamer trials and 1-2.2% for rt-min. These performance gains are regarded as being significant, thus providing some objective scale for our results. We have revised the paper to state our model achieves performance close to that of the Rosetta energy function rather than comparable to Rosetta.\\n\\n>Figure 1 should have a caption clarifying notation etc. \\n\\nWe agree this is an oversight in the original draft and we\\u2019ve added a caption to Figure 1.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2477\", \"review\": \"The paper proposes an energy based model (learned using the transformer architecture) to learn protein conformation from data describing protein crystal structure (rather than one based on knowledge of physical and biochemical principles). To test the efficacy of their method, the authors perform the task of identifying rotamers (native side-chain configurations) from crystal structures, and compare their method to the Rosetta energy function which is the current state-of-the-art.\\n\\n+ Presents an important and emerging application of neural networks\\n+ Relatively clearly written, e.g. giving good background on protein conformation and related things to a machine learning audience\\n\\n-The baselines that are compared to are set2set and Rosetta. Are there newer baselines the authors can compare to? For example, the related works section discusses some recent works.\\n-I am having a hard time interpreting the numbers in the experimental results (Table 1 and Table 2). I understand that they are rotamer recovery rates and that the max would be 100%. But I can\\u2019t tell what amount of difference is significant. \\n-Figure 1 should have a caption clarifying notation etc.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"Summary of the paper\", \"The authors propose a predictive model based on the energy based model that uses a Transformer architecture for the energy function. It accepts as input an atom and its neighboring atoms and computes an energy for their configuration. The input features include representations of physical properties (atom identity, atom location within a side chain and amino-acid type) and spatial coordinates (x, y, z). A set of 64 atoms closest to the beta carbon of the target residue are selected and each is projected to a 256-dimensional vector. The predictive model computes an energy for the configuration of these 64 atoms surrounding a residue under investigation. The model is reported to achieve a slightly worse but comparable performance to the Rosetta energy function, the state-of-the-art method widely used in protein structure prediction and design. The authors investigate model\\u2019s outputs and hidden representations and conclude that it captures physicochemical properties relevant to the protein energy in general.\", \"Strengths\", \"A very interesting contribution to structural biology with a non-trivial application of deep learning.\", \"The study is accompanied with structural biology interpretation of the designed energy predictor. Usually similar studies do not attempt and are limited with prediction rates and mathematical analysis.\", \"Appropriate background is provided for non-experts in protein structural biology. The paper is clear and well written.\", \"An adequate literature review is provided relative to the problem. Bird-eye view is given for future direction with justified optimism.\"], \"weaknesses\": [\"The study claims to provide an energy predictor in general while training is performed on rotamer recovery of a single amino acid. These claims should be downplayed and what was shown at most is that this predictor can be applied to predict energy for a restricted problem of the rotamer recovery task.\", \"A reader gets a wrong impression at the beginning of the manuscript that the study tries to solve general and classical rotamer prediction for an entire protein. It becomes clear only at the very end that the study does not try to resolve all rotamer conformations in a protein, it can only predict one rotamer at the time given the atoms surrounding the target residue are correct. This should be explained in the beginning that the study does not attempt combinatorial side chain optimization for a fixed backbone;\", \"In-depth description of the neural network is required at least in supplemental materials, so that the study could be replicated. Ideally, a working application and training protocol code would be available.\", \"No techniques against overtraining are discussed. 
How was the model validated?\"], \"small_issues\": [\"function f_theta(A) is used before it is introduced.\", \"noun is missing: using our trained with deep learning.\", \"articles missing: that vary from amino acid to amino acid.\", \"KL abbreviation not explained\", \"MCMC abbreviation not explained\", \"wrong artile: that processes the set of atom representations.\", \"misprint: resolution fine r than 1.8\", \"wrong word, has to be \\u201clower\\u201d: sequence identity greater \\u02da\", \"everyday word: break out\", \"abbreviation not explained: t-SNE\", \"misprint case: Similarly, In contrast to our work\"]}",
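For readers skimming this thread, the architecture the review summarizes (the 64 atoms nearest the beta carbon, each embedded into a 256-dimensional vector from categorical properties plus coordinates, a Transformer producing a scalar energy) can be sketched in a few lines. This is a minimal PyTorch illustration; the vocabulary sizes, mean pooling, head count, and depth are assumptions for illustration, not the authors' exact design:

```python
import torch
import torch.nn as nn

class AtomTransformerEnergy(nn.Module):
    """Sketch: scalar energy for the 64 atoms nearest a residue's beta carbon."""
    def __init__(self, n_atom_types=20, n_positions=16, n_amino_acids=20, d=256):
        super().__init__()
        # categorical physical properties, each embedded into the shared d-dim space
        self.atom_emb = nn.Embedding(n_atom_types, d)
        self.pos_emb = nn.Embedding(n_positions, d)     # location within the side chain
        self.aa_emb = nn.Embedding(n_amino_acids, d)
        self.coord_proj = nn.Linear(3, d)               # (x, y, z) -> d
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.energy_head = nn.Linear(d, 1)

    def forward(self, atom_type, chain_pos, aa_type, coords):
        # categorical inputs: (batch, 64); coords: (batch, 64, 3)
        h = (self.atom_emb(atom_type) + self.pos_emb(chain_pos)
             + self.aa_emb(aa_type) + self.coord_proj(coords))
        h = self.encoder(h)                              # set of atom representations
        return self.energy_head(h.mean(dim=1)).squeeze(-1)  # pooled scalar energy

model = AtomTransformerEnergy()
e = model(torch.randint(0, 20, (2, 64)), torch.randint(0, 16, (2, 64)),
          torch.randint(0, 20, (2, 64)), torch.randn(2, 64, 3))
print(e.shape)  # torch.Size([2])
```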
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes an Energy-Based-Model (EBM) for scoring the possible configurations of amino acid side chain conformations in protein structures with known amino acid backbone structure. The energy of the side-chain conformation (the chi-angle) for a given amino acid in the structure is calculated as a function of a local neighbourhood of atoms (A), where each atom is embedded into a 256d vector using its cartesian coordinates, atom identity, atom side-chain position and amino acid identity. The model is trained using approximate likelihood where the model samples are generated using precalculated table (from literature) of possible Chi angles conformations conditioned on the back-bone amino acid identity and back-bone angles. The results seem comprehensive comparing the transformer based energy function parameterization with two sensible baselines as well as the Rosetta energy function which is the de facto standard tool for these types of calculations. Using rotamer recovery accuracy as the benchmark measure the empirical results are close to performance as the Rosetta energy model however always slightly worse. Further visualizations of the energy levels for different Chi angles seems to support that the learned energy function captures well known characteristics of the rotamer configuration energy landscape.\", \"score\": \"Overall I think that the paper is solid and tackles an interesting problem of learning energy functions for physical systems from data. The experimental results are comprehensive and mostly supports the claims made in the paper. My main (slightly minor) concern about the paper is the obsession with discarding years of learned knowledge about handcrafted energy functions with fully learned functions. It seems to me that combining domain knowledge with a learned model should close the last small gap (and presumably surpass) the performance of e.g Rosetta. Combining I think the paper should be accepted at the ICLR.\\n\\nComment/Questions:\\n\\nMotivation\\nQ1.1) Paper motivation: The paper several times seems to suggest that energy functions derived from physical knowledge are problematic (e.g. abstract). For many physical systems I don\\u2019t think this is true and would like the author's comments on why learned energy functions are preferable - arguably they can only capture properties present in the data and not any prior knowledge? \\n\\nQ1.2) Given that the paper is motivated by fully learning an energy function from data without injecting any prior knowledge i would like the authors comment on the construction of the q(x|c) distribution. This is based on essentially a contingency table between Phi/Psi/Chi angles and to me seems like injecting prior knowledge about physically possible rotamer configurations into the learned procedure? \\n\\nMethod / Experimental Results:\\nQ2.1) With respect to the primary results in table 1) and table 2). The authors claim comparable results to the Rosetta Energy function (page 5. Sec 4.3, page 9, sec 6). 
However the experimental results are always slightly worse than the Rosetta energy function and I (strongly) suggest that the authors rephrase those statements to reflect that.\\n\\nQ2.2 With Respect to Table 3) Firstly, Why are the only results for 16 amino acids in the table ? Secondly, just naively counting the Atom Transformer are better than Rosetta in 11 of 16 amino acids - this seems slightly at odds with the main result where the Atom Transformer is performing slightly worse than Rosetta?\\n\\nClarity\\nQ3): The notation in section 3 and 4 is slightly confusing. Especially I think q(x|c) is slightly misleading since it, (to my understanding) is a conditional distribution over Chi, conditioned on Psi/Phi/AminoAcid and not rotamer (x) and surrounding molecular context (c).\"}"
]
} |
BkluqlSFDS | Federated Learning with Matched Averaging | [
"Hongyi Wang",
"Mikhail Yurochkin",
"Yuekai Sun",
"Dimitris Papailiopoulos",
"Yasaman Khazaeni"
] | Federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device, decoupling the ability to do model training from the need to store the data in the cloud. We propose the Federated Matched Averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g. convolutional neural networks (CNNs) and LSTMs. FedMA constructs the shared global model in a layer-wise manner by matching and averaging hidden elements (i.e. channels for convolution layers; hidden states for LSTM; neurons for fully connected layers) with similar feature extraction signatures. Our experiments indicate that FedMA not only outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real world datasets, but also reduces the overall communication burden. | [
"federated learning"
] | Accept (Talk) | https://openreview.net/pdf?id=BkluqlSFDS | https://openreview.net/forum?id=BkluqlSFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3-joY3kfZ3",
"pGEb1YoxcR",
"aVDw9i8E6I",
"HJgiqtU9sr",
"rke7FaeciB",
"B1xIeO3viS",
"BJxZaDnwsH",
"Syg7FvhwiH",
"SkexqR4GoB",
"SJlZzmfeiB",
"BygfWezloB",
"HJlE_doTKH",
"rkgJ17ZpFH"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1580015038136,
1578370903872,
1576798749920,
1573706131441,
1573682554544,
1573533677925,
1573533625295,
1573533563027,
1573174919657,
1573032713151,
1573031930244,
1571825772363,
1571783382953
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2476/Authors"
],
[
"~Martin_Jaggi1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2476/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2476/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2476/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2476/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2476/Authors"
],
[
"~Nan_Jiang7"
],
[
"~Anthony_Wittmer1"
],
[
"ICLR.cc/2020/Conference/Paper2476/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2476/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2476/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Relation of OT fusion to prior work\", \"comment\": \"Hi Martin & Sidak,\\n\\nThank you for bringing your work to our attention. We will add OT fusion to the discussion in the camera ready version of our paper. We also wish to mention that there is a prior work [1] that proposed a similar \\\"align and average\\\" framework that we build on in FedMA. We would appreciate if you can add discussion of [1] and FedMA to your paper.\\n\\nBest regards,\\nAuthors\\n\\n[1] M. Yurochkin et al, Bayesian Nonparametric Federated Learning of Neural Networks, ICML 2019.\"}",
"{\"title\": \"relation to 'Model Fusion via Optimal Transport'\", \"comment\": \"dear authors\\ncongrats on your nicely written paper on this cool application! \\nwe have a similar&simultaneous approach in the NeurIPS 2019 optimal transport workshop https://arxiv.org/abs/1910.05653 , where we also considered federated learning as an application. we also added some additional baselines for the standalone merging/fusion operator.\\nwould you mind adding it to the discussion for your camera ready version?\\nthanks in advance!\\nmartin & sidak\"}",
"{\"decision\": \"Accept (Talk)\", \"comment\": \"The authors presented a Federate Learning algorithm which constructs the global model layer-wise by matching and averaging hidden representations. They empirically demonstrate their method outperforms existing federated learning algorithms\\n\\nThis paper has received largely positive reviews. Unfortunately one reviewer wrote a very short review but was generally appreciative of the work. Fortunately, R1 wrote a detailed review with very specific questions and suggestions. The authors have addresses most of the concerns of the reviewers and I have no hesitation in recommending that this paper should be accepted. I request the authors to incorporate all suggestions made by the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"We thank Reviewer 1\", \"comment\": \"We really appreciate your careful review and encouraging response to our rebuttal. We will elaborate on the connection to optimal transport that we mentioned in our response in the final version of the paper. We will also more smoothly integrate the section on \\u201cSolving matched averaging\\u201d (including the presentation of BBP-MAP) into the flow of the paper.\"}",
"{\"title\": \"Thanks for rebuttal!\", \"comment\": \"I appreciate the careful reply and revisions of Sec 2.1 especially. I think the novelty is above-the-bar (esp if connections to optimal transport could be made more explicit). My technical concern #1 is resolved thanks to a clear response from authors that the method handles the concern. My concern #2 (about motivation for BBP-MAP) is mostly resolved but for some presentation issues (I think perhaps another few editing passes to make sure the presentation of BBP-MAP in Sec 2.1 is clear and well-motivated might be needed, because it still feels like the current simplicity-focused justification is a bit weak).\\n\\nI'm happy to accept the paper.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the thorough review and feedback. We address the concerns raised below.\\n\\nWe extended paragraph \\\"Solving matched averaging\\\" in Section 2.1 clarifying how our approach can learn the size of the global model and the motivation for using BBP-MAP. We hope this addresses both of the technical concerns. To summarize, our approach does allow global model to learn more units than the client models and in your example with \\\"horse hooves\\\" and \\\"snake skin\\\" it will do the right thing, i.e. it will not match \\\"horse hooves\\\" to \\\"snake skin\\\" and instead increase the size of the global model keeping both. In our experiments, when simulating heterogeneous CIFAR-10 partitioning, we use Dirichlet distribution with a small concentration parameter to partition each of the class examples across clients - this results in very diverse class distributions across clients (some might even completely lack several classes). Please see the \\\"Experimental Setup\\\" paragraph in Section 3 for details. Our experiments demonstrate that FedMA performs well under the diverse class distributions scenario. The global neural net found by FedMA is bigger than local models, but only mildly - please see rows \\\"Model growth rate\\\" in Tables 1 and 2.\\n\\nIn our extended \\\"Solving matched averaging\\\" paragprah we also gave a general recipe for performing matched averaging with adaptive global model size. The idea is to consider an extended cost matrix and iteratively apply the Hungarian (Munkres) algorithm. Iterations are needed to handle multiple unknown permutation matrices, but if we only have two neural networks, then a single run of the Hungarian algorithm with our cost matrix is sufficient. We also clarified the motivation for using BBP-MAP: it simply gives us a way to pick cost matrix, matching threshold and model size penalty simultaneously, based on the model of Yurochkin et al. Otherwise, their algorithm is a special case of our framework.\\n\\n>>> Regarding the novelty\\n\\nWe believe our work has both practical and methodological contributions in comparison to Yurochkin et al. We would like to emphasize that LSTMs matching presented in this paper is special and differs from MLPs and CNNs. In eq. (6) we show that it leads to a quadratic assignment problem due to permutation applied on both sides of the hidden-to-hidden weights in the LSTM cell. Our solution is to use linear assignment corresponding to input-to-hidden weights to find the permutations, but account for the special permutation structure of the hidden-to-hidden weights when averaging them. For the CNNs, we showed that it is essentially same as the MLPs and we agree that in this case it is a relatively straightforward extension of Yurochkin et al. Overall, we think that combining and formalizing permutation invariance structure of all key architectures in the language of permutation matrices (instead of Bayesian modeling) is also a valuable contribution. For example, our formalism shows an interesting connection to Optimal Transport, i.e. eq. (2) is very similar to Wasserstein barycenter formulation, while the quadratic assignment arising in LSTMs is related to Gromov-Wasserstein barycenters. This connection may lead to better estimation of permutations for matched averaging, replacing BBP-MAP.\\n\\nAnother methodological contribution of our work is the combination of layer-wise matching and local re-training. 
In Figure 1 we experimentally showed that simply extending approach of Yurochkin et al. to CNNs (labeled One-Shot Matching on the plots) only works for basic architectures. FedMA enables efficient federated learning of modern architectures and demonstrates strong empirical performance on more challenging datasets in comparison to Yurochkin et al.\\n\\n>>> Minor Presentation Concerns\\n\\nWe have added theta definition before the equation.\\n\\nOne round of FedMA requires communication rounds equal to the number of layers in a network. At each of these communications, clients send weights of a single layer, then master matches and averages them and broadcasts back the resulting global weights for this one layer. To summarize, one round of FedMA requires \\\"number of layers\\\" communications, but the total message size is equal to one communication round of FedAvg, i.e. size of the full model. FedMA with communication is basically repeating rounds of FedMA with one important detail. Recall that FedMA learns the global model size, hence the global neural net is usually slightly bigger than the local models. To keep the size of the local models constant when proceeding to the next FedMA round, we re-set local models to the \\\"subsets\\\" of the global model that they were matched to. This has no significant communication overhead as permutation matrices needed to obtain those subsets can be easily stored and broadcasted as lists of integers when running a FedMA round, and each client naturally has a global model copy by the end of each FedMA round.\"}",
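To make the recipe in this response concrete, here is a minimal Python sketch of matching one client's layer into the global layer. A squared-error cost stands in for the BBP-MAP-derived cost, and the extra columns of the extended cost matrix let a client neuron start a new global neuron at a fixed penalty; the constant penalty and all names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_average(global_w, counts, client_w, new_unit_cost=1.0):
    """Align one client's layer (rows = neurons) to the global layer, then average.

    Column l < L: squared distance to global neuron l. The J extra columns
    let a client neuron found a new global neuron at a fixed penalty, so the
    global layer can grow when no existing match is good enough.
    """
    J, L = len(client_w), len(global_w)
    dist = ((client_w[:, None, :] - global_w[None, :, :]) ** 2).sum(-1)    # (J, L)
    cost = np.concatenate([dist, np.full((J, J), new_unit_cost)], axis=1)  # (J, L + J)
    rows, cols = linear_sum_assignment(cost)                               # Hungarian algorithm
    global_w, counts = list(global_w), list(counts)
    for j, l in zip(rows, cols):
        if l < L:  # matched: running average of the aligned weights
            global_w[l] = (counts[l] * global_w[l] + client_w[j]) / (counts[l] + 1)
            counts[l] += 1
        else:      # poor match: create a new global neuron instead
            global_w.append(client_w[j].copy())
            counts.append(1)
    return np.stack(global_w), np.array(counts)

# toy usage: fold three clients' layers into one global layer, one at a time
rng = np.random.default_rng(0)
g, c = rng.normal(size=(8, 5)), np.ones(8)
for _ in range(3):
    g, c = match_and_average(g, c, rng.normal(size=(8, 5)))
print(g.shape)  # the global layer may have grown beyond 8 neurons
```

Looping over clients like this is what the response calls iteratively applying the Hungarian algorithm; with only two networks, the single call suffices.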
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the feedback and provide answers to the raised concerns below.\\n\\n>>> Additional details on BBP-MAP and adaptive global model size\\n\\nWe extended Section 2.1 with a general procedure for matched averaging with adaptive global model size, where BBP-MAP can be seen as a specific way to carry out the optimization. The idea behind the adaptive global model size is to introduce additional columns in the cost matrix to avoid \\\"poor\\\" matches while penalizing the model size. Please see eq. (3) and the surrounding discussion.\\nRegarding the \\\"best possible\\\" permutation, for two neural nets Hungarian algorithm will find the global optima, but when averaging multiple neural nets, the iterative procedure we describe is only guaranteed to find a local optima.\\n\\n>>> Can you include the \\\"entire data\\\" baseline in more of the figures/plots (especially Figure 2)?\\n\\nWe included the \\u201centire data\\u201d baseline in all figures (except Figure 3, where it is not applicable).\\n\\n>>> The models and datasets covered in the experiments are adequate to demonstrate that the presented technique is worth exploring, but probably not for someone considering applying it in the context of a deployed federated learning application.\\n\\nWe agree that the scale of our current experiments is lagging behind the size of the real world federated learning applications. However, as federated learning is a relatively new problem in the literature, we believe it is also an issue in the majority of the prior results in this area (e.g. several papers studied federated learning simulated with CIFAR-10 as we did, but we are not aware of any paper with ImageNet experiments). We note that there in an optimism that FedMA will benefit from larger datasets: our \\\"Data efficieny\\\" experiment (Figure 5) shows that FedMA utilizes additional data more efficiently in comparison to other federated learning approaches. Further, the \\\"Effect of local training epochs\\\" (Figure 3) experiment shows that FedMA is the only method truly benefiting from well-trained local models, which might be important as we move onto larger datasets.\\n\\nFor the future work, we are exploring potential large scale datasets representative of the practical federated learning and planning to consider federated learning experiments simulated from ImageNet.\\n\\n>>> Is there some kind of equivalent of the \\\"entire data\\\" baseline that would represent e.g. the best known technique for taking into account skewed domains outside the federated context?\\n\\nWe agree that this is a valuable and natural question. We added three additional baselines to Figure 4 in the updated manuscript and updated the corresponding \\\"Handling data bias\\\" paragraph in the draft.\\n\\ni) Vanilla VGG training over CIFAR-10. For this baseline (No Bias), we simply conduct normal model training over the entire CIFAR-10 dataset without any grayscaling. This baseline is not a realistic solution to the data bias and is simply added for the reference.\\n\\nii) One way to alleviate data bias is to selectively collect more data to debias the dataset. In the context of our experiment, this means getting more colored images for grayscale dominated classes and more grayscale images for color dominated classes. To simulate this scenario we simply do a full data training where each class in both train and test images has equal amount of grayscale and color images. 
This procedure, Color Balanced, performs well, but selective collection of new data in practice may be expensive or even not possible.\\n\\niii) Instead of collecting new data, one may consider oversampling from the available data to debias. In Oversampling, we sample the underrepresented domain (via sampling with replacement) to make the proportion of color and grayscale images to be equal for each class (oversampled images are also passed through the data augmentation pipeline, e.g. random flipping and cropping, to further enforce the data diversity). Such procedure may be prone to overfitting the oversampled images and we see that this approach only provides marginal improvement of the model accuracy compared to centralized training over the skewed dataset and performs noticeably worse than FedMA.\\n\\nTo conclude, we agree that learning with skewed data is an exciting research direction. Our preliminary experiment indicates that FedMA has the potential to mitigate the data skewness, but further work is needed to obtain solutions as good as the one corresponding to the balanced data training (and without expensive additional data collection).\"}",
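A minimal sketch of the Oversampling baseline described in item iii), assuming numpy arrays `images`, `labels` and a boolean mask `is_gray` (all names hypothetical): within each class, the under-represented domain is resampled with replacement until the two domains are balanced.

```python
import numpy as np

def oversample_balanced(images, labels, is_gray, seed=0):
    """Per class, resample the smaller of {grayscale, color} with replacement
    so both domains end up with the same count. In the described setup, the
    oversampled images would still pass through the usual augmentation
    pipeline (random flips/crops) to keep some diversity."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(labels):
        gray = np.where((labels == c) & is_gray)[0]
        color = np.where((labels == c) & ~is_gray)[0]
        small, big = (gray, color) if len(gray) < len(color) else (color, gray)
        idx.extend(gray)
        idx.extend(color)
        if 0 < len(small) < len(big):  # pad the minority domain
            idx.extend(rng.choice(small, size=len(big) - len(small), replace=True))
    idx = np.asarray(idx)
    return images[idx], labels[idx]

images = np.random.rand(100, 3, 32, 32)
labels = np.random.randint(0, 10, 100)
is_gray = np.random.rand(100) < 0.8          # skewed toward grayscale
print(oversample_balanced(images, labels, is_gray)[0].shape)
```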
"{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for the thoughtful comments. We have followed the reviewers suggestions to revise and improve our manuscript, while providing extra experiments as requested. One notable addition is the extended paragraph \\\"Solving matched averaging\\\" in Section 2.1 describing a general recipe for performing the matched averaging of neural nets with adaptive size of the global neural net. BBP-MAP then can be seen as a specific way of carrying out the optimization procedure we described. We answer each reviewer\\u2019s questions individually.\"}",
"{\"title\": \"rude review\", \"comment\": \"I have never seen such a review. This is rude, How can you determine the quality of this paper based on \\\"important area\\\".\"}",
"{\"title\": \"More details\", \"comment\": \"Hi, could you please provide more details for the review?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper offers a beautiful and simple method for federated learning. Strong empirical results.\\n\\nImportant area. \\n .\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Edit: Thanks for the thorough and responsive rebuttal! I'm particularly happy to see the additional background on BBP-MAP and the baselines you've added for handling data bias. You've comprehensively addressed my questions and I think this paper should be accepted.\", \"original_review\": \"The authors extend the recently proposed Probabilistic Federated Neural Matching (PFNM) algorithm of Yurochkin et al. ( 2019) to more kinds of neural networks, show that it isn't as effective for larger models as it is for LeNet-sized ones, and propose enhancements that lead to a state-of-the-art approach they call FedMA. I'm convinced that this represents a meaningful advance in federated learning, although the paper could use some tightening up, and the experiments are somewhat limited.\", \"some_feedback\": [\"I'd like to see a little bit more description of BBP-MAP, as even though it's not one of the components of the algorithm you directly modify it's still the underlying mathematical primitive. How far is it from having the same effect that the \\\"best possible\\\" permutation would? How is it able to allow the number of neurons in the federated model to grow relative to the size of the client models?\", \"Can you include the \\\"entire data\\\" baseline in more of the figures/plots (especially Figure 2)?\", \"The models and datasets covered in the experiments are adequate to demonstrate that the presented technique is worth exploring, but probably not for someone considering applying it in the context of a deployed federated learning application. Since federated learning is a problem domain motivated more by applied concerns (privacy, edge vs. cloud compute, on-device ML) than other areas of machine learning theory, it would be particularly valuable to see experiments at larger scale (in particular, on larger or more realistic datasets).\", \"The section that demonstrates how your model addresses skewed data domains is fascinating! That's one area in which your experiments are directly relevant to federated learning in practice, and it's a rapidly growing area of research in itself (e.g. in its relationship to causal learning that Leon Bottou has recently been exploring). Exploring this further could make for a whole separate paper. In the mean time, though, is there some kind of equivalent of the \\\"entire data\\\" baseline that would represent e.g. the best known technique for taking into account skewed domains outside the federated context?\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Post Rebuttal Summary\\n---------------------------------\\nI have nudged my score up to an \\\"Accept\\\", based on my comments to the rebuttal below. I hope the authors continue to improve the readability of Sec. 2.1\\n\\nReview Summary\\n--------------\\nOverall I think this is almost above the bar to be accepted, and I could be persuaded with a strong rebuttal. The strengths here are the extensive experiments and the easy-to-implement method. The primary weakness of this paper is that it is a \\\"straightfoward\\\" way to extend the BBP-MAP method to CNNs and RNNs, so the methodological novelty is weak relative to the BBP-MAP past work (Yurochkin et al. ICML 2019). Other technical weaknesses limit the ability to use this method on clients with diverse class distributions, which will be common in real deployments.\\n\\nPaper Summary\\n-------------\\nThis paper addresses the problem of federated learning, where J separate \\\"clients\\\" with disjoint datasets each train a neural network model for a supervised problem, and then try to aggregate all J individual client models into one \\\"global model\\\" in a coherent way. The natural problem is that due to hidden units being permutable within one network, naively taking parameter averages across two client models will lead to bad accuracy without first coming up with a consistent ordering of the units in each layer. \\n\\nPrevious work (Yurochkin et al. ICML 2019) has developed a Bayesian nonparametric model based on the Beta-Bernoulli Process (BBP) for the case of federated learning of multi-layer perceptrons. However, the extension to convolutional layers or recurrent layers has yet to be solved, which is the focus of this paper. \\n\\nThis paper's algorithm (Federated Matched Averaging (FedMA), see Alg 1), proceeds by iteratively stepping thru the CNN or RNN layer by layer greedily from input to output. At each layer, we first solve a BBP-MAP optimization (bipartite matching using a BBP maximum a-posteriori objective as cost function, a subprocedure taken direclty from Yurochkin et al.). This obtains a consistent low-cost permutation for each client model. Then, the global model weights for that layer is the average of the aligned client weights. After the current layer update, each client keeps training, keeping all layers up to the current frozen but revising later layers. This layer-by-layer training can be applied to both CNNs and RNNs.\\n\\nThe proposed approach is compared to FedAvg and FedProx on MNIST and CIFAR image classification tasks with CNNs, and Shakespeare text classification tasks with RNNs. Later experiments explore the effect of communication efficiency (MB transfered between client and master), effect of local training epochs, handling biased class distributions, and interpretabilty.\\n\\n\\n\\n\\nNovelty & Significance\\n-----------------------\\nSolving federated learning problems is of increasing practical importance, and certainly trying to do so for CNNs and RNNs (more than just large MLPs) is important. So I like where the paper is going.\\n\\nAlthough the method is \\\"new\\\", it is more or less a straightforward extension of work by Yurochkin et al. (ICML 2019) to CNNs and RNNs. 
If you read the last few sentences of Yurochkin et al., you'll see \\\"Finally, it is of interest to extend our model-ing framework to other architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The permutation invariance necessitating matching inference also arises in CNNs since any permutation of the filters results in the same output, however additional bookkeeping is needed due to the pooling operations.\\\" I view this paper as a well-executed implementation of this \\\"bookkeeping\\\". Certainly not trivial, but to some readers perhaps not clearly \\\"above the bar\\\" for a top conference like ICLR.\\n\\n\\nTechnical Concerns\\n------------------\\n\\n## Concern 1: Client models will not always be alignable after permutation\\n\\nMy first concern is that there will not always be a one-to-one permutation of the neurons learned by two client models with different class distributions. Given fixed capacity at each layer, some clients may learn a filter for \\\"horse hooves\\\" (esp. if horse images are common to that client), while other clients may learn a filter for \\\"snake skin\\\" (if snakes are more common to that client). I wonder if we can quantify how well the aligned filters match in practice, and if there is any benefit to revising the alignment to allow some client-specific customizations (e.g. by having the global model can learn more units than the client model). \\n\\n## Concern 2: Use of the BBP-MAP subprocedure poorly motivated\\n\\nThe paper prioritizes a clean and easy-to-implement algorithm to resolve practical alignment issues between client CNN and RNN models. However, I was a bit underwhelmed that the BBP-MAP solution used by Yurochkin et al. was treated as a black-box subprocedure without much justification. I could see 2 preferable alternatives to the current use of BBP-MAP. Either a simpler approach using Eq. 2 with a squared error cost and the Munkres algorithm to solve bipartitite matching to obtain the permutation (which seems more in spirit of the rest of the paper). Or, a more sophisticated probabilistic approach (taking a Bayesian hierarchical model from Yurochkin et al. seriously and forming the estimated global weights from a weighted sums that includes both the clients (weighted by dataset size) and the assumed prior). As it is, I feel the BBP-MAP subprocedure in the current Algorithm 1 is poorly motivated for the task at hand.\\n\\n\\n\\nExperimental Evaluation\\n-----------------------\\n\\nOverall the experiments were extensive and demonstrated several apparent advantages (reduced need to transfer large memory during communication, etc.). \\n\\n\\nMinor Presentation Concerns\\n---------------------\\nBefore Eq. 2, you should introduce the \\\"\\\\theta\\\" notation\\n\\nI'm a bit confused about how \\\"FedMA\\\" differs from \\\"FedMA with communication\\\", even after reading Sec. 2.3. How exactly are communicate costs kept down? What are you sending from master to client at beginning of every \\\"round\\\" if not the full global model (all weights of the CNN)?\"}"
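The reviewer's closing question about "FedMA with communication" is answered in the authors' response earlier in this thread: layers travel one at a time, so a full round moves roughly the bytes of a single FedAvg round, plus cheap integer permutation lists. A schematic sketch of that round structure, where `match_and_average_layer`, `local_retrain`, and the client interface are hypothetical placeholders rather than the authors' code:

```python
def fedma_round(clients, num_layers, match_and_average_layer, local_retrain):
    """One FedMA round: layer-by-layer matching. The total traffic is about
    one full model per round, split over `num_layers` smaller messages.
    All callables passed in here are placeholders for illustration."""
    global_layers = []
    for l in range(num_layers):
        # each client uploads only the weights of layer l
        uploads = [c.get_layer(l) for c in clients]
        matched, perms = match_and_average_layer(uploads)  # Hungarian matching + averaging
        global_layers.append(matched)
        for c, p in zip(clients, perms):
            # broadcast layer l plus a list of integers (the client's permutation);
            # the client freezes layers <= l in the global order and keeps
            # training the layers above l before the next upload
            c.set_layer(l, matched, permutation=p)
            local_retrain(c, frozen_up_to=l)
    return global_layers
```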
]
} |
BylD9eSYPS | Clustered Reinforcement Learning | [
"Xiao Ma",
"Shen-Yi Zhao",
"Zhao-Heng Yin",
"Wu-Jun Li"
] | Exploration strategy design is one of the challenging problems in reinforcement learning (RL), especially when the environment contains a large state space or sparse rewards. During exploration, the agent tries to discover novel areas or high reward (quality) areas. In most existing methods, the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent. To tackle this problem, we propose a novel RL framework, called clustered reinforcement learning (CRL), for efficient exploration in RL. CRL adopts clustering to divide the collected states into several clusters, based on which a bonus reward reflecting both novelty and quality in the neighboring area (cluster) of the current state is given to the agent. Experiments on several continuous control tasks and several Atari-2600 games show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases. | [
"reinforcement",
"agent",
"quality",
"exploration",
"methods",
"novelty",
"current state",
"crl",
"exploration strategy design"
] | Reject | https://openreview.net/pdf?id=BylD9eSYPS | https://openreview.net/forum?id=BylD9eSYPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QwmqusbVHJ",
"SklciHXTKB",
"ryl70K8hYB",
"rkgyvJ3_uS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749888,
1571792290125,
1571740107042,
1570451286964
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2475/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2475/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2475/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper discusses a simple but apparently effective clustering technique to improve exploration. There are no theoretical results, hence the reader relies fully on the experiments to evaluate the method. Unfortunately, an in-dept analysis of the results is missing making it hard to properly evaluate the strength and weaknesses. Furthermore, the authors have not provided any rebuttal to the reviewers' concerns.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed a clustering based algorithm to improve the exploration performance in reinforcement learning. Similar to the count based approaches, the novelty of a new state was computed based on the statistics of the corresponding clusters. This exploration bonus was then combined with the TRPO algorithm to obtain the policy. The experimental results showed some improvement, compare with its competitors.\\n\\nAlthough the proposed method is somewhat similar to the earlier hash based approaches, I think it is still interesting by using the clustering, instead of computing the hash code with neural networks. On the other hand, the motivation and explanation of this method are not well presented. I also have some concern regarding the fairness of the comparison in experiments. The English usage could be improved as well. My detailed comments and questions are as follows.\\n1. The new proposal for the exploration bonus is provided in Equation (3). The denominator there is essentially the count, which is consistent with previous count based approaches (though not with the square root). For the numerator, I am a bit confused about the choice, as if \\\"N\\\" is small, the accumulated \\\"R\\\" could be small as well, which may offset the bonus based on count. I also didn't understand the author's claim that \\\"...it is highly possible that all states in cluster \\\\phi (s) have zero reward\\\", just below Equation (3). Unless the authors provide more details, I am not convinced that the enumerator could be a good choice.\\n2. Given the proposed bonus, I am wondering how sensitive could it be to the choice of hyperparameters, especially w.r.t \\\\eta. The authors may need to provide more ablation studies on their effect.\\n3. Another concern is regarding the scalability of the proposed method. Algorithm 1 implies that k-means needs to be conducted in every iteration, which could be very slow. So how about the running time of the proposed method, when compared with baselines?\\n4. In the experiments, the authors claimed that the code for TRPO-Hash is provided by its authors. However, the scores for TRPO-Hash were much worse than the numbers in the TRPO-Hash paper (see their Table 1). Do you have any explanation?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposed a new reinforcement learning framework named clustered reinforcement learning. The proposed method employs the clustering method to explore the novelty and quality in the neighboring area. Some suggestions are as below:\\n\\n1. The method adopts the k-means for clustering. How about other clustering methods, like Spectral Clustering and other recent deep clustering methods? It's expected to give the experiment comparison results with other clustering methods. Is the method senstive to the used clustering method. \\n2. Usually in the clustering tasks, how to decide the cluster number K is a crucial problem for many applications. And for this method, the clustering is employed in the exploration stage to cluster neighboring areas, how to determine the value of K. Are there any specifical information could be used? Moreover, could the clustering part be jointly trained in the framework? It' required to investigate the influence of K theoretically and experimentally.\\n3. As the author claimed, the novelty and quality are both important for exploration in RL, the key question is thus how to balance and utilize them carefully in the exploration stage. The contribution of this paper is about the state clustering for exploration in RL which can further reflect the novelty and quality. How to utilize and balance the novelty and quality is still unsolved in this paper. \\n\\nIn summary, I ack that the idea is effective but seems straightforward. It would be better to present some theoretical analysis.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a clear approach to improve the exploration strategy in reinforcement learning, which is named clustered reinforcement learning. The approach tries to push the agent to explore more states with high novelty and quality. It is done by adding a bonus reward shown in Eq. (3) to the reward function. The author first cluster states into clusters using the k-means algorithm. The bonus reward will return a high value for a state if the corresponding cluster has a high average reward. When the total reward in a cluster is smaller than a certain threshold, the bonus reward will consider the number of states explored. In the experiments, the authors test different models on two MuJoCo tasks and five Atari games. TRPO, TRPO-Hash, VIME are selected as baselines to compare with. Results show that the proposed bonus reward reaches faster convergence and the highest return in both MuJoCo tasks. In those five Atari games, the proposed method achieves the highest or second-highest average returns.\\n\\nAlthough the paper is generally easy to follow and the motivations of the equations are clear, the analysis of the results is missing and thus the paper provides very limited insights on the behavior of the proposed method. As a result, my opinion on this paper leans to a rejection. As ICLR recommends paper length to be 8 pages, the authors can and should use the remaining space to give more details. For example, the authors can show the mean reward and number of states in all clusters to see whether the agent is efficiently exploring different clusters. And as the number of clusters K is an important hyper-parameter, the reader will also be curious about the resultant performance of the method with different K.\", \"some_questions\": \"1) In Eq. (3), you have two hyper-parameters in the bonus reward, and for MuJoCo and Atari games, you are using different settings for the first coefficient. How do you choose the hyper-parameter settings? Do you perform grid search with another environment or a set of environments to determine the hyper-parameters?\\n\\n3) In the algorithm, the method has to learn new cluster assignments in each iteration. Does it significantly slow down the training of the agent? \\n\\n4) In the experiments, the authors compare with other methods on only five Atari games. However, there should be more environments available. What is the reason for choosing these five games? Unless it is difficult to gather the scores on more environments, I believe the authors shall provide results with more Atari environments.\\n\\n5) The conclusion (Section 6) is extremely short and it claims that \\\"CRL can outperform other SOTA methods in most cases\\\". However, for the five Atari games, the only game that CRL achieves the best performance among seven methods is Venture, according to Table 1. \\nSuch a claim will confuse readers as it is not in line with the results.\\n\\nAll in all, I believe this paper can be significantly improved if more details and analyses are provided.\"}"
]
} |
BJxI5gHKDr | Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning | [
"Arsenii Ashukha",
"Alexander Lyzhov",
"Dmitry Molchanov",
"Dmitry Vetrov"
] | Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks in terms of test performance. | [
"uncertainty",
"in-domain uncertainty",
"deep ensembles",
"ensemble learning",
"deep learning"
] | Accept (Poster) | https://openreview.net/pdf?id=BJxI5gHKDr | https://openreview.net/forum?id=BJxI5gHKDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AycpGQEQrjz",
"yZXKfKU2Bg",
"BkxCKUsFsH",
"Skli45uFsB",
"BJebZqOFjS",
"SJlB-P_tjB",
"rkglRIOFsS",
"HkgUlEdFsB",
"S1lvoIaRtH",
"rJg0iIgAFH",
"rkgh0fP3FS",
"ryxCquU9OH"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1588078372838,
1576798749860,
1573660294429,
1573648946551,
1573648889123,
1573648124894,
1573648071614,
1573647341611,
1571899038953,
1571845798475,
1571742420344,
1570560149675
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2473/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2473/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2473/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2473/AnonReviewer2"
],
[
"~Yukun_Ding1"
]
],
"structured_content_str": [
"{\"title\": \"Additional links\", \"comment\": \"Additional links:\\n\\n- Blog: https://senya-ashukha.github.io/pitfalls-uncertainty&ensembling\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper points out pitfalls of existing metrics for in-domain uncertainty quantification, and also studies different strategies for ensembling techniques.\\n\\nThe authors also satisfactorily addressed the reviewers' questions during the rebuttal phase. In the end, all the reviewers agreed that this is a valuable contribution and paper deserves to be accepted. \\n\\nNice work!\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to \\\"A related work\\\"\", \"comment\": \"Thank you. This work appears relevant and we will cite it in the next revision of our paper.\"}",
"{\"title\": \"Response to Review #2 (part 2/2)\", \"comment\": \"> 16.\\tThe issues of uncalibrated log-likelihood and TACE are clearly shown in the paper, whereas the issues with misclassification detection are only verbally discussed. An illustrative example, at least a toy thought example, could really improve the paper here\\n\\nAUROC/AUPR for misclassification detection plainly provides numbers that can not be compared across different models. We will try to come up with a convincing illustrative example, but it is not yet clear for us how to make it more convincing than the verbal discussion.\\n\\n> 17.\\tThe chosen main performance metric is not very convincingly motivated. It is clear why it is based on calibrated log-likelihood, but it is not very convincing why one cannot just used calibrated log-likelihood as a performance metric, why one should base the metric on deep ensembles instead. Also from the long-term perspective, if the community comes up with methods clearly outperforming deep ensembles, the metric would need to be based on one of these new methods \\n\\nDEE is basically a more convenient way to visualize the calibrated log-likelihood. The calibrated log-likelihood does indeed seem to be a great absolute measure of performance. However, it is not very convenient if one wants to compare the performance of different ensembling techniques. Different models and datasets have different base values of calibrated log-likelihood, and its dependence on the number of samples is non-trivial. DEE is model- and dataset-agnostic and provides some useful insights that can be difficult to visualize using the calibrated log-likelihood alone.\\n\\nIn the long run, the DEE metric might still be useful and insightful. While the DEE curve of deep ensembles is an identity function, superior methods would typically result in a higher DEE curve. We would only run into problems if this new superior method would outperform extremely large deep ensembles (say hundreds or thousands of samples) using just a handful of samples. However, we do not expect such a drastic gap to appear soon.\\n\\n> 18.\\tThere is an indirect uncertainty metric that is not mentioned in the paper \\u2013 uncertainty used in active learning (see, e.g., Hern\\u00e1ndez-Lobato and Adams, 2015. Probabilistic backpropagation for scalable learning of Bayesian neural networks)\\n\\nWe do mention active learning in the related works section, but this reference is indeed very relevant here. Thank you.\\n\\n> 20.\\t\\u201cthe original PyTorch implementations of SWA\\u201d \\u2013 SWA is not considered in the paper\\n\\nWhile we do not use SWA in our experiments, our codebase is heavily based on the original implementation of SWA since it allowed to easily reproduce the training of different models and was easy to modify for our needs. We will articulate the reference more clearly in the next revision of the paper.\"}",
"{\"title\": \"Response to Review #2 (part 1/2)\", \"comment\": \"We would like to sincerely thank you for your thoughtful and thorough remarks. They will allow us to significantly improve the quality of our paper in its next revision.\", \"we_address_the_major_questions_below\": \"> 7.\\t\\u201cIn that case, both objects and targets of induced binary classification problems remain fixed for all models\\u201d \\u2013 do the authors consider in this case all out-of-domain objects as having a positive class and all in-domain objects as having a negative class? Because the models are still going to make individual misclassification mistakes\\n\\nYes, in the case of out-of-domain detection the out-of-domain objects have the positive class and the in-domain objects have the negative class. Because of that, both the objects and the labels of the auxiliary binary classification problems of out-of-domain detection remain the same across different models.\\n\\n> 9.\\tIn eq. (4) and (5) subscript DE is not defined\\n\\nWe define CLL_m where m stands for the name of the ensembling technique. DE stands for deep ensemble. We will clarify this in the future revision of the paper.\\n\\n> 11.\\t\\u2026 Similar to cSGLD, is seems that SWAG was not applied on ImageNet. Why is that if that is the case? And it should be clearly stated at least in experimental setup in Supplementary. \\u2026\\n> 23.\\t\\u2026 Why was dropout applied only for limited number of architectures and not applied on ImageNet at all? ...\\n\\nDue to limited computational resources we have prioritised SSE over cSGLD and FGE over SWAG since they are closely related to each other and achieve similar performance on CIFAR10/100 datasets. Since these techniques are quite similar to each other, we expect these results to translate well to ImageNet. Also, we have only applied dropout to the VGG model and the WideResNet model since plain ResNets are conventionally trained without dropout, and naive application of dropout hurts the final predictive performance.\\n\\n> 13.\\tMissing details of what kind of augmentation is used in Section 4.3. Is it the same as training augmentation specified in Supplementary? It would require a reference to Supplementary\\n\\nFor test-time augmentation we use the same data augmentation as used during training. We will add the reference to Supplementary there.\\n\\n> 15.\\t\\u201cOur experiments demonstrate that ensembles may be severely miscalibrated by default while still providing superior predictive performance after calibration.\\u201d \\u2013 unclear which experiments demonstrate this and superior in comparison to what\\n\\nThe most drastic difference can be observed in Figure 1. Deep ensembles + augmentation (DE+aug) on ImageNet has exteremely poor calibration and perform worse than plain DE, whereas calibrated DE+aug outperforms plain DE (and other techniques as well).\"}",
"{\"title\": \"Response to Review #1 (part 2/2)\", \"comment\": \"> 4. In section 4.1, the hypothesis on #independent trained networks is great and it makes sense? How is this translating ito the evaluations? None of the results actually talk about this aspect directly? Or am I missing something here?\\n\\nThis point presumably refers to the following sentence in section 4.1: \\u201cWhat number of independently trained networks yields the same performance as a particular ensembling method?\\u201d This is not intended to be a presentation of hypothesis. This question only sets the stage for the introduction of the deep ensemble equivalent (DEE) metric which directly answers the question when evaluated. DEE plays a major role in our benchmark and we have quantitative results concerning it (e.g. Figure 3 in the main text). We discuss the results related to DEE in Section 4.2 and Section 5 and provide more detailed results in Appendix E.\\n\\nOn the other hand, if the comment refers to the following sentence \\\"Deep ensembles, ..., which can intuitively result in a better ensemble.\\\", it was just a motivation to consider deep ensembles as a potentially strong baseline.\\n\\n> 5. Setting the evaluations with DEE as reference is problematic because we already know from random sampling theory that deep ensemble is better than the normal ensembles (tech results on random sampling for model fitting and RL etc. optimization results on mode finding with single mode vs. multi model methods also say similar things) and in fact that was the main motivation. Also normal regularization (like dropout or K-facL are more towards overfitting than ensembling) are not really an ensemble. Putting these together, most of the conclusions and the lots (figure 3 in particular) is by definition true. Nothing surprising.\\n\\nDid you mean DE (deep ensembles) instead of the DEE score here? The superior performance of deep ensembles is indeed not surprising. Highlighting this fact is not the main purpose of this paper. Instead, our study is largely aimed at comparing ensembling methods in a fair and interpretable way to gain insights in the fields of ensembling and uncertainty estimation.\\n\\nMethods that are based on stochastic computation graphs, e.g., MC-dropout, K-FAC Laplace and variational inference, are commonly regarded as ensembling techniques in the bayesian deep learning literature and are frequently used as a baseline in ensembling-based uncertainty estimation research (Gal and Ghahramani 2016, Lakshminarayanan et al 2017, Louizos and Welling 2017, Maddox et al 2019). Deep ensembles, in contrast to dropout, are rarely considered as a baseline in the bayesian deep learning literature, but we believe that overcoming the DE baseline is a strong challenge for the community.\\n\\nWe would highly appreciate it if you could provide links for the papers mentioned in your review since they seem to be relevant to our study.\\n\\nLouizos C, Welling M. Multiplicative normalizing flows for variational bayesian neural networks, ICML 2017. \\nLakshminarayanan B, Pritzel A, Blundell C. Simple and scalable predictive uncertainty estimation using deep ensembles, NeurIPS 2017.\\nMaddox W, Garipov T, Izmailov P, Vetrov D, Wilson AG. A simple baseline for bayesian uncertainty in deep learning, NeurIPS 2019.\\nGal Y, Ghahramani Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning, ICML 2016.\"}",
"{\"title\": \"Response to Review #1 (part 1/2)\", \"comment\": \"We would like to sincerely thank you for your thoughtful remarks and questions.\", \"we_address_your_concerns_below\": \"> 1. Much of the evaluations rely on the choice/nature of the optimal temperature, which would be different for different models. The authors suggest to use the model-specific optimal values when comparing instead of fixing the temperature? Why is this the case? Further, if we take this into account (i.e., allow for comparing different temperatures) then much of the differences between DEE and others cannot be directly interpreted. This is the case when using log-likelihood and Brier scores. \\n\\nMost of the modern deep learning techniques are overconfident in their predictions, i.e. the effective temperature is lower than optimal. Moreover, it does not seem possible for now to determine the optimal temperature relying only on the training data. Validation-based temperature scaling is a simple yet powerful calibration technique that allows to improve many predictive performance metrics, e.g. the test log-likelihood, post-hoc for all methods and models. We would like to reduce the influence of the non-optimal temperature on the predictive performance, so we see the temperature scaling as an essential step in training the model. As we show in Figure 1, comparing different techniques without temperature scaling can yield misleading results. E.g., deep ensembles with test-time data augmentation (DE+augment) seem to perform worse than deep ensembles without data augmentation (DE) in terms of the log-likelihood, whereas after temperature scaling DE+augment outperforms plain DE.\\n\\nComparing methods at different temperatures is fair since the procedure for temperature scaling is the same for all methods, as described in Section 3.5. Moreover, we stress that this setting is more reasonable compared to using the same temperature for all methods in the benchmark since different methods have different optimal temperatures on hold-out data. \\n\\n> 2. AUC can be transformed into a normalized probability distribution (CDF), and hence in principle it is model/hyperparameters agnostic. This is one of the reasons i is used as information criterion in Bayesian model selection. Area of AUC is a valid metric as well. To that end, why do the authors suggest that it cannot be used as criteria for comparison across models? \\n\\nMetrics like AUROC / AUPR cannot be used for a particular problem of misclassification detection. Let us summarize the argument. We aim to compare different models---trained on the same data---in terms of ability to distinguish between correct and wrong classifications. The prior literature suggests the following: i) every prediction of a particular model receives a confidence score, ii) the score then is treated as an output of a binary classifier that detects misclassifications. AUROC / AUPR of these binary classifiers is used for comparison between different models.\\n\\nSuch a comparison, however, is not correct. Every model has its own correct and wrong predictions, and thus poses its own misclassification detection problem (binary classification of correctly classified vs. incorrectly classified examples). Particularly, in the case of comparison between K models, we have K different misclassification detection datasets comprised of pairs (original object, \\u201ccorrectly classified\\u201d / \\u201cincorrectly classified\\u201d binary label) with a different labeling for each model. 
The described comparison procedure essentially corresponds to comparing the performance of classifiers that solve different classification problems. Such metrics are incomparable.\\n\\n> 3. From section 3.5 it is not clear how test-time cross-validation is tackling temperature scaling.\\n\\nThe \\u201ctest-time cross-validation\\u201d method for evaluating metrics at an optimal temperature is organized as follows: \\n1. The test set is randomly shuffled and divided into K folds of the same size.\\n2. A temperature T* is fitted on K-1 folds: T* = argmax_T LL(Model(T), Data(K-1 folds)).\\n3. The model at the optimal temperature T* is used to evaluate the metrics on the Kth fold.\\n4. Steps 1-3 are repeated several times and the metric values are averaged.\\nIn our experiments we use K=2.\\n\\nIn step 2 we solve a 1D optimization problem that optimizes LL on K-1 folds w.r.t. a scalar temperature T. The result of step 2 may differ depending on the particular data split. Strictly speaking, the described algorithm estimates the expectation of the test metrics w.r.t. the distribution of optimal temperatures induced by the random data splits. In practice, we noticed that the optimal temperatures did not differ much across different splits.\"}",
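The four-step test-time cross-validation described above is easy to reproduce. A minimal NumPy/SciPy sketch (illustrative only: the array names, the temperature search bounds, and the number of repeats are assumptions, not the paper's exact settings):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

def nll(logits, labels, T):
    # Negative log-likelihood of temperature-scaled predictions.
    logp = log_softmax(logits / T, axis=1)
    return -logp[np.arange(len(labels)), labels].mean()

def test_time_cv_nll(logits, labels, K=2, repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(repeats):  # step 4: repeat and average
        folds = np.array_split(rng.permutation(len(labels)), K)  # step 1
        for k in range(K):
            held = folds[k]
            rest = np.concatenate([f for j, f in enumerate(folds) if j != k])
            # Step 2: fit T* on the other K-1 folds (1D bounded optimization).
            T_star = minimize_scalar(
                lambda T: nll(logits[rest], labels[rest], T),
                bounds=(0.05, 20.0), method="bounded").x
            # Step 3: evaluate the metric at T* on the held-out fold.
            scores.append(nll(logits[held], labels[held], T_star))
    return np.mean(scores)
```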
"{\"title\": \"Response to Review #3\", \"comment\": \"We would like to sincerely thank you for your thoughtful remarks and questions.\", \"we_address_your_concerns_below\": \"> On page 6, the authors mention that the prior in Eq. 3 is taken to be a Gaussian N(\\\\mu, diag(\\\\sigma^2)) for Bayesian neural networks, however, many other choices of a prior distribution are available in the literature. What is the impact of changing prior distributions on the quality of uncertainty estimates in the case of variational inference? \\n\\nThe prior distribution is a part of the underlying probabilistic model, whereas most ensembling techniques can be considered as approximate inference techniques under such a model. Aiming for a fair comparison, we set the probabilistic model (and, therefore, the prior distribution) to be the same across all ensembling techniques. We use the Gaussian prior, induced by the optimizer favored in recent literature since it is simple and provides reasonably high performance. However, it would indeed be interesting to see how the choice of the prior influences different ensembling techniques.\\n\\n> Data augmentation is commonly used for improving model performance. However, I find the results presented in Sect 4.3 are not clear enough, note that for a given ensembling method in Table 1, the negative calibrated log-likelihood may increase or decrease when using different networks (VGG, ResNet, etc.). I think it would be interesting to elaborate a bit more on the influence of model complexity.\\n\\nThe effect appears due to std of the negative calibrated log-likelihood (nCLL). In all the cases where nCLL may increases or decreases within one method the difference has the order of ~1e-3 or less and lies within std interval. We will correct the tables and add stds.\\n\\nOn CIFAR datasets test-time data augmentation helps \\u201cweak\\u201d ensembling methods like dropout, K-FAC Laplace and variational inference, whereas on ImageNet we observe the improvement for all techniques. We hypothesize that this is caused by a more diverse data augmentation on ImageNet as compared to CIFAR. We will move the ImageNet results (Table 9) into the main section of the paper as they seem to be more representative than CIFAR results. It would be interesting to see whether the use of more diverse data augmentation (e.g. rotations, color transformation, etc.) improves stronger ensembles as well.\\n\\n> On page 15, in Eq. 12, the choice of the variance parameter \\\\sigma_p^2=1/(N*wd) seems unclear and should be better explained.\\n\\nIt is the same prior as defined in eq. 9 on page 14. The weight decay parameter wd is the coefficient before the L2 regularizer in the objective. In most deep learning frameworks, one computes the *average* loss in the minibatch instead of the *sum* across all objects. Therefore, one needs to rescale this coefficient by the size of the training set to obtain the underlying prior distribution. We will update the paper with a more clear explanation.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper mainly concerns the quality of in-domain uncertainty for image classification. After exploring common standards for uncertainty quantification, the authors point out pitfalls of existing metrics by investigating different ensembling techniques and introduce a novel metric called deep ensemble equivalent (DEE) that essentially measures the number of independent models in an ensemble of DNNs. Based on the DEE score, a detailed evaluation of modern DNN ensembles is performed on CIFAR-10/100 and ImageNet datasets.\", \"strengths\": \"The paper is well written and easy to follow. The relationship to previous works is also well described. Overall, I think this is a good paper, which gives a detailed overview of existing metrics for accessing the quality in in-domain uncertainty estimation. The idea behind the proposed DEE score is nice and simple, clearly showing the quality of different ensembling methods (in Fig. 3). Given the importance of uncertainty analysis to deep learning, I believe this work will have a positive impact on the community.\", \"weaknesses\": [\"On page 6, the authors mention that the prior in Eq. 3 is taken to be a Gaussian N(\\\\mu, diag(\\\\sigma^2)) for Bayesian neural networks, however, many other choices of a prior distribution are available in the literature. What is the impact of changing prior distributions on the quality of uncertainty estimates in the case of variational inference?\", \"Data augmentation is commonly used for improving model performance. However, I find the results presented in Sect 4.3 are not clear enough, note that for a given ensembling method in Table 1, the negative calibrated log-likelihood may increase or decrease when using different networks (VGG, ResNet, etc.). I think it would be interesting to elaborate a bit more on the influence of model complexity.\", \"On page 15, in Eq. 12, the choice of the variance parameter \\\\sigma_p^2=1/(N*wd) seems unclear and should be better explained.\"], \"minor_comments\": \"The size of some figures appears too small, for example Fig. 4 and Fig. 5, which may hinder readability.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors response on 13th nov regarding the main concerns I have are valid. They make sense. I thank the authors for the detailed explanations. I went back and did another thorough read of the work. As it stands, I am OK to change my review to weak accept.\\n\\n---------------------------------\\n\\nThe authors evaluate a variety of ensemble models in terms of their ability to capture in-domain uncertainity. A set of metrics are used to perform these evaluations and in turn, using deep ensenble as a reference, the authors study the behaviour/capacity of the rest of the methods. \\n\\nAlthough the motivation for the work is sensible, there are several critical issues with the paper and the summary is not necessarily conclusive in terms of gaining any new insights. \\n1. Much of the evaluations rely on the choice/nature of the optimal temperature, which would be different for different models. The authors suggest to use the model-specific optimal values when comparing instead of fixing the temperature? Why is this the case? Further, if we take this into account (i.e., allow for comparing different temperatures) then much of the differences between DEE and others cannot be directly interpreted. This is the case when using log-likelihood and Brier scores. \\n2. AUC can be transformed into a normalized probability distribution (CDF), and hence in principle it is model/hyperparameters agnostic. This is one of the reasons i is used as information criterion in Bayesian model selection. Area of AUC is a valid metric as well. To that end, why do the authors suggest that it cannot be used as criteria for comparison across models? \\n3. From section 3.5 it is not clear how test time cross validation is tackling temperature scaling? \\n4. In section 4.1, the hypothesis on #independent trained networks is great and it makes sense? How is this translating ito the evaluations? None of the results actually talk about this aspect directly? Or am I missing something here?\\n5. Setting the evaluations with DEE as reference is problematic because we already know from random sampling theory that deep ensemble is better than the normal ensembles (tech results on random sampling for model fitting and RL etc. optimization results on mode finding with single mode vs. multi model methods also say similar things) and in fact that was the main motivation. Also normal regularization (like dropout or K-facL are more towards overfitting than ensembling) are not really an ensemble. Putting these together, most of the conclusions and the lots (figure 3 in particular) is by definition true. Nothing surprising.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper provide an extensive review of current advances in uncertainty estimation in neural networks with the analysis of drawbacks of currently used uncertainty metrics and comparison on scale the recent method to estimate uncertainty. The paper covers a lot of uncertainty metrics and a wide range of methods. The paper focuses on in-domain uncertainty estimation complementing the recent similar review on out-of-domain uncertainty estimation.\\n\\nIt seems that the paper provides the analysis missing in the current literature. Whereas as mentioned Yukun Ding in a public comment there is a related work on identifying issues with popular uncertainty metrics, the mentioned paper is missing the through comparison of the methods for estimating uncertainty. \\n\\nSuch kind of thorough analysis (especially performed on scale on large datasets) and comparison is of obvious interest to the community as well as objective comparison of the current state-of-the-art. \\n\\nThe paper is clearly written and easy to follow.\\n\\nBased on this, I believe this is a strong technical paper and it should be accepted. However, the analysis in the paper is not overwhelmingly exhaustive. Some of the arguments on that are listed below.\\n\\nBelow is the list of comments/thoughts for potential improvement of the paper:\\n1.\\t\\u201cIn this case, a model is expected to provide correct probability estimates:\\u201d \\u2013 may be not the best choice of words, because for out of domain uncertainty estimation we still expect a model to provide correct probability estimates\\n2.\\tThe first paragraph on page 3 seems to better fit in Section 2, for example, on the very beginning of Section 2.\\n3.\\t\\u201cComparison of the log-likelihood should only be performed at the optimal temperature.\\u201d and others alike \\u2013 personally, I do not support this kind of formatting for a scientific paper\\n4.\\t\\u201ccan produce an arbitrary ranking of different methods. <\\u2026> Empirically,\\u201d \\u2013 in the current form it seems that the first statement is somehow theoretically justified and then additionally it is confirmed empirically in this paper. I believe that the authors use empirical observation itself as the justification of the first statement, if that the case it should be reworded here. For example, \\u201ccan produce an arbitrary ranking of different methods as we show below/ as we show empirically. We demonstrate that the overall \\u2026\\u201d If my belief is incorrect and there are other grounds that justify the first statement that it is required a reference after this statement.\\n5.\\tItalic and non-italic LL usage is unclear\\n6.\\tHaving \\u201cBrier score\\u201d emphasised as a paragraph, it seems that there should be a paragraph log-likelihood as well\\n7.\\t\\u201cIn that case, both objects and targets of induced binary classification problems remain fixed for all models\\u201d \\u2013 do the authors consider in this case all out-of-domain objects as having a positive class and all in-domain objects as having a negative class? 
Because the models are still going to make individual misclassification mistakes\\n8.\\tFigure 2 \\u2013 the legend occupies too much space in the plot, occluding almost a third of it. Maybe taking the legend out of the plot to the right and squeezing the plot to make room for the legend would be a better solution\\n9.\\tIn eq. (4) and (5) the subscript DE is not defined\\n10.\\t\\u201cSSE and cSGLD outperform all other techniques except deep ensembles\\u201d \\u2013 cSGLD was not applied on ImageNet, therefore this statement is a bit misleading\\n11.\\tThe colour of SWAG in Figure 3 is not very clear. Only by excluding other colours can I determine which line is SWAG. Similar to cSGLD, it seems that SWAG was not applied on ImageNet. Why is that, if that is the case? It should be clearly stated, at least in the experimental setup in the Supplementary. \\nFor colours in general, the lines in legends are very thin and it is difficult to assess their colour. I appreciate the authors compare a lot of methods and therefore have to use a lot of colours, but it is quite difficult to assess them even on screen, not to mention if the paper is printed out. Could the authors please use thicker lines in legends at least?\\n12.\\t\\u201cBeing more \\u201clocal\\u201d methods\\u201d \\u2013 without any context in the main paper this referral to \\u201clocal\\u201d methods is unclear. Also, it would be good to add a reference to the Appendix review of the considered methods in the main text.\\n13.\\tMissing details of what kind of augmentation is used in Section 4.3. Is it the same as the training augmentation specified in the Supplementary? This would require a reference to the Supplementary\\n14.\\t\\u201c(Figure 1, Table REF)\\u201d \\u2013 missing number for the Table\\n15.\\t\\u201cOur experiments demonstrate that ensembles may be severely miscalibrated by default while still providing superior predictive performance after calibration.\\u201d \\u2013 unclear which experiments demonstrate this and superior in comparison to what\\n16.\\tThe issues of uncalibrated log-likelihood and TACE are clearly shown in the paper, whereas the issues with misclassification detection are only verbally discussed. An illustrative example, at least a toy thought example, could really improve the paper here\\n17.\\tThe chosen main performance metric is not very convincingly motivated. It is clear why it is based on the calibrated log-likelihood, but it is not very convincing why one cannot just use the calibrated log-likelihood as a performance metric, and why one should base the metric on deep ensembles instead. Also, from a long-term perspective, if the community comes up with methods clearly outperforming deep ensembles, the metric would need to be based on one of these new methods \\n18.\\tThere is an indirect uncertainty metric that is not mentioned in the paper \\u2013 uncertainty used in active learning (see, e.g., Hern\\u00e1ndez-Lobato and Adams, 2015. Probabilistic backpropagation for scalable learning of Bayesian neural networks)\\n19.\\tFigures 4 and 5 are too small\\n20.\\t\\u201cthe original PyTorch implementations of SWA\\u201d \\u2013 SWA is not considered in the paper\\n21.\\t\\u201chidden inside an optimizer \\u2026 The actual underlying optimization problem\\u201d \\u2013 it seems that the ICLR audience should be familiar with \\u201cactual optimization problems\\u201d rather than blindly using the optimizer. 
It is always good to explicitly write down an equation that is used in a paper, but this wording seems a bit off for ICLR \\n22.\\t\\u201c\\\\hat{p}(y^\\u2217_i = j | x_i, w) denotes the probability that a neural network with parameters w assigns to class j when evaluated on object x_i\\u201d \\u2013 it should be \\\\hat{p}(y_i = j | x_i, w), since y^*_i is observed\\n23.\\t\\na.\\tWhy was dropout applied only for a limited number of architectures and not applied on ImageNet at all?\\nb.\\tWhy wasn\\u2019t cSGLD applied on ImageNet?\\n24.\\t\\u201cOn CIFAR-10/100 parameters from the original paper are reused\\u201d \\u2013 it is better to repeat the reference here\", \"minor\": \"1.\\tThe font size in eq. (10) should be the same as in the rest of the paper\\n2.\\t\\u201cOr models achived top-1 error of\\u201d: \\u201cOr\\u201d - ?, \\u201cachived\\u201d -> achieved \\n3.\\t\\u201cfor a 45 epoch form a per-trained model\\u201d: \\u201cform\\u201d -> \\u201cfrom\\u201d?\"}",
"{\"comment\": \"Thanks for the great work. There is a related work that has a similar finding on the pitfalls of existing metrics. You might want to cite it in related works. Thanks.\", \"https\": \"//arxiv.org/abs/1903.02050\", \"title\": \"A related work\"}"
]
} |
rJxHcgStwr | Handwritten Amharic Character Recognition System Using Convolutional Neural Networks | [
"Fetulhak Abdurahman"
] | The Amharic language is an official language of the federal government of the Federal Democratic Republic of Ethiopia. Accordingly, there is a wealth of handwritten Amharic documents available in libraries, information centres, museums, and offices. Digitizing these documents makes it possible to harness already available language technologies for local information needs and development. Converting these documents has many advantages, including (i) preserving and transferring the history of the country, (ii) saving storage space, (iii) proper handling of documents, and (iv) enhanced retrieval of information through the internet and other applications. Handwritten Amharic character recognition is a challenging task due to the inconsistency of a writer, variability in the writing styles of different writers, the relatively large number of characters in the script, high interclass similarity, structural complexity, and degradation of documents due to various causes. To recognize handwritten Amharic characters, a novel method based on deep neural networks is used; such networks have recently shown exceptional performance in various pattern recognition and machine learning applications but have not yet been applied to the Ethiopic script. The CNN model is trained and tested on our database, which contains 132,500 images of handwritten Amharic characters. Common machine learning methods usually apply a combination of a feature extractor and a trainable classifier. The use of a CNN leads to significant improvements over common machine-learning classification algorithms. Our proposed CNN model achieves an accuracy of 91.83% on training data and 90.47% on validation data. | [
"Amharic",
"Handwritten",
"Character",
"Convolutional neural network",
"Recognition"
] | Reject | https://openreview.net/pdf?id=rJxHcgStwr | https://openreview.net/forum?id=rJxHcgStwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_ff90p8S7",
"BygFLIHo5S",
"H1e6iYj4qH",
"HkerBE8aYB",
"H1ep80BntB",
"r1gIAmkyKB"
],
"note_type": [
"decision",
"official_review",
"official_comment",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1576798749830,
1572718160916,
1572284836912,
1571804220912,
1571737173449,
1570857934296
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2471/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2471/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2471/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2471/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2471/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The submission proposes to use CNN for Amharic Character Recognition. The authors used a straight forward application of CNNs to go from images of Amharic characters to the corresponding character. There was no innovation on the CNN side. The main contribution of the work is the Amharic handwriting dataset and the experiments that were performed.\", \"the_reviewers_indicated_the_following_concerns\": \"1. There was no innovation to the method (a straight forward CNN is used) and is likely not of interest to the ICLR community\\n2. The dataset was divided into train/val split and does not contain a held-out test set. Thus it was impossible to determine the generalization of the model.\\n3. The paper is poorly written with the initial version having major formatting issues and missing references. The revised version has fixed some of the formatting issues. The paper still need to having more paragraph breaks to help with the readability of the paper (for instance, the introduction is still one big long paragraph). The terminology and writing can also be improved. For instance, in section 2.3, the authors write that \\\"500 dataset for each character were collected\\\". It would be clearer to say that \\\"500 images for each character were collected\\\".\\n\\nThe submission received low reviews overall (3 rejects), which was unchanged after the rebuttal. Due to the general consensus, there was limited discussion. There were also major formatting issues with the initial submission. The revised version was improved to have proper inclusion of Amharic characters in the text, missing figures, and references. However, even after the revision, the paper still had the above issues with methodology (as noted by R4) and is likely of low interest for the ICLR community. \\n\\nThe Amharic handwriting data and experiments using a CNN can be of interest to the different community and I would recommend the authors work on improving their paper based on reviewer comments and submit to different venue (such as a workshop focused on character recognition for different languages).\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper considers the problem of character recognition applied to handwritten Amharic text. The authors collect a dataset of handwritten Amharic characters and apply a CNN model for character recognition. The task is contextualized within an extensive description of related work both in Amharic and other languages. The dataset is novel and would be of interest to the character recognition community, and I would encourage the authors to present it in its own right along with technical details about its collection. Unfortunately, due to methodological issues (detailed below), I do not think that the machine learning results in this paper are ready for publication at ICLR. The machine learning techniques applied are of limited novelty, however, the dataset is certainly a novel contribution.\", \"methodological_issues\": \"The paper describes dividing the dataset in an 80-20 train-validation split. To the best of my understanding, there is no held out test set, and so we cannot know the performance of the model on unseen data. It appears that validation accuracy was taken into account when selecting hyperparameters, and so, validation accuracy also does not represent the model's performance on unseen data.\\n\\nI would recommend that the authors introduce a test set into their dataset split, or designate the validation part of the dataset as a test part use cross validation for hyperparameter tuning.\\n\\nThe authors present the train and validation numbers on their dataset, but it is difficult to know the impact of the result without comparing to a baseline of some kind. It is challenging to compare to prior work since, according to the authors, prior work on Amharic character recognition has been focused on printed text. However, a simple non-neural baseline would be illuminating.\", \"recommendations\": \"1. The authors should report results on an unseen test dataset, rather than train and validation sets.\\n2. The authors spend a lot of time motivating the use of deep learning in the introduction, and later on describing convolutional neural networks, ReLU, fully connected layers, etc. in great details. I believe that this space in the paper could be better utilized describing the things that are unique to the work presented.\\n3. It is really important to be able to see (a) the references and (b) the example characters, as it helps readers to situate the work and to understand the specific challenges addressed. I urge the authors to prioritize these technical details in submissions of future versions of this work.\\n4. Please consider separating out the description of related work into its own section. While it is useful to describe state of the art systems applied to related datasets, it is not necessary to go into great technical detail, especially if the methods applied are quite different from those attempted in this paper.\\n5. Based on the challenging nature of Amharic character recognition, it would be extremely helpful to see F1 numbers or a confusion matrix. While Figure 3 shows that different characters can be written similarly, it would be great to provide a quantitative measure of this phenomenon.\\n6. For a figure such as Figure 3, please provide an indication (e.g. 
the character number from 0-265, or a printed version of the character) of what the drawn character is supposed to look like, to help readers who cannot read Amharic script.\\n\\nQuestions about the dataset\\n1. Please clarify what you mean by \\\"the data collected are of two types\\\". Does this refer to the train/validation split?\\n2. It would be great to know more about how many individuals were selected (and the choices made about their demographics) for writing the example characters, and whether there were any interesting variations observed in the dataset based on attributes highlighted in the paper (e.g. age range)\\n3. Please explain the meaning of 'Form A' and 'Form B' in Figure 2.\\n4. In the text, features such as a mark of palatalization are noted. Are these addressed in the data collection? Please give further details about how the dataset addresses the special features of the Amharic script.\\n5. At the top of page 6, it says that the data was labelled. Could you say more about the annotators (multiple?), and report inter-annotator agreement? Or, did the writers produce each character based on a specific prompt, such that the labels are known to be correct?\\n6. Section 3 mentions data augmentation. Could you describe this in more detail?\\n7. Are the images in Figure 3 from your dataset? Please show some examples from the new dataset you have collected!\"}",
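The test-set recommendation in point 1 of the review above is straightforward to implement; a minimal sketch (hypothetical arrays `X`, `y` of character images and labels; the split ratios are arbitrary):

```python
from sklearn.model_selection import train_test_split

# Hold out a test set first, then carve a validation set out of the
# remainder, so hyperparameters are never tuned on the final reporting data.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.2, stratify=y_rest, random_state=0)
```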
"{\"title\": \"References missing issue\", \"comment\": \"I think the latex format was mistakenly used from my side and you are correct the references are missed. The Amharic characters are also not displayed in the latex format given. I will correct the comments in my final version.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors collect and preprocess a large amount of handwritten Amharic characters and train a deep convolutional network to successfully perform character recognition on a subset of the data.\\n\\nI vote to reject this paper. The formatting of the paper needs work, and the work is not substantially novel.\\n\\nFor some reason, many sections of the paper were ill-formatted, perhaps due to using an insufficiently portable submission format. In addition, though the paper cited a few references throughout, the final reference list was not available. In addition, handwritten character recognition, by itself, is not a new field, and the authors did not contribute sufficiently to the underlying theory or mechanics. Characters with arguably the same level of complexity, such as Chinese characters, have been thoroughly explored. The application of existing technology to a different script does not, by itself, compel me to feel that this is an ICLR worthy paper.\\n\\nIn the future, the authors might wish to show how their particular architecture (or processes) might be better suited to their task than other architectures, or show how they have, in some way, improved the science of character recognition. That being said, I think it is quite notable the effort that went into creating the data set for Amharic character recognition. I would encourage the authors to distribute this data set so that others may join in the research efforts.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper try to use CNN to build recognizer for handwritten Amharic characters. The CNN they used is simple and standard. Apparently this paper no novelty at all. They just apply CNN to a new task. This kind of work is not qualified for ICLR at all.\\n\\nThere are also some problems in paper organization. They should split the introduction part into several paragraphs to improve reading experience. And it seems that they forget to add reference part.\\n\\nGiven the quality of the writing and content, I decide to reject this paper.\"}",
"{\"comment\": \"The paper seems to be missing the references, at least from my end. Did this get clipped somehow?\\n\\nIn addition, it seems like there are some character formatting issues. Particularly on page 4 and (perhaps?) page 2.\\n\\nIs there any possible resolution to these issues?\", \"title\": \"References missing?\"}"
]
} |
r1xH5xHYwH | Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter! | [
"Seoyoung Ahn",
"Gregory Zelinsky",
"Gary Lupyan"
] | We investigated the changes in visual representations learnt by CNNs when using different linguistic labels (e.g., trained with basic-level labels only, superordinate-level labels only, or both at the same time) and how they compare to human behavior in a task where people are asked to select which of three images is the most different. We compared CNNs with identical architecture and input, differing only in what labels were used to supervise the training. The results showed that in the absence of labels, the models learn very little of the categorical structure that is often assumed to be in the input. Superordinate labels (vehicle, tool, etc.) are the most helpful in allowing the models to match human categorization, implying that human representations used in odd-one-out tasks are highly modulated by semantic information not obviously present in the visual input. | [
"category learning",
"visual representation",
"linguistic labels",
"human behavior prediction"
] | Reject | https://openreview.net/pdf?id=r1xH5xHYwH | https://openreview.net/forum?id=r1xH5xHYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"K4WTQ9-MPr",
"BkxmYVKjiS",
"ByxdMQFijH",
"BkeI_zYssS",
"BJxYabYiiS",
"r1gZ4qdsjS",
"SJewdvuioS",
"Hkg40vTEqH",
"HyefSpuJcS",
"SyedkNO0KH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749801,
1573782651094,
1573782287618,
1573782126102,
1573781953056,
1573780008957,
1573779311276,
1572292556141,
1571945786317,
1571877856391
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2470/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2470/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2470/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2470/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper explores training CNNs with labels of differing granularity, and finds that the types of information learned by the method depends intimately on the structure of the labels provided.\\n\\nThought the reviewers found value in the paper, they felt there were some issues with clarity, and didn't think the analyses were as thorough as they could be. I thank the authors for making changes to their paper in light of the reviews, and hope that they feel their paper is stronger because of the review process.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General Comments\", \"comment\": \"We thank the reviewers for their time in reading our work and providing feedback. We first provide general comments on the reviewers\\u2019 major concerns and the way we addressed them, followed by detailed replies to each reviewer's individual comments and questions. We also mark the major addition and fix in the margin of the paper as \\u201cNEW\\u201d and \\u201cFIX\\u201d\", \"all_reviewers_agreed_that_the_central_claim_of_this_paper_was_interesting\": \"the superordinate-level structure of labels plays a key role in shaping human-like visual representations. However, the reviewers also pointed out that: 1) our paper could be improved by better situating our study in the context of existing research (R2, R5), and 2) evidence for the main claim was relatively weak and was not exhaustively explored (R1, R2, R5). To address these weaknesses, we made major revisions to the paper:\\n\\n[Addressing related work]\\nWe now have a separate \\u2018related work\\u2019 paragraph as part of the introduction, which we hope better explains the relationship between our work and work in computer vision and the behavioral sciences (e.g., psychology, cognitive neuroscience). The studies covered in the current related work section include:\\n\\n1) studies where semantic label embeddings are used for better image classification\\n2) studies on understanding and uncovering human categorical visual representation\\n\\n[Addressing the main claim]\\nWe revised the introduction and results to more clearly convey the logic underlying the design of our study and the interpretation of the results. We also polished the formatting of figures and tables to increase their readability. We also added additional analyses in the supplementary material further supporting our main claims. Below is a summary of what our main finding is and how it is supported.\\n\\n1) We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is a categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels.\\n\\n2) We also found guidance from superordinate labels was often as good or better as guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as \\\"musical instrument\\\" and \\\"container\\\" were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels allowed the model to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer.\\n\\n3) More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best in predicting human performance on triplet odd-one-out task. CNNs trained with superordinate labels not only outperformed other models when the odd-one-out came from a different superordinate category (which is not surprising), but also when all three objects from a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver).\"}",
"{\"title\": \"[Addressing suggestions]\", \"comment\": \"1) Add other models in model comparison (e.g, a super-basic model, superordinate wordvec model)\", \"answer\": \"Although we could not collect more data on the suggested design due to the time constraint, this is an interesting suggestion. Thank you!\\n\\nWe thank R2 again for reviews and constructive suggestions and please let us know if we have addressed your concerns, and if there\\u2019s any further concerns or questions.\"}",
"{\"title\": \"[Addressing Questions]\", \"comment\": \"1) R2: \\u201cthe authors found the imagenet categorical representations were most predictive of human judgments in the odd-one-out task. This seems highly unsurprising since (i) the humans saw images from the Imagenet dataset (not THINGS) and (ii) humans leverage semantic information when making similarity judgments.\\u201d\", \"answer\": \"We agree with your observation and in fact we did expect that the models trained with distributed word vectors as targets would perform better than one-hot vector models for the reason the reviewer mentions. However, it turns out the performance of wordvec model was similar or lower than superordinate, which means, in turn, superordinate model was often as good or better as the model with word vectors. As described above, we think this is surprising, given that superordinate label one-hot vectors only give coarse-grained supervision (dim =10) compared to other basic label vectors (dim=30), combined vector (dim=40) or wordvectors (dim =300)\"}",
"{\"title\": \"We thank R2 for pointing out the paper's weaknesses and providing interesting questions/suggestions\", \"comment\": \"We thank R2 for pointing out the paper's weaknesses and providing interesting questions/suggestions that would be very helpful for exploring our idea. Below are our comments answering R2\\u2019s main concerns and explaining how we addressed them in the revised paper. We also provided answers to other questions of R2 in detail below.\\n\\n[Addressing the main claim]\\nOne of R2's main concerns is that despite the idea of studying the effect of linguistic labels on visual representation being interesting, most of the findings were obvious and not surprising. We understand our work has limitations, but we also think we were not effective enough at explaining what we found in the current dataset and analysis in the previous draft. In order to address this concern, we made a serious revision on the introduction and results to clearly convey our main findings. We also polished the formatting of figures and tables to increase the readability. Below is a summary of the findings for our main claim:\\n\\n1) We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is a categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels.\\n\\n2) We also found guidance from superordinate labels was often as good or better as guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as \\\"musical instrument\\\" and \\\"container\\\" were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels allowed the model to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer.\\n\\n3) More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best in predicting human performance on triplet odd-one-out task. CNNs trained with superordinate labels not only outperformed other models when the odd-one-out came from a different superordinate category (which is not surprising), but also when all three objects from a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver).\\n\\n[Addressing related work]\\nAs suggested by R2 and other reviewers, we have made serious efforts to revising the introduction and adding a related work paragraph to better explain why this research is meaningful in the context of existing research. The studies covered in the current related work paragraph include:\\n\\n1) studies where semantic label embeddings are used for better image classification\\n2) studies on understanding and uncovering human categorical visual representation, which includes more discussions about the paper R2 mentioned (Peterson et al., 2018).\"}",
"{\"title\": \"We thank R5 for identifying the paper's strengths and weaknesses\", \"comment\": \"We thank R5 for identifying the paper's strengths and weaknesses and providing suggestions for improvement. Below are replied to R5\\u2019s concerns, and a summary of how we addressed them in the revised paper.\\n\\n[Addressing related work]\\nAs suggested by R5 and other reviewers, we have made serious efforts to revising the introduction and adding a related work paragraph to better explain why this research is meaningful in the context of existing research. The studies covered in the current related work paragraph include:\\n\\n1) studies where semantic label embeddings are used for better image classification\\n2) studies on understanding and uncovering human categorical visual representation\\n\\nWe think these changes answer the question asked by R5 in a minor comment, about how our study can benefit both computer vision and behavioral science (e.g., psychology, neuroscience). Our study broadly benefits both computer vision and behavioral science (e.g., psychology, neuroscience) by suggesting that the semantic structure of labels and datasets should be carefully constructed if the goal is to build vision models that learn visual features representations having the potential for human-like generalization. For behavioral science, this research provides a useful computational framework for understanding the effect of training labels on the human learning of category relationships in the context of thousands of naturalistic images of objects. \\n\\n[Addressing the main claim]\\nWe thank R5 for pointing out that the evidence supporting our main claim is relatively weak to strongly support our main claim that superordinate labels play a key role in shaping human-like visual representation. Our work has limitations in that our findings are limited to current methods and data, but we also think we were not effective enough at explaining what we found in the current dataset and analysis in the previous draft. In order to address this concern, we made a serious revision on the introduction and results to clearly convey our main findings. We also polished the formatting of figures and tables to increase the readability. Below is a summary of the findings for our main claim:\\n\\n1) We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is a categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels.\\n\\n2) We also found guidance from superordinate labels was often as good or better as guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as \\\"musical instrument\\\" and \\\"container\\\" were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels allowed the model to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer.\\n\\n3) More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best in predicting human performance on triplet odd-one-out task. 
CNNs trained with superordinate labels not only outperformed other models when the odd-one-out came from a different superordinate category (which is not surprising), but also when all three objects in a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver).\\n\\n[Others]\\nWe filled in the missing figure in Supplementary 7.5 that R5 mentioned as the last major concern. We also polished the paper to fix all the spelling and grammatical errors that R5 pointed out as minor points. Thank you for pointing these out.\\n\\nWe thank R5 again for the review. Please let us know whether we have addressed your concerns, and whether there are any further concerns or questions.\"}",
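For concreteness, here is a minimal sketch of how a model's odd-one-out choice can be read off from its representations (names are hypothetical; the paper may additionally average representations per category before comparing):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def odd_one_out(reps):
    # reps: three 1-D feature vectors. Item i is the odd one out when the
    # *other two* form the most similar pair.
    pair_sim = [cosine(reps[(i + 1) % 3], reps[(i + 2) % 3]) for i in range(3)]
    return int(np.argmax(pair_sim))
```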
"{\"title\": \"We thank R1 for providing feedback to improve the work\", \"comment\": \"We thank R1 for providing feedback to improve the work. Below are our comments answering R1\\u2019s main concerns and explaining how we addressed them in the revised paper.\\n\\n[Addressing the main claim]\\nAlthough R1 found the idea of studying the effect of linguistic labels on visual representation interesting, there was a concern that current analysis might have missed a meaningful structure that the model trained with no label (e.g.,Conv. Autoencoder) have. We agree that this is a possibility and that the results may vary depending on which metric and methods were used.\\n\\nTo address this issue, we visualized and analyzed the visual representations in multiple ways as possible. \\n\\n1) In addition to the cosine similarity matrix, we also added a T-SNE plot in the Supplementary 7.5, which was mistakenly missing from the previous paper. Both visualizations showed Conv. Autoencoder visual representations are poorly discriminable not only basic level categories but also superordinate level categories. \\n2) We added in the supplementary 7.4 three more metrics (Silhouette Coefficient, Calinski-Harabasz Index, Davies-Bouldin Index) to analyze the discriminability of the visual representations for each category, in addition to the existing Between-to-within class variance metric. All metrics showed that Conv. Autoencoder has the poorest clustering quality. \\n\\nResults above together supported that the visual input alone is not sufficient to produce any clusterable structure, not to mention category representations. Of course, it\\u2019s still possible that some aspects of the learned visual representations cannot be identified using our current visualization methods, but even if this were the case our main claim would still hold that:\\n\\n1) Guidance from superordinate labels was often as good or better as guidance from much finer-grained basic-level labels for shaping semantically structured visual representation.\\n2) Human representations used in an odd-one-out task are highly modulated by semantic information, especially at the superordinate level\\n\\n[Addressing related work]\\nAlthough R1 did not mention this as a major concern, we made major revisions (we hope for the better!) in describing how our work is situated in the context of existing research. There is now a separate paragraph that reviews related work, and that explains how this work contributes to both computer vision and behavioral science (e.g., psychology, neuroscience). The studies covered in the current related work section include:\\n\\n1) studies where semantic label embeddings are used for better image classification\\n2) studies on understanding and uncovering human categorical visual representation\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #5\", \"review\": \"Summary: This paper demonstrates the importance of labels at various levels (no label, basic level label, and superordinate level) as well as in combination to determine the importance of semantic information in classification problems. They train an identical CNN architecture either as an autoencoder (no labels), with the basic label, with the subordinate label, with the basic and subordinate labels, and with basic labels which are fine-tuned with one-hot encodings of superordinate labels, as well as with word vectors. Classification accuracy, t-SNE, cosine similarity matrices and predictions on a human behavior task are used to evaluate the differences across labels types. The authors find that superordinate labels are helpful and important for classification problems.\", \"major_comments\": [\"Authors need to include more related work and describe the main related paper they mention (Peterson et al 2018) as well as describe how their work fits in with previous work\", \"While the idea here is novel and impactful, the experiments used to explain the importance of superordinate labels do have not much compelling information and are not well described\", \"4.2 plots for visualization are mentioned to be in the appendix, but are not there\"], \"minor_comments\": \"-\\tFig2 large subordinate group text would help\\n-\\tLots of typos throughout and grammar mistakes \\no\\tTypo \\u2018use VGG16\\u2019 and then \\u2018Vgg16\\u2019 in same paragraph bottom of page 4 \\no\\tTypo top of page 2 \\u201cConvolutional neural network(CNN)\\u201d\\no\\tAppendix list \\u2013 \\u2018banna\\u2019 typo under Fruit\\no\\tPage 1 intro \\u2018for both behavioral and computer vision\\u2019 doesn\\u2019t really make sense \\no\\tPage 3 top section \\u2018new one\\u2019 should be \\u2018new ones\\u2019 \\no\\tBottom of page 3 \\u2018room from improvement\\u2019 \\no\\tLast line of conclusion \\u2013 \\u2018classificacation\\u2019\", \"consensus\": \"This is a very interesting and potentially impactful idea, but the experiments used to defend and explain the importance of superordinate labels are relatively weak. Significant work on writing and experimental side should be complete, but because this is novel and important work for classification, with some serious revisions, I would suggest accepting this paper.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper assesses the effects of training an image classifier with different label types: 1-hot coarse-grained labels (10 classes), 1-hot fine grained labels (30 labels which are all subcategories of the 10 coarse-grained categories), word vector representations of the 30 fine-grained labels. They also compare the representations learned from an unsupervised auto-encoder. They assess the different representations through cosine similarity within/between categories and through comparison with human judgments in an odd-one-out task. They find that (i) the auto-encoder representation does not capture the semantic information learned by the supervised representations and (ii) representations learned by the model depend on the label taxonomy, how the targets are represented (1-hot vs. wordvec), and how the model is trained (e.g. fine-grained then coarse grained stages), (iiii) the different representations predict human judgements to differing degrees. the first finding is obvious and I'm not even sure why it needs to be stated -- of course semantics of images are not inherently encoded in the pixels of an image! The second point again, is not surprising . This paper starts to get at some interesting questions but does not follow through. It is also quite confusing to read despite thee simple subject matter. This paper is also missing a related work section! There has been so much word on adding structure to the label space of image classifiers (e.g. models that learn image/text embedding space jointly, models that predict word vectors, graphical model approaches to building in semantic information, etc.) and none of this is discussed. There has also been work on comparing convnet representations to human percepts e.g. https://cocosci.princeton.edu/papers/Peterson_et_al-2018-Cognitive_Science.pdf)and none of this work is discussed! This work needs to be better situated within the context of previous work in this field. Please write a related work section.\", \"Detailed comments/questions:\", \"It would be good to add a super-basic model to table 1 for comparison (i.e. first train of coarse level categories and then fine-tine on the more fine-grained taxonomy).\", \"It would be good to compare the use of word vector representations at both the basic and superordinate levels; the 1-hot vs word vector targets and the basic vs superordinate taxonomy seem like orthogonal axess to explore and I'm not sure why the authors didn't test all combinations.\", \"the authors found the imagenet categorical representations were most predictive of human judgements in the odd-one-out task. This seems highly unsurprising since (i) the humans saw images from the Imagenet dataset (not THINGS) and (ii) humans leverage semantic information when making similarity judgements.\", \"What categories had the least inter-rater agreement.. was there any relationship between these categories and the similarity of representations learned by the convnet?\", \"It seems the odd-one-out comparison always involves averaging image representations at the basic category level. 
In the case where the items come from three different superordinate classes it would be interesting to see the results when averaging over superordinate classes as well.\", \"In Table 3, why does the FastText column just list \\\"true\\\"/\\\"false\\\" rather than accuracies? I would expect this column to show the accuracy when the FastText embeddings for the three words are used to compute similarity. I don't understand what the \\\"true\\\"/\\\"false\\\" is meant to indicate. Also it's not clear to me what the two rows in Table 3 are meant to correspond to.\", \"The authors claim \\\"Surprisingly, the kind of supervised input that proved most effective in matching human performance on the triplet odd-one-out task was training with superordinate labels\\\". This should be qualified to say that the superordinate labels are highly effective when two or more superordinate classes are represented in the triplet, in particular when the three items come from three different superordinate classes. I'm also not clear on why this would be surprising; could the authors elaborate?\", \"I'm surprised more space isn't given to discussing the wordvec representations since these should capture some of the semantic information that the 1-hot encodings might miss. In fact, the word vector targets seem to perform as well as, or close to, the other representations on the odd-one-out task\", \"In short, I really like the overall idea of comparing convnet representations with human perceptions of images. However, this work barely scratches the surface of what could be done here and mostly reveals incredibly obvious results. There are so many interesting questions to ask regarding the relationship between how humans perceive similarity and what is encoded in a convnet representation. For example, it would have been very interesting to test the effects of asking the human raters to cue in on different aspects of the image. Focusing on semantic similarity, visual similarity, etc. would all likely give different ratings.\", \"----------------------------------------------------------\", \"Update (in light of rebuttal)\", \"I appreciate the authors' lengthy and considered response, in particular the updated related work and the expansion of the empirical experiments. While I am more comfortable with this paper being accepted than previously (and have updated my score to \\\"weak accept\\\" to reflect this), I still think the paper has a lot of room for improvement. In particular, I suggest a more expansive analysis of human perceptions and a discussion of the implications of the findings.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors conduct a comparative study of several variants of CNNs trained on imagenent things category with different types of labeling schemes (direct, superordinate, word2vec embedding targets, etc.) They also use a human judgement dataset based on odd-one-out classification for triplets of inputs as comparison to evaluate whether the CNNs are able to capture the linguistic structure in the label categories as determined by the relation of the superordinate labels to the basic labels.\\n\\nThe authors used the t-SNE embeddings to visualize the representations learned and evaluate whether these cluster related classes close enough. Not suprisingly, training with the word2vec targets produced the best representations for similarity between/within category. Interestingly, the autoencoder failed to learn representations that are easily interpretable by the analysis tools they were using. \\n\\nThis is an interesting study. The core claim being made as follows:\\n\\n\\\"The representations learned by the models are shaped enormously by the kinds of supervision the models get suggesting that much of the categorical structure is not present in the visual input, but requires top-down guidance in the form\\nof category labels. \\\"\\n\\nThe fact that the representations being learned are shaped strongly by the supervision is probably not surprising or in contention. However, it is not clear that the representations being learned can be exhaustively interpreted by convenient visualization tools. In my opinion, absence of evidence here is not clearly an evidence of absence. However, I still think these are interesting analyses so I am giving weak accept.\"}"
]
} |
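The reviews in the record above repeatedly compare embedding similarities on the odd-one-out triplet task (e.g., the FastText column in Table 3). Below is a minimal sketch of that evaluation rule, assuming embeddings are plain NumPy vectors; the function names and the random vectors are illustrative, not taken from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def odd_one_out(emb_a, emb_b, emb_c):
    """Predict the odd item (0, 1, or 2): the one outside the most similar pair."""
    sims = {(0, 1): cosine(emb_a, emb_b),
            (1, 2): cosine(emb_b, emb_c),
            (0, 2): cosine(emb_a, emb_c)}
    closest_pair = max(sims, key=sims.get)        # most similar pair of items
    return ({0, 1, 2} - set(closest_pair)).pop()  # remaining index is the odd one

# Toy usage: random vectors stand in for FastText or convnet embeddings.
rng = np.random.default_rng(0)
a, b = rng.normal(size=300), rng.normal(size=300)
c = a + 0.1 * rng.normal(size=300)   # c is close to a, so b should be the odd one
print(odd_one_out(a, b, c))          # -> 1
```

Accuracy on a triplet dataset would then simply be the fraction of triplets for which this prediction matches the human majority choice.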
B1eB5xSFvr | DiffTaichi: Differentiable Programming for Physical Simulation | [
"Yuanming Hu",
"Luke Anderson",
"Tzu-Mao Li",
"Qi Sun",
"Nathan Carr",
"Jonathan Ragan-Kelley",
"Fredo Durand"
] | We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulators. Based on an imperative programming language, DiffTaichi generates gradients of simulation steps using source code transformations that preserve arithmetic intensity and parallelism. A light-weight tape is used to record the whole simulation program structure and replay the gradient kernels in reverse order for end-to-end backpropagation.
We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators. For example, a differentiable elastic object simulator written in our language is 4.2x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than the TensorFlow implementation.
Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations. | [
"Differentiable programming",
"robotics",
"optimal control",
"physical simulation",
"machine learning system"
] | Accept (Poster) | https://openreview.net/pdf?id=B1eB5xSFvr | https://openreview.net/forum?id=B1eB5xSFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"AzZUG1xaI",
"Hklp2A4Ssr",
"Syln2pNHsB",
"rkgQGp4rjB",
"S1g3334HoS",
"H1loP67jqH",
"ryeQ4y6htH",
"HJgrncv9KB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749770,
1573371572917,
1573371316026,
1573371146841,
1573371059524,
1572711778941,
1571766058800,
1571613357045
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2469/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2469/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2469/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2469/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper provides a language for optimizing through physical simulations. The reviewers had a number of concerns related to paper organization and insufficient comparisons to related work (jax). During the discussion phase, the authors significantly updated their paper and ran additional experiments, leading to a much stronger paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you very much for the helpful writing suggestions. We have adopted your writing strategy. Now the presentation is focused on the DiffSim system itself, i.e. how to build an efficient and easy-to-use automatic differentiation system for physical simulation.\\n\\nWe will add details of every physical simulator in the appendix in the next update, as it may take a while to document 10 different differentiable simulators. Here we briefly answer your questions:\\n\\n** Rigid Body Collision Gradients **\\nIt is true that discontinuities can happen in rigid body collisions, and at a countable number of discontinuities the objective function is nondifferentiable. However, apart from these discontinuities, the process is still differentiable almost everywhere. The situation of rigid body collision is somewhat similar to the \\u201cReLU\\u201d activation function in neural networks: at point x=0, ReLU is not differentiable (although continuous), yet it is still widely adopted. The rigid body simulation cases are more complex than ReLU, as we have not only non-differentiable points, but also discontinuous points. Based on our experiments, in these impulse-based rigid body simulators (rigid_body and billiards), we still find the gradients useful for optimization, despite the discontinuities, especially with our time-of-impact fix. \\n\\n\\n** Pressure Projection Gradients in Incompressible Fluids **\\nWe followed the baseline implementation in Autograd, and used 10 Jacobi iterations for pressure projection. Technically, 10 Jacobi iterations are not sufficient to make the velocity field fully divergence-free. However, in this example, it does a decent job, and we are able to successfully backpropagate through the unrolled 10 Jacobi iterations.\\n\\nIn larger-scale fluid simulations, 10 Jacobi iterations are likely not sufficient. Assuming the Poisson solve is done by an iterative solver (e.g. multigrid preconditioned conjugate gradients, MGPCG) with 5 multigrid levels and 50 conjugate gradient iterations, then automatic differentiation will likely not be able to provide gradients with sufficient numerical accuracy across this long iterative process. The accuracy is likely worse when conjugate gradients present, as they are known to numerically drift as the number of iterations increases. In this case, the user can still use DiffSim to implement the forward MGPCG solver, while implementing the backward part of the Poisson solve manually, likely using adjoint methods [1]. DiffSim provides \\u201ccomplex kernels\\u201d to override the built-in AD system, as shown in appendix C.\\n\\n\\nWe have included the above discussions in Appendix E to help future readers, although now we focus more on the system perspective in the main paper.\\n\\nPlease also find the detailed paper change log in our general response to all reviewers. Thank you again for your time and very constructive feedback!\\n\\nBest,\\nAuthors\\n\\n[1]``What is an adjoint model?\\u2019\\u2019 by Ronald M Errico, in Bulletin of the American Meteorological Society, 1997\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you for the helpful feedback. We have reorganized the paper to make it more focused on the DiffSim automatic differentiation system design alone, instead of multiple topics. The paper is now 8 pages instead of 10.\\n\\n** Building on Top of Taichi**\\nThe main goal of DiffSim is to simplify the process of making existing physical simulators differentiable, and we reuse Taichi because Taichi is very suitable for building forward simulators. We indeed reused some infrastructure of Taichi, but such reuse also poses a unique challenge to redesign a tailored automatic differentiation system for it, which a) does not harm the performance of Taichi programs and b) needs minimal code modification to make a Taichi program differentiable. To this end, we have developed a tailored two-scale AD system that is high-performance and imposes minimal global data access restrictions to Taichi programs. Please check out the new section 3 for more details.\\n\\n**Comparison with JAX**\\nWe have added JAX with GPU backend to the smoke simulation benchmark, and DiffSim GPU is 1.9x faster than JAX GPU, despite that this grid-based simulation benchmark is slightly biased towards differentiable array programming systems such as Autograd and JAX. The whole program takes 10 seconds to run in DiffSim on a GPU, and 2 seconds are spent on JIT. JAX JIT compilation takes 2 minutes. In Appendix A, we also added a table that comprehensively compares DiffSim with 7 other existing systems. \\n\\n**Gradient Quality Discussions**\\nWe removed the discussion on gradient explosion, and moved the rigid body gradient issue to the evaluation section along with the rigid body simulator, to limit the scope of this gradient discussion to rigid body simulations.\\n\\nPlease also find the detailed paper change log in our general response to all reviewers. Thank you again for your time and feedback!\\n\\nBest,\\nAuthors\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for the positive feedback. The question about simulation fidelity is very interesting. DiffSim is as expressive as traditional languages such as C++/Fortran in building physical simulators, so it can achieve the fidelity level of previously build simulators. In order to simulate a 7-DoF robotic arm, a differentiable rigid body simulator written in DiffSim should be used to train the controller, but transferring the controller to a real robot would face a sim2real gap, just as physical simulators written in any other language. DiffSim does not directly address this gap, but it does significantly reduce code complexity and would allow researchers to develop more realistic simulators with the same amount of work. Similarly, DiffSim does not resolve the numerical accuracy issue caused by a finite time step size, but it does improve the program performance to allow users to run simulations with smaller time step sizes and higher spatial resolution and thereby smaller discretization errors.\", \"we_have_also_fixed_the_minor_issues_in_the_revision\": [\"(Page 3) k is spring stiffness.\", \"(Page 4) We used mass = 1 throughout the mass-spring simulation.\", \"(Fig. 8) The x-axes are initial height.\", \"(Fig. 10) Thanks for pointing out the typo in \\u201cGradient Explosion with Damping\\u201d. As suggested by reviewer 2, we have removed the discussion on gradient explosion.\", \"Please also find the detailed paper change log in our general response to all reviewers. Thank you again for your time and feedback.\", \"Best,\", \"Authors\"]}",
"{\"title\": \"General Response to All Reviewers: Paper Reorganized\", \"comment\": \"Dear Reviewers,\\n\\nThank you so much for the constructive feedback! We have reorganized the paper according to your suggestions. The paper now focuses on the DiffSim system itself, i.e. how to build an efficient and easy-to-use automatic differentiation system for physical simulation. The discussion on gradient behavior is demoted to be part of system evaluation. The paper is now 8 pages instead of 10. Pdf link: https://openreview.net/pdf?id=B1eB5xSFvr\", \"structural_reorganization\": [\"The introduction is updated to suit the new paper structure.\", \"Moved the background on Taichi from Appendix to the main text as section 2.\", \"Added section 3 (Automatically Differentiate Physical Simulation), which details automatic differentiation on Taichi. Two key components are local automatic differentiation within kernels, and global AD across kernels using a light-weight tape. Part of the old Appendix B (Compiler Design and Implementation) is now promoted to this section.\", \"Section 4 (Evaluation) now focuses on only 3 examples instead of 10. The remaining 7 examples are moved to Appendix. We will also include the implementation details of all the simulators in Appendix in a later update.\", \"The old section 4 (Robust Gradients) is significantly shortened and merged into the new section 4. Discussions of the gradient explosion issue are now removed. The only discussion of gradients left in the main text is rigid body collision gradients.\"], \"minor_changes\": \"- As suggested by R2, we have made a comparison with JAX as well. We reran the whole smoke benchmark with 6 Jacobi iterations (used to be 10), since JAX crashes when we use 7 or more iterations. Also, we have improved the DiffSim code generation and therefore its performance is improved.\\n - Fixed typos as pointed out by R3.\\n - In Appendix, we added a table that comprehensively compares DiffSim with 7 other existing systems. We also added examples of complex kernels (Appendix C) and checkpointing (Appendix D).\\n\\nWe have addressed comments from every reviewer in separate replies. Please also check that out. Thank you again for your suggestions, and we are happy to further improve the paper if you have any new advice! :-)\\n\\nBest,\\nAuthors\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"*Summary*\\nThis paper describes DiffSim, a differentiable programming system for learning with physical simulation. The system (built on the Taichi system) allows users to specify a forward simulation in a Python-like syntax, after which the program is compiled and iteratively run in both forward-mode and gradients computed for system parameters and controllers, as desired. A variety of simple simulations are included, demonstrating that the automatically generated CUDA code runs as fast as hand-written CUDA code (and noticeably faster than TensorFlow or PyTorch implementations), while requiring far fewer lines of code. The final section details two issues--time of impact errors due to discrete time intervals and gradient explosions with long time horizons--and some potential solutions.\\n\\n*Rating*\\nThe paper is interesting and easy to read. While some part of the underlying functionality of DiffSim is directly derived from previous work (Taichi), the paper does describe a non-trivial contribution.\\n\\nI lack the background to comment constructively about expectations for these simulations or the fidelity of the methods in this paper. What evidence can you offer regarding the physical fidelity achievable and how that relates to issues of scalability, gradient behavior, size of time steps, code complexity, etc.? For a sense of context, what might be needed to simulate a 7 DoF robotic arm and learn a controller that would reasonably transfer to a real robot?\\n\\nOverall, I'm optimistic about this paper, and would tend to vote for acceptance.\\n\\n*Notes*\", \"pg3\": \"define k (spring stiffness?)\", \"pg4\": \"what is the value of 'mass' for this simulation?\", \"fig_8\": \"what is the x-axis in the two right plots? initial height?\", \"fig_10\": \"right plot title should probably be \\\"Gradient Explosion with Damping\\\"\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a programming language for building differentiable physics simulators. This is a very interesting goal, as differentiable systems are a crucial building block for many deep learning methods and similar optimization techniques.\\n\\nThe system presented by the authors is certainly impressive. Unfortunately, the paper itself covers a wide range of topics, and consists of an overview of the language with a programming tutorial, a collection of ten results, and a brief discussion of problems when computing gradients. \\n\\nThe core of the proposed work, the programming language seems to be quite powerful. However, it seems to be built on an existing system, which was published as a programming language for simulation in this years siggraph asia conference (Taichi: A language for high-performance computation on spatially sparse data structures. In SIGGRAPH Asia 2019 Technical Papers, pp. 201. ACM, 2019a). This ICLR submission seems to extend this system to build and provide gradient information automatically along with the simulation itself. There seem to be few technical challenges here, and many aspect discussed in section 2 are shared with the original simulation language.\\n\\nThe examples cover a nice range of cases, from simple mass spring systems and a rendering case to complex 3d simulations. Here, I was a bit surprised that the paper only compares to autograd, which has been succeeded by jax. The latter also provides a compiler backend to produce GPU code with gradients, and as such seems very closely related to the proposed language. From the submission, it's hard to say which version has advantages. The examples seem to be a sequence of demos of the language, rather than illustrating different technical challenges or improvements for a scientific conference. Or at least a discussion of these differences is currently missing in the text.\\n\\nSection four also mostly gives the impression of a loose discussion. The gradients for rigid body impacts are interesting, but seem relevant only for a subset of 2D examples shown in the paper. The discussion of gradient explosions is quite ad-hoc, and would be stronger with a more detailed analysis.\\n\\nThe submission as a whole aims for a very interesting direction, but I think the paper would benefit from focusing on a certain range of problems, such as the rigid body control cases, in conjunction with topics such as the improved gradients. Instead, the current version tries to combine this topic with a systems overview, a tutorial and loosely related discussions. Combined with the length of 10 pages, I think the work could use a revision rather than being accepted in its current form.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces DiffSim, a programming language for high-performance differentiable physics simulations. The paper demonstrates 10 different simulations with controller optimization. It shows that the proposed language is easier to use and faster than the other alternatives, such as CUDA and TensorFlow. At the end, the paper provides insightful discussions why the gradient of the simulation could be wrong.\\n\\nDifferentiable physics simulation is an important research area, especially for optimal control and reinforcement learning. While I am impressed by the large variety of examples demonstrated in the paper, I am leaning towards rejecting the paper because of its poor presentation. The paper only gives a simple and high-level example of the language (optimizing the rest length of springs that form a triangle), very brief descriptions of 10 examples and some discussions about the difficulty of computing useful gradients, but without any in-depth discussion how everything is implemented. This is not enough for an ICLR paper. For example, the paper does not answer some of the fundamental problems of differentiable physics. For example, collision and contact are inherently non-differentiable. How does the paper handle it in the examples of locomotion and billiards (Figure 4)? In addition, how does the paper back-propagate the gradient through the incompressibility conditions (Poisson solve) of fluid simulation? \\n\\nHere is my suggestions how to improve the writing. There are several ways to write the paper, with different emphasis. If this paper is more about introducing a new programming language, Appendix B Compiler Design and Implementation would be important and should be moved to main text. If the paper want to emphasize how to handle the non-differentiable cases of the simulation, then detailed derivations of contact, collision, and linear/nonlinear solving (due to incompressibility conditions or implicit integrators) should be presented. If the paper would like to demonstrate how differentiable physics simulation can help with controller optimization, then two to three examples, such as the locomotion control for soft bodies or rigid bodies, should be analyzed in far more details, and compared with traditional method without differentiable simulation. It is good to focus on one of the above points, based on the venue that this paper is submitted to. Currently, the paper is trying to touch all three. But due to the page limit, it is not thorough, or detailed in any one of them.\\n\\n-------------------------Update after rebuttal------------------------------\\nThank you for the revision of the paper and the additional comparisons with Jax. The revised version reads much better. The response and the revision addressed most of my concerns. Thus, I raised my rating to weak accept.\"}"
]
} |
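The author responses in the record above describe DiffTaichi/DiffSim's core mechanism: a lightweight tape records the forward simulation kernels and replays hand-derived gradient kernels in reverse order. The sketch below illustrates that idea in plain Python on the 1-D mass-spring example discussed in the reviews (unit mass, stiffness k, rest length L). The function names and tape layout are hypothetical, not the actual DiffTaichi API, and the adjoint is derived by hand for this single update rule.

```python
def step_forward(x, v, k, L, dt):
    """One semi-implicit Euler step of a 1-D unit-mass spring."""
    v_new = v - dt * k * (x - L)   # apply Hooke's-law force
    x_new = x + dt * v_new         # update position with the new velocity
    return x_new, v_new

def step_backward(k, dt, gx_new, gv_new):
    """Adjoint of step_forward: pull gradients from (x_new, v_new) back to (x, v)."""
    gv_new = gv_new + dt * gx_new  # from x_new = x + dt * v_new
    gx = gx_new - dt * k * gv_new  # from v_new = v - dt * k * (x - L)
    gv = gv_new
    return gx, gv

def simulate_and_grad(x0, v0, k, L, dt, steps, target):
    tape, x, v = [], x0, v0
    for _ in range(steps):                    # forward pass: record each step
        tape.append((x, v))                   # (saved states are unused here since
        x, v = step_forward(x, v, k, L, dt)   #  the force is linear; a nonlinear
    loss = 0.5 * (x - target) ** 2            #  force would need them)
    gx, gv = x - target, 0.0                  # seed gradient: d loss / d x_final
    for _ in reversed(tape):                  # backward pass: replay in reverse
        gx, gv = step_backward(k, dt, gx, gv)
    return loss, gx                           # gx is now d loss / d x0

loss, g = simulate_and_grad(x0=1.5, v0=0.0, k=10.0, L=1.0, dt=0.01,
                            steps=100, target=1.0)
eps = 1e-5                                    # finite-difference gradient check
lp, _ = simulate_and_grad(1.5 + eps, 0.0, 10.0, 1.0, 0.01, 100, 1.0)
lm, _ = simulate_and_grad(1.5 - eps, 0.0, 10.0, 1.0, 0.01, 100, 1.0)
print(g, (lp - lm) / (2 * eps))               # the two numbers should agree closely
```

For this linear spring the adjoint happens to need no saved states, but the tape is kept to mirror the general mechanism the paper describes, where nonlinear kernels consume the recorded values during the reversed replay.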
HJg4qxSKPB | Implicit Rugosity Regularization via Data Augmentation | [
"Daniel LeJeune",
"Randall Balestriero",
"Hamid Javadi",
"Richard G. Baraniuk"
] | Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks. Unlike classical machine learning algorithms, deep networks typically operate in the overparameterized regime, where the number of parameters is larger than the number of training data points. Consequently, understanding the generalization properties and the role of (explicit or implicit) regularization in these networks is of great importance. In this work, we explore how the oft-used heuristic of data augmentation imposes an implicit regularization penalty of a novel measure of the rugosity or “roughness” based on the tangent Hessian of the function fit to the training data. | [
"deep networks",
"implicit regularization",
"Hessian",
"rugosity",
"curviness",
"complexity"
] | Reject | https://openreview.net/pdf?id=HJg4qxSKPB | https://openreview.net/forum?id=HJg4qxSKPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ucM05O_9nW",
"HkgycV15sS",
"ryg5HmJ5iS",
"rke91XJ5jH",
"HJgBtISK5r",
"Syl3tBeAtH",
"rJlfN3TjtS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749739,
1573676166751,
1573675842004,
1573675745750,
1572587132710,
1571845507706,
1571703850319
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2467/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2467/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2467/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2467/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper aims to study the effect of data augmentation of generalization performance. The authors put forth a measure of rugosity or \\\"roughness\\\" based on the tangent Hessian of the function reminiscent of a classic result by Donoho et. al. The authors show that this measure changes in tandem with how much data augmentation helps. The reviewers and I concur that the rugosity measure is interesting. However, as the reviewer mention the main draw back of this paper is that this measure of rugosity when made explicit does not improve generalization. This is the main draw back of the paper. I agree with the authors that this measure is interesting in itself. However, I think in its current form the paper is not ready for prime time and recommend rejection. That said, I believe this paper has a lot of potential and recommend the authors to rewrite and carry out more careful experiments for a future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for their positive comments and useful suggestions. We provide the following response.\\n\\n1> How is the rugosity as a smoothness measurement for neural networks with piecewise affine activations different from the Lipschitz constant for general neural networks?\", \"two_main_features_of_our_measure_distinguish_it_from_the_lipschitz_constant\": \"1) Rugosity depends on the Hessian of the function generated by the deep network, which is a second-order smoothness measure.\\nIn contrast, the Lipschitz constant is a first-order measure that depends on the first derivative (Jacobian) of the network. In other words, rugosity quantifies how much the function generated by a deep network differs from an affine mapping over the input space. We believe rugosity provides a better measure for complexity, especially when the network consists of continuous piecewise linear activations that result in a continuous piecewise linear prediction function. 2) In many applications, for example when the input data consists of natural images, the training data points $x_i \\\\in \\\\mathbb{R}^{D}$ lie on a lower-dimensional manifold $M$ of dimension $d \\\\ll D$. We can exploit this local geometrical structure to evaluate the prediction mapping $f$ as a function of the manifold local coordinates and compute the rugosity on the data manifold. This result is a natural data-driven complexity measure for $f$ that evaluates its complexity over the signal space of importance. In contrast, the standard Lipschitz constant considers the entire input space $\\\\mathbb{R}^{D}$, which might be not relevant. \\n\\nIn addition, as we demonstrate through our empirical results in Section 4.1 and Table 1, for classification tasks on the CIFAR10 and SVHN datasets, rugosity better reflects the difference between training with and without data augmentation. Table 1 shows that data augmentation reduces $\\\\widetilde C$ when used for training on these datasets but has no effect (or increases) the Jacobian measure $J$. This suggests that rugosity is a more informative complexity measure than the Jacobian. \\n\\n2> There have been many recent studies on showing that gradient penalty / Lipschitz regularization are useful for achieving better generalization and adversarial robustness. The results in this paper on showing that regularizing rugosity does not improve accuracy seem to contradict with the conclusion of these prior studies. It is unclear to me whether this is caused by insufficient experimentation or if there is any fundamental difference between rugosity and Lipschitz regularization that I am missing.\\n\\nWe thank the reviewer for mentioning these interesting works. We plan to apply our rugosity measure as an explicit regularization penalty in the settings discussed in these papers in order to see the effect on generalization and adversarial robustness in our revised paper. In fact, we suspect that using rugosity as an explicit regularization should improve adversarial robustness, and we will include an empirical study of this hypothesis in our next revision. In addition, we should note that there are results in the literature confirming our observation that using the Jacobian as explicit regularization does not have a significant effect on generalization: Hoffman et al., ``Robust Learning with Jacobian Regularization.''\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for positive comments and useful suggestions. We provide the following responses for the comments.\\n\\n1> The main contribution of the paper, in my view, is the suggestion of using rugosity as a explicit regularization for training Neural Networks. Nevertheless, all the results in the paper show a negative impact of this on the test accuracy which is contradicting to the proposition.\\n\\nWe agree that we did not observe a significant improvement in generalization error as a result of using rugosity as an explicit regularization in our experiments. However, this is not the only contribution of our paper. As we show in the paper, rugosity provides a data-driven complexity measure for the prediction function of the deep network that can help demystify generalization and (implicit) regularization. As an example, we show how rugosity can be used to understand the effects of data augmentation. We plan to perform further exploration on the effects of using rugosity as an explicit regularization in our revised paper. For example, we expect that using rugosity as explicit regularization can improve adversarial robustness. \\n\\n2> The difference in finding (Table 1) between the CNN and ResNet networks can be more discussed. Additional tasks (like regression) or even toy examples can be useful in further explaining the connection between rugosity and generalization to test data.\\n\\nWe appreciate these useful suggestions. We agree that these points require further examination, and we will include this further analysis in our revision.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for positive comments and useful suggestions. We provide the following responses for the comments.\\n\\n1> The definition of rugosity is an extension of (Donoho & Grimes (2003)) in which the extension is not really improving anything or used anywhere in the paper.\\n\\nThe rugosity measure that we defined and used is in fact inspired by the tangent Hessian measure proposed by (Donoho & Grimes (2003)). We have modified the tangent Hessian integral measure provided in this paper to make it suitable to use for piecewise linear functions and for making it easier to compute. We believe that this measure can provide useful information about the landscape and complexity of the prediction function generated by the deep network.\\n\\n2> Data augmentation improves the generalization on deep learning models. This paper shows that DA induces rugosity (theorem 1), but rugosity does not improve generalization (empirically). Thus, rugosity is not responsible for generalization, which is the interesting property that we care about. The Hessian-based rugosity analysis of DA is correct, but it does not help to understand the generalization performance or any other useful property of DA.\\n\\nWe agree that our empirical results do not show significant classification performance improvement as a result of using rugosity as explicit regularization. However, the effect of data augmentation on rugosity (and also generalization) is a significant observation and points to the question of what other properties data augmentation possesses that lead to improve generalization. We believe that this is an important question that requires a more thorough and comprehensive understanding of regularization in the overparameterized (interpolating) regime. What our results suggest is that, although there is a close connection between rugosity and data augmentation, rugosity (or smoothness) cannot by itself explain the entire effect of data augmentation on generalization. \\n\\nIn addition, understanding generalization is not the primary goal of our paper. We believe that our rugosity measure can be a very useful data-driven measure for understanding the prediction function generated by a deep network. One of our applications was understanding data augmentation, which we illustrated through theoretical and empirical analysis. We showed the close connection between data augmentation and rugosity which suggests that rugosity can be a more effective complexity measure than other common complexity measures, e.g., the Jacobian. Further, there can be many other properties of deep networks, such as adversarial robustness, that rugosity can help us to better understand and improve. We believe that our work is only the first step in this direction. \\n\\n3> In 3.4 second paragraph the authors suggest that reducing rugosity can improve generalization as DA, but later we see that this is not the case.\\n\\nThe effect of rugosity as explicit regularization on improving generalization is what can be initially expected from Theorem 1 and the empirical results showing the connection between data augmentation and rugosity. However, surprisingly, this is not what we observed in our further experiments. 
This suggests that understanding generalization requires a more comprehensive understanding of properties (other than just the rugosity or smoothness) of the function generated by a deep network.\\n\\n4> The entire paper seems written with the idea of using rugosity as a surrogate of DA, but at the end it does not work.\\n\\nWe do not suggest to use rugosity as a surrogate for data augmentation. Rather, we propose rugosity as a useful, data-driven measure for studying a deep network and the complexity of the prediction function it produces. While we showed that rugosity has a close connection to data augmentation, our experiments show also that rugosity, by itself, cannot completely explain the generalization properties of deep nets or data augmentation.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper shows that a penalty term called rugosity captures the implicit regularization effect of deep neural networks with ReLU (and piecewise affine in general) activation. Roughly, rugosity measures how far the function parametrized as a deep network deviates from a locally linear function.\\n\\nThe paper starts by showing that the amount of training loss increased from adding data augmentation is upper bounded in terms of (roughly) a Monte Carlo approximate to a Hessian based measure of rugosity. It then formally derives this measure of rugosity for networks with continuous piecewise affine activations. Finally, experimental evaluation for classification tasks on MNIST, SVHN and CIFAR shows that data augmentation indeed reduces the rogusity by a significant amount particularly when using the ResNet structure. A somehow surprising message is, however, that if one imposes explicit regularization with rugosity in lieu of data augmentation, then the better generalization usually seen from data augmentation no longer presents, though one does get a network with smaller rugosity.\", \"comments\": \"It is quite interesting to see that the rugosity measure proposed in the paper captures at least some aspects of the implicit regularization effect of data augmentation both in terms of theory (i.e. Theorem 1) and practical observations. My feeling is that rugosity is mostly a measure of the smoothness of the function parametrized by the neural network. From that perspective, how is the rugosity as a smoothness measurement for neural networks with piecewise affine activations different from the Lipschitz constant for general neural networks? My guess is that data augmentation also decreases the Lipschitz constant of a neural network near the training data points, but regardless of whether this is true or not, it is not clear if and how rugosity is better than Lipschitz constant for characterizing the implicit regularization of data augmentation. \\n\\nIn addition, there have been many recent studies on showing that gradient penalty / Lipschitz regularization are useful for achieving better generalization and adversarial robustness, see e.g. [a,b,c]. The results in this paper on showing that regularizing rugosity does not improve accuracy seem to contradict with the conclusion of these prior studies. It is unclear to me whether this is caused by insufficient experimentation or if there is any fundamental difference between rugosity and Lipschitz regularization that I am missing.\\n\\n[a] Finlay et al., Lipschitz regularized deep neural networks generalize and are adversarially robust\\n[b] Gouk et al., Regularisation of Neural Networks by Enforcing Lipschitz Continuity\\n[c] Thanh-Tung et al., Improving generalization and stability of GANs\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper aims to explain the regularization and generalization effects of data augmentation commonly used in training Neural Networks.\\nIt suggests a novel measure of \\\"rugosity\\\" that measures a function's diversion from being locally linear and explores the connection between data augmentation and the decrease in rugosity.\\nIt further suggests the explicit use of rugosity measure as a regularization during training to replace need for data augmentation.\\nThe paper is very well written and both the positive and negative findings are clearly presented and discussed.\", \"cons\": [\"The main contribution of the paper, in my view, is the suggestion of using rugosity as a explicit regularization for training Neural Networks. Nevertheless, all the results in the paper show a negative impact of this on the test accuracy which is contradicting to the proposition.\", \"This result has been discussed in section 5 but without much evidence to the explanations mentioned. The connection is very interesting but I believe further work is needed to explain those negative results on test accuracy.\", \"The difference in finding (Table 1) between the CNN and ResNet networks can be more discussed.\", \"Additional tasks (like regression) or even toy examples can be useful in further explaining the connection between rugosity and generalization to test data.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper shows (theorem 1) that data augmentation (DA) induces a reduction of rugsity on the loss function associated to the model. Here rugosity is defined as a measure of the curvature (2nd order) of the function. However, the two concepts seems to be different because the authors empirically show that directly reducing the rugosity of a network does not improve generalization (in contrast to DA).\\n\\nI lean to reject this paper because the contributions, even if interesting, do not lead to any new understanding of the topic. More in detail, data augmentation improves the generalization on deep learning models. This paper shows that DA induces rugosity (theorem 1), but rugosity does not improve generalization (empirically). Thus, rugosity is not responsible for generalization, which is the interesting property that we care about.\\n\\nThe paper is well written and easy to follow, however I found the actual contribution limited because:\\n- The definition of rugosity is an extension of (Donoho & Grimes (2003)) in which the extension is not really improving anything or used anywhere in the paper.\\n- The Hessian-based rugosity analysis of DA is correct, but it does not help to understand the generalization performance or any other useful property of DA.\", \"additional_comments\": [\"In 3.4 second paragraph the authors suggest that reducing rugosity can improve generalization as DA, but later we see that this is not the case.\", \"The entire paper seems written with the idea of using rugosity as a surrogate of DA, but at the end it does not work\"]}"
]
} |
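The rugosity record above revolves around a tangent-Hessian measure of how far a network deviates from a locally affine map. The snippet below is a crude finite-difference proxy for such a measure, treating the model as a black-box callable; it probes random ambient directions rather than the paper's tangent directions on the data manifold, so it illustrates the idea rather than reproducing the paper's estimator.

```python
import numpy as np

def rugosity_proxy(f, X, n_dirs=8, eps=1e-2, seed=0):
    """Average squared second differences of f along random unit directions.

    f: callable mapping an (n, d) array to an (n,) array of outputs.
    X: (n, d) array of data points. For a function that is locally affine
    around the data, the second difference (a directional curvature proxy)
    vanishes; larger values indicate a "rougher" fit.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_dirs):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)                   # unit probing direction
        second_diff = (f(X + eps * v) - 2.0 * f(X) + f(X - eps * v)) / eps ** 2
        total += np.mean(second_diff ** 2)       # directional curvature energy
    return total / n_dirs

# Toy check: a quadratic has positive "rugosity", an affine map has ~zero.
X = np.random.default_rng(1).normal(size=(256, 5))
print(rugosity_proxy(lambda Z: (Z ** 2).sum(axis=1), X))   # > 0
print(rugosity_proxy(lambda Z: Z @ np.ones(5) + 3.0, X))   # ~ 0
```

A quantity like this could be averaged over mini-batches and added to the training loss as an explicit penalty, which is roughly the experiment whose negative generalization result the reviewers debate above.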
Syx79eBKwr | A Mutual Information Maximization Perspective of Language Representation Learning | [
"Lingpeng Kong",
"Cyprien de Masson d'Autume",
"Lei Yu",
"Wang Ling",
"Zihang Dai",
"Dani Yogatama"
] | We show state-of-the-art word representation learning methods maximize an objective function that is a lower bound on the mutual information between different parts of a word sequence (i.e., a sentence). Our formulation provides an alternative perspective that unifies classical word embedding models (e.g., Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to enhancing our theoretical understanding of these methods, our derivation leads to a principled framework that can be used to construct new self-supervised tasks. We provide an example by drawing inspirations from related methods based on mutual information maximization that have been successful in computer vision, and introduce a simple self-supervised objective that maximizes the mutual information between a global sentence representation and n-grams in the sentence. Our analysis offers a holistic view of representation learning methods to transfer knowledge and translate progress across multiple domains (e.g., natural language processing, computer vision, audio processing). | [
"methods",
"language representation",
"mutual information",
"sentence",
"computer vision",
"word representation",
"objective function",
"lower bound",
"different parts"
] | Accept (Spotlight) | https://openreview.net/pdf?id=Syx79eBKwr | https://openreview.net/forum?id=Syx79eBKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fTAWD1xiNf",
"r4k0jo0fy9",
"OFTf9q2VP_",
"KT8aePozXB",
"SklzdV82jr",
"HyxUKCS2oB",
"r1xKd5Z7or",
"rJlgPc-mjB",
"B1eMJ9-msH",
"Skx9YZG-qB",
"H1eWGbyRYB",
"Hyemr5VVtB"
],
"note_type": [
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1582574028675,
1581687650345,
1577110004901,
1576798749711,
1573835882097,
1573834366495,
1573227121109,
1573227095970,
1573226970331,
1572049281625,
1571840265499,
1571207738784
],
"note_signatures": [
[
"~Martin_Ma1"
],
[
"ICLR.cc/2020/Conference/Paper2466/Authors"
],
[
"~Zhengyan_Zhang1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2466/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2466/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2466/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2466/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2466/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Code for this paper\", \"comment\": \"Dear authors,\\n\\nGreetings!\\n\\nMay I ask will the code for this paper be possibly public?\\n\\nThank you so much!\"}",
"{\"title\": \"reply\", \"comment\": \"Thanks for the comments.\\n\\nFor 1, yes. The \\\\hat{x_{i:j}} follows the masking budget (15% of the sequence length) and there are several masked n-grams in this sentence.\\n\\nFor 2, in MLM g_{\\\\psi} is a simple lookup same as in the original BERT.\"}",
"{\"title\": \"Questions about I_{DIM} and I_{MLM}.\", \"comment\": \"Dear authors,\\n\\nThank you for the interesting paper and congratulations on the acceptance at ICLR.\\n\\nAfter reading, I have two questions about I_{DIM} and I_{MLM}:\\n1. According to the paper, \\\\hat{x_{i:j}} in I_{DIM} is a sentence masked at position i to j. I wonder whether \\\\hat{x_{i:j}} follows the masking budget (15% of the sequence length) and there are several masked n-grams in this sentence.\\n2. In I_{MLM}, does g_{\\\\psi} give the contextualized word embeddings from the final layer? If so, I think the model should compute the masked sentence for g_{\\\\omega} and unmasked sentence for g_{\\\\psi} simultaneously.\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper explores several embedding models (Skip-gram, BERT, XLNet) and describes a framework for comparing, and in the end, unifying them. The framework is such that it actually suggests new ways of creating embeddings, and draws connections to methodology from computer vision.\\n\\nOne of the reviewers had several questions about the derivations in your paper and was worried about the paper's clarity. But all of the reviewers appreciated the contributions of the paper, which joins multiple seemingly disparite models under into one theoretical framework.\\n\\nThe reviewers were positive about the paper, and in particular were happy to see the active response of authors to their questions and willingness to update the paper with their suggested improvements.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for the clarification. I hope we have answered your question above.\\n\\nRegarding novelty, the main contribution of the paper is a unifying framework of language representation learning models based on mutual information maximization. The framework also allows us to easily construct new self-supervised tasks and take inspirations from similar methods that have been successful in other domains. We use Deep InfoMax as an example to validate this claim, but training objectives derived from other methods such as AMDIM and CPC are also possible.\\n\\nPlease let us know if you have any other questions or concerns, and thank you for helping us improve the submission.\"}",
"{\"title\": \"Official Blind Review #1\", \"comment\": \"Yes, it is.\"}",
"{\"title\": \"response\", \"comment\": [\"Thank you for your thoughtful review.\", \"We have updated the paper based on your comments to improve clarity and reproducibility. We list a summary of our main changes below:\", \"In order to make it easier for readers to understand the differences between different models and how they are related to InfoNCE, we have added a summary in Table 1.\", \"We have improved notations by adding explicit definitions before they are used in Section 2 and Section 4, and added a short description of Deep InfoMax in Section 4.\", \"We have included model and training hyperparameter details in Section 5.1 and Appendix B.\", \"We added a motivation for mixing two different terms in the objective function. Our DIM is primarily designed to improve sentence and span representations. We combine it with MLM which is designed for learning (contextual) word representations, since our overall goal is to create better representations for both the sentence and each word in the sentence. We also note that Deep InfoMax for learning image representations mixes multiple terms in their objective function. We only take one of the terms from the full objective function and mix it with MLM.\", \"Regarding equation I_{DIM}, it is supposed to contain two g_{\\\\omega} and no g_{\\\\psi} as we use one network for encoding both the sentence and n-grams. This is not a typo.\"]}",
"{\"title\": \"response\", \"comment\": \"Thank you for your thoughtful review.\\n\\nWe have updated notations in Equations 1 and 2. The expectations are now taken over random variables (A and B) and the function takes particular values (a and b) of these random variables.\\n\\nRegarding your comment about increasing bias and reducing variance, we did observe that the quality of the InfoWord representations is relatively stable across different runs in our experiments (as evaluated by performance on downstream tasks). Could you please clarify a bit more whether this is what you are asking?\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for your thoughtful review. We have updated Equation 1 and the paragraph above so that I(...) is consistently a function of two variables.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper first gives a concise yet precise summary of maximizing one of variational lower bounds of mutual information, InfoNCE, then it provides an alternative view to explain case by case why word embedding Skip-gram, BERT, XLNet work in practice can be viewed by InfoNCE framework, thus we have a good understand for these methods. Moreover it introduces a self-learning method that maximizes the mutual information between a global sentence representation and n-grams in the sentence based on deep InfoMax framework instead. Experiments show that it is better then BERT and BERT-NCE. It's known that InfoNCE increases bias but reduce variance, the same is true for deep InfoMax. Do you observe this in your experiments? If so, please provide.\\n\\nThe paper is well-written and easy to follow. The originality is relative low though, since it is mainly an application of deep InfoMax to language modeling, not inventing a new algorithm and applying to language modeling.\\n\\nIn equations 1 and 2, should a, b be written in capital? Since they represent random variables.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes to make a clear connection between the InfoNCE learning objective (which is a lower bound of the mutual information) and multiple language models like BERT and XLN. Then based on the observation that classical LM can be seen as instances of InfoNCE, they propose a new (InfoWord) model relying on the same principles, but taking inspiration from other models also based on InfoNCE. Mainly, the proposed model differs both in the nature of the a and b variables used in InfoNCE, and also on the fact that it uses negative sampling instead of softmax. Experiments are made on two tasks and compared to a classical BERT model, and on the BERT-NCE model that is a BERT variant proposed by the authors which is somehow in-between BERT and InfoWord. They show that their approach works quite well.\\n\\nI have a very mitigated opinion on the paper. I) First, I really like the idea of trying to unify different models under the same learning principles, and then show that these models can be seen as specific instances of generic principles. But the way it is presented and explained lacks of clarity: for instance in Section 2, some notations are not well defined (e.g what is f?) . Moreover, the way classical models are casted under the InfoNCE principle is badly written: it assumes that readers have a very good knowledge of the models, and the paper does not show well the mapping between the loss function of each model and the InfoNCE criterion. It gives technical details that could (in my opinion) get ignored, and I would clearly prefer to catch the main differences between the different models that being flooded by technical details. So, my suggestion would be to improve the writing of this section to make the message stronger and relevant for a larger audience. II) The Infoword model can be seen as a simple instance of word masking based models, and as an extension of deep infomax for sequences (it would be certainly nice to describe a little bit what Deep InfoMax is to facilitate the reading). Here again, the article moves from technical details (e.g \\\"hidden state of the first token (assumed to be a special start of sentence symbol \\\") without providing formal definitions. Having a first loss function after paragraph 4 could help to understand the principle of this model (before restricting the model to n-grams). Moreover, the equation J_DIM seems to be wrong since it contains g_\\\\omega twice while I think (but maybe I am wrong) that it has also to be defined by g_\\\\psi. J_MLM is also not clear since x_i is never defined (I assume it is x_{i:i}). At last, after unifying multiple models under one common learning objective, the authors propose to mix two different losses which is strange (the effect of the second term is slightly studied in the experimental section) without allowing us to understand why it is important to have this second loss function and why the first one is not sufficient enough. At last, I am pretty sure to not be able to reproduce the model described in the paper (adding a section on that in the supplementary material would help), and many concrete aspects are described too fast (like the way to sample negative pairs). 
\\n\\nConcerning the experimental section, experiments are convincing and show that the model is able to achieve a performance which is close to classical models. In my opinion, tis section has to be interpreted as a proof that the proposed unified vision is a good way to easily define new and efficient models. \\n\\nTo summarize, the unification under the InfoNCE principle is interesting, but the way the paper is written makes it very difficult to follow, and the description of the proposed model is unclear (making the experiments difficult to reproduce) and lacks of a better discussion about the interest of mixing multiple loss.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper gives a big picture view on training objectives used to obtain static and contextualized word embeddings. This is very handy since classical static word embeddings, such as SGNS and GloVe, have been studied theoretically in a number of works (e.g., Levy and Goldberg, 2014; Arora et al., 2016; Hashimoto et al., 2016; Gittens et al., 2017; Allen and Hospedales, 2019; Assylbekov and Takhanov, 2019), but not much has been done for the modern contextualized embedding models such ELMo and BERT - I personally know only the work of Wang and Cho (2019), and please correct me if I am wrong.\\n\\n\\\"There is nothing as practical as a good theory\\\", and the authors confirm this statement: their theory suggests them to modify the training objective of the masked language modeling in a certain way and this modification proves to benefit the embeddings in general when evaluated on standard tasks.\\n\\nI don't have any major issues to raise. A minor comment is that the mutual information I(., .) being a function of two variables suddenly became a function of a single variable in Eq. (1) and in the text which precedes it.\"}"
]
} |
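The record above frames Skip-gram, BERT, and XLNet as maximizing InfoNCE, a variational lower bound on mutual information. Below is a self-contained NumPy sketch of the in-batch InfoNCE loss the reviews refer to; the temperature, batch size, and synthetic paired "views" are illustrative choices, not the paper's configuration.

```python
import numpy as np

def info_nce_loss(A, B, temperature=0.1):
    """In-batch InfoNCE: for paired rows A[i] <-> B[i], treat B[j], j != i,
    as negatives and maximize the log-softmax of the positive score. The
    negative of this loss lower-bounds I(A; B) up to log(batch_size)
    (van den Oord et al., 2018). A, B: (n, d) L2-normalized arrays."""
    scores = A @ B.T / temperature                  # (n, n) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)     # shift for numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))           # positives lie on the diagonal

# Toy usage: two noisy "views" of the same underlying variable per row.
rng = np.random.default_rng(0)
Z = rng.normal(size=(32, 16))
A = Z / np.linalg.norm(Z, axis=1, keepdims=True)
B = Z + 0.1 * rng.normal(size=Z.shape)
B /= np.linalg.norm(B, axis=1, keepdims=True)
print(info_nce_loss(A, B))   # small loss: each A[i] matches its own B[i]
```

In the paper's terminology, A would hold global sentence representations and B the encoded (masked) n-grams, with the scoring function playing the role of the critic.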
B1g79grKPr | Goal-Conditioned Video Prediction | [
"Oleh Rybkin",
"Karl Pertsch",
"Frederik Ebert",
"Dinesh Jayaraman",
"Chelsea Finn",
"Sergey Levine"
] | Many processes can be concisely represented as a sequence of events leading from a starting state to an end state. Given raw ingredients, and a finished cake, an experienced chef can surmise the recipe. Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP). Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video. GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length. Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: https://sites.google.com/view/video-gcp | [
"predictive models",
"video prediction",
"latent variable models"
] | Reject | https://openreview.net/pdf?id=B1g79grKPr | https://openreview.net/forum?id=B1g79grKPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QBNt8I0zAs",
"HygiHzCjiB",
"SygAcbAjiS",
"HkgEsgRooS",
"Skxd4xAojH",
"HklBPG5AFS",
"HyedaO2pKH",
"rJxnKiMaKB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749682,
1573802563192,
1573802389952,
1573802140513,
1573802032152,
1571885660712,
1571829952445,
1571789699863
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2465/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2465/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2465/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2465/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper addresses a video generation setting where both initial and goal state are provided as a basis for long-term prediction. The authors propose two types of models, sequential and hierarchical, and obtain interesting insights into the performance of these two models. Reviewers raised concerns about evaluation metrics, empirical comparisons, and the relationship of the proposed model to prior work.\\n\\nWhile many of the initial concerns have been addressed by the authors, reviewers remain concerned about two issues in particular. First, the proposed model is similar to previous approaches with sequential latent variable models, and it is unclear how such existing models would compare if applied in this setting. Second, there are remaining concerns on whether the model may learn degenerate solutions. I quote from the discussion here, as I am not sure this will be visible to authors [about Figure 12]: \\\"now the two examples with two samples they show have the same door in the middle frame which makes me doubt the method learn[s] anything meaningful in terms of the agent walking through the door but just go to the middle of the screen every time.\\\"\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author Response: added Wichers'18 comparison, added FVD/LPIPS evaluation, updated bottleneck results, added clarifications\", \"comment\": \"We thank all reviewers for the helpful comments and suggestions. To address them we made the following changes to the manuscript:\\n(1) We added comparison to the video prediction model of Wichers\\u201918 to Tab 1, showing that both our models, GCP-sequential and GCP-tree, outperform the added baseline on multiple datasets.\\n(2) We added evaluation with the perceptual metrics FVD and LPIPS to Tab 4 in addition to the reported standard video prediction metrics PSNR/SSIM. We show that both proposed goal-conditioned prediction models outperform all baselines on the added metrics across the four tested datasets.\\n(3) We extended the analysis of bottleneck discovery for hierarchical GCP to the Pick&Place dataset and find that the model is able to discover bottleneck states in the top nodes of the predicted hierarchy.\\n(4) We added clarifications to multiple sections of the manuscript addressing questions the reviewers raised. We updated Fig. 1 to better visualize the motivation of the approach. We updated Fig. 8 with the added bottleneck results and added Tab. 4 to the appendix to include the added comparisons and metrics. Further, we added an architecture figure to the appendix.\"}",
"{\"title\": \"Added FVD/LPIPS, clarified motivation\", \"comment\": \"We thank the reviewer for the comments on the motivation and suggesting additional experiments. As suggested, we made the following changes:\\n- In Tab 4, evaluated the compared models on FVD and LPIPS, perceptual visual quality metrics, showing that both goal-conditioned prediction models outperform all baselines across all datasets. \\n- Improved the presentation of our motivation and the introductory figure.\\n\\nWe answer the questions in detail below. Please let us know if this addresses your concern, or if you would like us to discuss this further or add additional evaluations!\\n\\n== 1. Why conditioning on the goal frame is interesting? ==\", \"a\": \"We thank the reviewer for bringing up the important point of motivation. We revised the introductory figure to more clearly reflect our motivation, and we next provide detailed application examples for goal-conditioned prediction (GCP) that expand on the motivation in our introduction. We will integrate these arguments in the final version of the paper.\\n- When controlling an agent, the goal state is often known in practice, and utilizing it for prediction should allow to construct better plans. Building such goal-conditioned agents with model-free techniques is an active area of research [1, 2, 3, 4, 5, 6, 7]. We are hopeful that building better goal-conditioned predictors will enable use of data efficient model-based techniques for such problems.\\n- More generally, in many natural settings the goal of a certain process is known and we want to leverage it for video generation. An example application of GCP is a tool that allows to edit or create a video. To modify a video, a human graphics designer might simply want to change a few seconds of video, and GCP can generate the interpolations to smoothly embed the frames in the video. This problem is distinct from open-ended forward prediction as the video is constrained by the desired final frame.\\n- Finally, we argue in the introduction that unconstrained prediction without a goal is often very challenging, as uncertainty increases dramatically for long time horizons. Conditioning on the goal reduces the uncertainty and makes long-horizon video prediction beyond lengths considered by prior work tractable, as our paper shows.\\n\\n== 2. Where the current conditional models suffer by conditioning on the goal image? ==\\nWe find that for long-horizon goal-conditioned prediction an expressive model that is able to handle the stochasticity in long sequences well is necessary. The two goal-conditioned prediction methods we compare to, DVF and CIGAN, are unable to handle the complexity of such prediction as they are designed for rather short sequences. This motivated our sequential latent variable approach. We note that certain prior work like Denton&Fergus\\u201918, Lee\\u201918, used sequential latent variable models for forward prediction and therefore one version of our proposed method, GCP-sequential, can be considered the goal-conditioned extension of this prior work. We clarified this in the manuscript.\\n\\n[1] Kaelbling, Leslie Pack. \\\"Learning to achieve goals.\\\" IJCAI. 1993.\\n[2] Schaul, Tom, et al. \\\"Universal value function approximators.\\\" International Conference on Machine Learning. 2015.\\n[3] Andrychowicz, Marcin, et al. \\\"Hindsight experience replay.\\\" Advances in Neural Information Processing Systems. 2017.\\n[4] Pong, Vitchyr, et al. 
\\\"Temporal difference models: Model-free deep rl for model-based control.\\\" ICLR. 2018.\\n[5] Nair, Ashvin V., et al. \\\"Visual reinforcement learning with imagined goals.\\\" Advances in Neural Information Processing Systems. 2018.\\n[6] Fu, Justin, et al. \\\"Variational inverse control with events: A general framework for data-driven reward definition.\\\" Advances in Neural Information Processing Systems. 2018.\\n[7] Warde-Farley, David, et al. \\\"Unsupervised control through non-parametric discriminative rewards.\\\" ICLR. 2019.\"}",
"{\"title\": \"Added FVD/LPIPS evaluation, additional clarifications\", \"comment\": \"We thank the reviewer for the helpful comments and suggestions. We made the following changes to the submission to address the reviewers remarks and answer the posed questions:\\n\\n== FVD+LPIPS metrics ==\\nWe added evaluation results with both metrics for all four datasets to Tab.4 in the appendix. We find that both proposed models for goal-conditioned prediction outperform video interpolation baselines as well as non-goal-conditioned prediction. \\n\\n== Stochastic inverse model ==\\nIndeed it is possible that multiple different action sequences lead from a start state o to a goal state o\\u2019 and prior work addressed this problem by conditioning the inverse model on a stochastic latent variable to explicitly model the uncertainty over the action trajectory [1]. However, in our experiments we did not find to be an issue, because there are typically only 1-3 time steps between the current state and the next predicted target of the inverse model. This is because GCP is able to predict a dense plan for the inverse model to follow. We note that the proposed method is general and can be used with stochastic inverse models.\\n\\n== DVF Off-the-shelf ==\\nWe want to point out that *all methods* were trained from scratch on the respective domain they were tested on, i.e. we re-trained the DVF model that we used to report numbers on H3.6M using the H3.6M training set. This is to allow fair comparison to the GCP models that were trained on the same data. We did not use the off-the-shelf DVF network. We thank the reviewer for pointing out this possible confusion and we added a footnote to the revised manuscript clarifying that all models were trained from scratch.\\n\\nWe again thank the reviewer for the helpful suggestions that improved the quality of the submission. Please let us know if there are any further questions! \\n\\n[1] Learning Latent Plans from Play, Lynch et al., 2019\"}",
"{\"title\": \"Added Wichers'18 Comparison, added additional qualitative visualizations, updated bottleneck results\", \"comment\": \"We thank the reviewer for the helpful comments and suggestions. To address the reviewers remarks we made the following improvements to the paper.\\n\\n== Wichers\\u201918 ==\\nAs suggested by the reviewer, we trained Wichers\\u201918 and report video prediction metrics in Tab. 1. We observe that this method struggles in our experimental setup, likely because deterministic prediction given only one conditioning frame is challenging, especially in stochastic environments. We have made an attempt at extending this baseline to the goal-conditioned setting. However, in our preliminary experiments we were not able to improve the performance over the original version. We also note that we were not able to run Wichers\\u201918 on datasets longer than 100 frames due to computational requirements.\\n\\n== Multiple sampled sequences == \\nWe added a visualization of multiple sequences sampled given the same start-goal frames to the appendix, Figure 10 for the Human 3.6 dataset and Figure 11 for the 2D maze dataset. We note that the original supplementary website contained examples of multiple sampled sequences for every dataset.\\n\\n== Bottleneck discovery ==\\nTo further investigate the bottleneck discovery phenomenon, we performed an experiment on the Pick&Place data, and we observe that the model reliably discovers bottlenecks in those data too. The generations are now shown in Fig. 8.\\n\\nWe again thank the reviewer for the helpful suggestions that improved the quality of the submission. Please let us know if there are any further questions!\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"REFERENCES ARE LISTED AT THE END OF THE REVIEW\", \"summary\": \"This paper proposes a method for video prediction that, given a starting and ending image, is able to generate the frame trajectory in between. They propose two variations of their method: A sequential and a tree based methods. The tree-based method enables efficient frame sampling in a hierarchical way. In experiments, they outperform the used baselines in the task of video prediction. Additionally, they used the learned pixel dynamics model and an inverse dynamics model to plan actions for an agent to navigate from a starting frame to an ending frame.\", \"pros\": [\"Novel latent method for goal conditioned prediction (sequential and hierarchical)\", \"Really cool experiments on navigation using the predicted frames\", \"Outperforms used baselines\", \"Weaknesses / comments:\", \"Missing baseline:\", \"The Human 3.6M experiments are missing the baseline from Wichers et al., 2018. I would be good to compare against them for better assessment of the predicted videos.\", \"Bottleneck discovery experiments (Figure 8):\", \"The visualizations shown in Figure 8 are very interesting, however, I would like to see if the model is able to generate multiple trajectories from the same frame. It looks like the starting frames (left) are not the same.\"], \"conclusion\": \"This paper proposes a novel latent variable method for goal oriented video prediction which is then used to enable an agent to go from point A to point B. I feel this paper brings nice insights useful for the model based reinforcement learning literature where the end goal can be guided by an image rather than predefined rewards. It would be good if the authors can include the suggested video prediction baseline from Wichers et al., 2018 in their quantitative comparisons.\", \"references\": \"Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee. Hierarchical Long-term Video Prediction without Supervision. In ICML, 2018\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: The following work proposes a model for long-range video interpolation -- specifically targetting cases where the intermediate content trajectories may be highly non-linear. This is referred to as goal-conditioned in the paper. They present an autoregressive sequential model, as well as a hierarchical model -- each based on a probabilistic framework. Finally, they demonstrate an application in imitation learning by introducing an additional model that maps pairs of observations (frames) to a distribution over actions that predicts how likely each action will map the first observation to the second. Their imitation learning method is able to successfully solve mazes, given just the start and goal observations.\", \"strengths\": \"-The extension to visual planning/imitation learning was very interesting\\n-Explores differences between sequential and hierarchical prediction models\\n\\nWeaknesses/questions/suggestions:\\n-In addition to SSIM and PSNR, one might also want to consider the FVD and LPIPS, both which should correlate better with human perception.\\n-How does the inverse model in section $ p(a | o,o')$ account for the case in which multiple actions may eventually result in o -> o', given than o' is sufficiently far from o? Does the random controller need to be implemented in a specific way to handle this?\\n-I think a fairly important unstated limitation is that latent-variable based methods tend not to generalize well outside of their trained domain. In table 1, I assume DVF was taken off-the-shelf, but all other methods were trained specifically on H3.6M?\", \"lpips\": \"https://github.com/richzhang/PerceptualSimilarity\", \"fvd\": \"https://github.com/google-research/google-research/tree/master/frechet_video_distance\\n\\n\\nOverall, I think the results seem pretty promising -- most notably the imitation learning results. I hope that the authors can address some of my concerns stated above.\\n\\n\\n** Post Rebuttal:\\nThe authors have adequately addressed my concerns regarding clarity and metrics. The current draft also better motivates the task of long-range interpolation vs short range interpolation. I maintain my original rating.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper reformulates video prediction problem by conditioning the prediction on the start and end (goal) frame. This essentially changes the problem from extrapolation to interpolation which results in higher quality predictions.\\n\\nThe motivation behind the paper is not clear. First of all, the previous work in video predicted is typically formulated as \\\"conditioned frame prediction\\\" where the prediction of the next frame is conditioned on \\\"a set of context frames\\\" and there is no reason why this set cannot contain the goal frame. Their implementation, however, is motivated by their application and therefore these models are usually only conditioned on the start frames. Unfortunately, besides the reverse planning in imitation learning, the authors did not provide a suite of applications where such a model can be useful. Hence, I think the authors should answer these two questions to clear up the motivation:\\n1. Why conditioning on the goal frame is interesting? It specifically helps to provide more concrete details than getting from Oakland to San Fransico.\\n2. Where the current conditional models suffer by conditioning on the goal image?\\n\\nMore experiments are required to support the claims of the paper as well. \\nGiven my point regarding context frames, a more fair experiment would be to compare the proposed method with them when they are conditioned on the goal frame as well. This explicitly has been avoided in 5.1.\\n The used metrics are not a good evaluation metric for frame prediction as they both do not give us an objective evaluation in the sense of the semantic quality of predicted frames. The authors should present additional quantitative evaluation to show that the predicted frames contain useful semantic information. FVD and Inception score come to my mind as good candidates. \\n\\nOn quality of writing, the paper is well written but it can use a figure that demonstrates proposed architecture. The authors provided the code which is always a plus. \\n\\nIn conclusion, I believe the impact of the paper, in the current form, is marginal at best and for sure does not meet the requirements for a prestigious conference such as ICLR. However, a more clear motivation, a concrete set of goals and claims, as well as more comprehensive experiments, can push the quality above the bar.\"}"
]
} |
HyeG9lHYwH | Compression without Quantization | [
"Gergely Flamich",
"Marton Havasi",
"José Miguel Hernández-Lobato"
] | Standard compression algorithms work by mapping an image to a discrete code using an encoder from which the original image can be reconstructed through a decoder. This process, due to the quantization step, is inherently non-differentiable, so these algorithms must rely on approximate methods to train the encoder and decoder end-to-end. In this paper, we present an innovative framework for lossy image compression which is able to circumvent the quantization step by relying on a non-deterministic compression codec. The encoder maps the input image to a distribution in continuous space from which a sample can be encoded with the expected code length being the relative entropy to the encoding distribution, i.e. it is bits-back efficient. The result is a principled, end-to-end differentiable compression framework that can be straightforwardly trained using standard gradient-based optimizers. To showcase the efficiency of our method, we apply it to lossy image compression by training Probabilistic Ladder Networks (PLNs) on the CLIC 2018 dataset and show that their rate-distortion curves on the Kodak dataset are competitive with the state-of-the-art at low bitrates. | [
"Image Compression",
"Bits-back efficient",
"Quantization"
] | Reject | https://openreview.net/pdf?id=HyeG9lHYwH | https://openreview.net/forum?id=HyeG9lHYwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BL0tboSR-F",
"HJlJd4gnsS",
"Bygbj0SuiH",
"rkxeGAHOiB",
"B1gfOpBdiS",
"SyxKW6SuoB",
"B1x_DUuhcr",
"SJgHF8Qt5B",
"HkgqED8k5H",
"SkxaSBcaFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749649,
1573811302931,
1573572249294,
1573572104132,
1573571945720,
1573571841462,
1572796000415,
1572578941299,
1571936049778,
1571820869217
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2463/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2463/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2463/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2463/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2463/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2463/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a method for lossy image compression. Based on the encoder-decoder framework, it replaces the discrete codes by continuous ones, so that the learning can be performed in an end-to-end way. The idea is interesting, but the motivation is based on a quantization \\\"problem\\\" that the authors show no evidence the competing method is actually suffering from. It is thus unclear how much does quantization in existing methods impact performance, and how much will fixing this benefit the overall system. Also, the authors may add some discussions on whether the proposed sampling of z_{c^\\\\star} is indeed also a form of quantization.\\n\\nExperimental results are not convincing. The proposed method is only compared with one method. While it works only slightly worse at low bit-rate region, the gap becomes larger in higher bit rate regions. Another major concern is that the encoding time is significantly longer. Ablation study is also needed. Finally, the writing can be improved.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for the rebuttal.\\n\\nWe agree that even though the competing method [1] uses quantization, it does not seem to suffer from it. You then argue that the other reason for removing quantization is that it circumvents the restriction for uniform distributions of the posterior, and that therefore your method could potentially surpass the results of [1]. If this is the motivation for your work, then you should demonstrate that by using flexible posteriors in your method, you actually improve upon the results of [1] or come close to it. This is currently not demonstrated in the paper, and I therefore retain my score.\"}",
"{\"title\": \"Responses to Reviewer #3\", \"comment\": \"We thank the reviewer for their detailed comments on our work.\\n\\nWe agree with the reviewer that in our paper we show no evidence that competing methods suffer from quantization, in particular, we do not believe that they suffer from it.\\n\\nOur work is simply focussed on an alternative, novel approach to lossy compression, that allows a much wider class of algorithms to be used as the encoder and decoder. Concretely, previous approaches required specific assumptions about the compression pipeline (e.g. what kind of quantization is performed. In [2] it is assumed to be rounding, and hence its derivative is replaced by a continuous relaxation that the authors had to choose) or the model (e.g. [1] relies on the latent posteriors to be uniform distributions such that their terms cancel in the ELBO). In contrast, our method works for any generative model where an approximate posterior for the latents is available. This means that an arbitrary valid VAE could be used in our method, with complete freedom of choice for both the latent posteriors and priors.\\n\\nWe, therefore, believe that the removal of restrictions on the latent space's distributions is a strong motivation, and by using a more flexible family for our VAE, the results of [1] could be surpassed.\\n\\nIndeed, the model performance degrades more at higher rates than the performance of [1]. A simple explanation might be that of model capacity: our model has been trained with much fewer latent filters than what [1] used (e.g. we used 24 latent filters as opposed to the 128 and 196 that Balle used for lower and higher rates, respectively).\\n\\nWe agree that a better empirical study could be performed. The main difficulty we found was assessing how different training sets might impact the model performance, and hence decided that the fairest comparison would be if all models were trained on the same dataset. Sadly, this limits the comparisons to works where the code to achieve the reported results is freely available, which was only true in the case of [1].\\n\\nWe opted to report single image statistics only as we believe that aggregate statistics are not necessarily meaningful [3], though we agree that it might provide a more robust idea of model performance, and hence we will report aggregate results in the next draft.\\n\\nWe thank the reviewer for additional feedback to improve the quality of our writing.\\n\\n[1] Johannes Ball\\u00e9, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. ICLR 2018.\\n[2] Lucas Theis et al. Lossy Image Compression with Compressive Autoencoders. ICLR 2017\\n[3] Johannes Ball\\u00b4e, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. ICLR 2016.\"}",
"{\"title\": \"Responses to Reviewer #1\", \"comment\": \"We thank the reviewer for their comments on our work. We appreciate the thoughtful feedback.\\n\\nWe agree that the discussion on the limitations of the method is perhaps a bit too limited and could be expanded so that the current challenges of applying our method are clearer. Indeed, even the theoretical upper bound falls off compared to Balle's results. Note that we use Gaussian priors on both levels of our VAE, in particular, standard Gaussians on the second stochastic level, that are not adjusted for the dataset. This is in contrast to Balle's approach, where they utilize a very flexible non-parametric prior for each dimension for their second stochastic level, and separately from the model, they also optimize it for the dataset. We conjecture that the reason this does not adversely affect their test performance is due to the large dataset used for training (~ 1 million high-resolution images). \\n\\nThe extent to which quantization (or the various relaxations of quantization for training) is an interesting question. Figure 4 in [1] shows a nice comparison of the actual quantization error versus the continuous estimate using uniform dither, as well as the approximation quality of the differential entropy used in training with the actual compression rate. It shows that both relaxations are very accurate, the main limitation of their method is that they are constrained to VAEs where the posteriors are always shifted uniform distributions, whereas our method allows the use of arbitrary posteriors.\\n\\nWe do not necessarily agree that the importance sampling algorithm is a form of quantization. Quantization is (usually) a rounding operation, used to assign non-zero mass to the symbols we wish to compress. The importance sampling procedure, on the other hand, serving as part of the relative entropy coding scheme, is lossless.\\n\\nDrawing different samples from the posterior distribution will naturally cause some variation in the output of the (deterministic) decoder, but VAEs have been demonstrated to be robust to noise on their stochastic level [2], and thus if the difference is not beyond floating-point precision, it certainly is beyond human perceptibility.\\n\\n\\n[1] Johannes Ball\\u00b4e, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression.\\nICLR 2016.\\n\\n[2] Bin Dai, Yu Wang, John Aston, Gang Hua, & David Wipf. Connections with robust PCA and the role of emergent sparsity in variational autoencoder models. JMLR 2018\"}",
"{\"title\": \"Responses to Reviewer #4\", \"comment\": \"Thank you for your feedback on our paper.\\n\\nOur contribution is the coding scheme, and how it can be used in conjunction with generative models. Thus, our main focus was not to find a good VAE architecture, hence we adopted the architecture of [1].\\n \\nWe understand that the choice of architecture in our setting is crucial to good performance, and we chose it precisely to be able to compare the performance of the appropriate trade-offs we made compared to the trade-offs of [1]. Concretely, note that we have usual Gaussian latent space on both levels of our network, which we use for relative entropy coding, whereas [1] have uniform posteriors on both levels, a uniform-Gaussian convolution as the first-level prior and a non-parametric prior on the second, \\nthat is optimized for the data separately.\\n\\nPerhaps the most obvious comparison that was left out would be with [2]. The issue is with comparability, namely that [2] trained on a custom dataset and their code was not available so that it could be retrained on the same dataset we trained our model on.\\n\\n\\n[1] Johannes Ball\\u00b4e et al. Variational image compression with a scale hyperprior. ICLR 2018.\\n\\n[2] Lucas Theis et al. Lossy Image Compression with Compressive Autoencoders. ICLR 2017\"}",
"{\"title\": \"Responses to Reviewer #5\", \"comment\": \"We thank the reviewer for providing feedback on our paper, we address each point below:\\n\\n1. The reviewer is right, continuous latent spaces and using the beta-ELBO as training objective have been studied extensively. The novelty of our work is rather in the compression part, where we show how continuous latent distributions could be used for lossless compression of latent variables, as part of a lossy compression pipeline.\\n\\nOur work is in contrast with all previous neural compression methods, which all used probability masses and entropy coding.\\n\\nSecond, we also adapt the REC algorithm developed by [1] for BNN compression, to compress the latents of generative models. These adaptations are necessary since i) the structure BNN weights' posterior distribution will differ from the structure of a datapoint's posterior in a generative model, and ii) [1] make use of successive retraining during the compression process, which is far too expensive for a compression codec.\\n\\n2. In the work of [2] only the possibility of such a compression method is demonstrated. In our work, we provide a realization of this using our importance sampler.\\n \\n3. We agree that more experiments could be performed, e.g. with [3], and across multiple media, e.g. audio or video compression as well. The principal reason for the lack of more experiments was the issue of comparability: there does not seem to be a clear consensus for what training set to use, hence for best comparison we sought to report methods where we could train on the same data as we used for our model and found that of the relevant sources only [4] had their code publicly available. \\n\\nAs mentioned earlier [1] have developed their method for BNN compression, but it does not extend to the lossy data compression setting.\\n\\nComparison to PNG is not quite relevant in our setting, as it is a lossless image compression algorithm and we work in the lossy setting. \\n\\nWe compare the performance of our method with JPEG in Figures 3, 7, 8 and 9 on various images from the Kodak dataset.\\n\\n4. We show PSNR comparisons in the above-mentioned figures, as well as MS-SSIM, plotted against the compression rate measured in bits per pixel.\\n \\n5. Our architecture is the one used in [4], our contribution is the compression of the latent variables.\\n\\n6-7. We would appreciate it if you were able to provide concrete examples where changes are needed.\\n\\n\\n[1] Marton Havasi, Robert Peharz, and Jos\\u00b4e Miguel Hern\\u00b4andez-Lobato. Minimal random code learning:\\nGetting bits back from compressed model parameters. NIPS workshop on Compact Deep Neural\\nNetworks with industrial application, 2018.\\n\\n[2] Geoffrey Hinton and Drew Van Camp. Keeping neural networks simple by minimizing the description\\nlength of the weights. In Proc. of the 6th Ann. ACM Conf. on Computational Learning\\nTheory. Citeseer, 1993.\\n\\n[3] Lucas Theis et al. Lossy Image Compression with Compressive Autoencoders. ICLR 2017\\n\\n[4] Johannes Ball\\u00b4e, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational\\nimage compression with a scale hyperprior. In International Conference on Learning Representations,\\n2018.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"The paper proposes a method for lossy image compression. Based on the encoder-decoder framework, it replaces the discrete codes by continuous ones, so that the learning can be performed in an end-to-end way.\\n\\nOverall, I think the current version should not be accepted. \\n\\nThe detailed comments are as follows.\\n\\n1. The novelty is not clear. Using continuous latent space has been analyzed for years. So does the negative beta-ELBO . Section 4 starts with importance sampling, then some implementation compromises, which makes the efficacy of resultant method unclear.\\n\\n2. You mention that REC achieves bits-back efficiency according to (Hinton 1993). However, how is c* selected in their paper? Now that you use importance sampling, does this still hold?\\n\\n3. The experiments are very insufficient. It only compares with one method, which is not enough. From the main content, the proposed method is improved upon (Havasi 2018). But it is not compared. Let alone the other methods mentioned in the related works. PNG and JPEG should also be compared. \\n\\n4. For the reconstruction, it is better to measure the quality quantitively by PSNR, and mark the Bits/Pixel. \\n\\n5. Ablation study is also needed. PLN without your contributed part should be evaluated alone. So far, I cannot tell how your method works and which part works. \\n\\n6. There are many broken sentences and typos.\\n\\n7. Some statements such as the application parts should be properly cited.\\n\\nLast, the paper may be entitled as 'image compression without quantization' instead.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper studied the image compression problem. Specially, the authors proposed to use neural networks to act as encoder / decoder. The training consists of minimizing the distortion of reconstruction and the difference between approximated posterior and true distribution. The output of encoder is sampled by the proposed relative entropy coding, which extended a previous method by introducing adaptive grouping for acceleration.\\n\\nIn summary, this paper gets rid of commonly used quantization techniques in image compression by using an approximate importance sampler which produces the encoding of images in a non-deterministic manner. With the construction of parameterized encoder / decoder, end-to-end training is conducted by popular gradient descent.\", \"here_is_a_question\": \"In experiments, the authors mentioned that the architecture is borrowed from another work. My question is how neural network architecture affects the performance? In other words, how to ensure that the performance is not obtained from the power of the backbone but from the proposed method itself. Are there any possible experiments which can be conducted to show the effectiveness by using different architectures?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a new image compression method that does not require quantizing the encoded bits in an auto-encoding style image compression model. The method builds on a VAE with image x, code z, and posterior q(z | x). Instead of directly encoding z, the proposed method samples z_{c^\\\\star} from posterior and store c^\\\\star as the compressed representation. A decoder then reconstruct an approximation of x from z_{c^\\\\star}. The authors show that this framework is bits-back efficient and draw connections to prior theoretical results. Experiments were conducted on the Kodak dataset based on the model of Balle et al. The proposed method works only slightly worse than Balle et al. at low bit-rate region, but the gap becomes larger in higher bit rate regions.\\n\\nThe method is technically sound and the paper is clearly written. My main concerns fall in practical aspects. In Figure 3, for 3 out of 4 images, the theoretical upper bound of the proposed method still do not outperform Balle et al. This suggests limitations of the proposed method. Discussion on the limitations of the method is limited. The results also beg the question: How much does quantization in existing methods impact performance, and how much will fixing this benefit the overall system. Finally, in my opinions, in some sense the sampling of z_{c^\\\\star} is also a form of quantization. Does drawing different samples from posterior leads to different reconstructed images? If it does, doesn't it also suffer from similar limitations as existing \\\"quantization\\\" methods? \\n\\nOverall, I think the direction of the proposed method has good potential, but it also leaves important questions unanswered. I think this paper will benefit from additional revisions.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThis paper aims to circumvent the quantization step and associated gradient approximations for compression algorithms that make use of entropy coding for compression. Entropy coding requires a probability mass function over discrete symbols. As an alternative approach the authors adapt the MIRACLE algorithm by Havasi et al. (2018), which was originally used to compress Bayesian neural networks, to work for lossy source compression with probability density functions over continuous latent variables.\\nThe algorithm is based on taking several importance samples from the prior p(z) with the encoding distribution q(z|x) as the target distribution. The number of samples is equal to the exponent of the KL between the encoding and prior distribution, which can quickly grow to an uncontrollably large number. The index of the sample with the maximum importance weight is used as the code for the original image. If both sender and receiver have access to the same random sampler, the receiver can reproduce the sample by drawing a number of samples equal to this index, and decoding the last sample. Similar in spirit to Havasi et al, the authors take several steps to ensure that the KL divergence (and therefore the number of samples) does not become prohibitively large. The compression performance is evaluated by training the proposed model and the competing neural network-based method [1] on the Clic dataset, and evaluating it on a subset of the images of the Kodak dataset. JPEG is also used as a baseline. The authors show results achievable if coding using the relative entropy was perfect (denoted with \\u2018theoretical\\u2019), and the practically achieved compression performance (\\u2018actual\\u2019).\", \"decision\": \"\", \"weak_reject\": \"although the idea of circumventing the quantization step required by the use of entropy coders is certainly valid and interesting, the results in the paper show that the resulting compression performance is worse than the competing method that does quantize. Moreover, the encoding time is significantly longer than this same baseline.\", \"supporting_arguments_for_decision\": \"Although the motivation for circumventing the quantization step seems plausible, the authors show no evidence that the competing method [1], which does perform a post-training quantization step, actually suffers from it. The authors even state on page 8 that \\u201c Most notably though, they only used the continuous relaxation during the training of the model, thereafter switching back to quantization and entropy coding, which, they show does not impact the predicted performance, and hence confirming that their relaxation during training is reasonable.\\u201c If post-training quantization is reasonable, then this overthrows the entire motivation. More importantly, the \\u201ctheoretically\\u201d achievable results of the proposed method in seem only competitive in the low-bit rate regime and worse in the higher bit rate regimes. Even more, the \\u201cactual\\u201d practically achieved compression results are worse than [1] and also considerably worse than the \\u201ctheoretically\\u201d achievable compression results. 
The authors do provide a reason for why the \\u201ctheoretical\\u201d and \\u201cactual\\u201d results are so far apart, but are unfortunately not able to overcome this issue.\\nIn the conclusion the authors honestly admit that the runtime of their method is much slower than the competitors (1-5 min for proposed method vs ~0.5 s for [1] for encoding times). I appreciate that the authors mention this. The authors then state \\u201cimproving the rate factor and the run-time does not seem too difficult a task, but since the focus of our work was to demonstrate the efficiency of relative entropy coding, it is left for future work.\\u201d I do not think this paper demonstrates the efficiency of relative entropy coding, the results simply don\\u2019t support this claim, and I therefore think that stating that the issues seem not too difficult to overcome is insufficiently convincing. \\n\\nThe quality of the empirical study can be improved. Another neural network-based compression baseline would make the empirical evaluation of the proposed method more insightful. Now we only see that the result is worse than [1], but it would be good to know how it compares to other baselines such as [2]. Furthermore, the paper does not show compression results aggregated over the entire Kodak dataset, but rather picks 2 images for the main part of the paper, and shows 3 in the appendix. Showing aggregate results gives a more robust estimate of the performance. Individual image results can just be put in the appendix. \\n \\n\\nAdditional feedback to improve paper (not part of decision assessment):\\n- In section 5 on the dependency structure of latents in the ladder VAE: is it really necessary to indicate the dependency structure with \\u201ctopological structure\\u201d? Seems unnecessary to me as dependency structure is a clear enough description already without making it sound overly complicated.\\n- Page 9: \\u201cFurther, our architecture also only uses convolutions and deconvolutions as non-linearities.\\u201d Convolutions and deconvolutions are not non-linearities.\\n- Fig 2: I\\u2019m not sure if this figure is relevant enough for such a prominent placement in the paper. It doesn\\u2019t discuss anything relevant to the contributions claimed in this paper. \\n- I can\\u2019t find a definition of O in line 3 of \\u201cprocedure\\u201d in algorithm 2 and in the return statement.\\n\\n\\n[1] Johannes Ball\\u00e9, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. ICLR 2018.\\n[2] Lucas Theis et al. Lossy Image Compression with Compressive Autoencoders. ICLR 2017\"}"
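The importance-sampling coding scheme summarized in the review above can be sketched as follows. This is our simplified rendering: it omits the block-wise and adaptive-grouping tricks the paper uses to keep the sample count tractable, and `log_weight(z)` is assumed to return log q(z|x) - log p(z):

```python
import numpy as np

def rec_encode(log_weight, sample_prior, kl_nats, seed):
    """Relative entropy coding by importance sampling (simplified sketch).

    Draws K ~ exp(KL(q||p)) samples from the prior with an RNG seeded
    identically on both sides, and transmits only the winning index."""
    rng = np.random.default_rng(seed)
    K = int(np.ceil(np.exp(kl_nats)))                # can grow very large
    samples = [sample_prior(rng) for _ in range(K)]
    c_star = int(np.argmax([log_weight(z) for z in samples]))
    return c_star, samples[c_star]                   # only c_star is sent

def rec_decode(c_star, sample_prior, seed):
    """Receiver replays the same prior sample stream and keeps sample c_star."""
    rng = np.random.default_rng(seed)
    z = None
    for _ in range(c_star + 1):
        z = sample_prior(rng)
    return z
```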
]
} |
HyleclHKvS | A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed | [
"Qingru Zhang",
"Yuhuai Wu",
"Fartash Faghri",
"Tianzong Zhang",
"Jimmy Ba"
] | Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning. The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs. In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem. Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t. We show that the learning dynamics of our regression model closely matches with that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models. Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparametrized neural networks. SVRG outperforms SGD after a few epochs in this regime. However, SGD is shown to always outperform SVRG in the overparameterized regime. | [
"variance reduction",
"non-asymptotic analysis",
"trade-off",
"computational cost",
"convergence speed"
] | Reject | https://openreview.net/pdf?id=HyleclHKvS | https://openreview.net/forum?id=HyleclHKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"m49Ij4iVU",
"Hkl2Dn5ssB",
"rJeMvjqiir",
"Skx-ZF9ijr",
"ByggwL9osH",
"r1x2--U6FB",
"SyxZrW16Yr",
"B1ghYumwKH",
"SJxQMScXKr",
"H1lyVZU_dB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"comment",
"official_review"
],
"note_created": [
1576798749621,
1573788772447,
1573788506496,
1573787897460,
1573787223841,
1571803395622,
1571774776747,
1571399811870,
1571165451488,
1570427174681
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2459/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2459/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2459/AnonReviewer2"
],
[
"~Sebastian_U_Stich1"
],
[
"ICLR.cc/2020/Conference/Paper2459/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Two reviewers as well as the AC are confused by the paper\\u2014perhaps because the readability of it should be improved? It is clear that the page limitation of conferences are problematic, with 7 pages of appendix (not part of the review) the authors may consider another venue to publish. In its current form, the usefulness for the ICLR community seems limited.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Large batch SGD is not the same as SVRG. Please consider updating the score.\", \"comment\": \"Thanks for your constructive feedback. We have revised our paper based on your suggestions.\", \"q\": \"I wonder if the phenomena still holds with much larger datasets.\\n>> We will try the same experiments with the larger datasets like ImageNet. But at least 32 groups are needed to generate one line in our plots. With larger-scale experiments more time-consuming, we cannot guarantee all of them will be done timely before the end of rebuttal.\\n[1]. Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning. In ICML, volume 80 of Proceedings of Machine Learning Research, pp. 3331\\u20133340. PMLR, 2018.\\n[2]. Mark Schmidt and Nicolas Le Roux. Fast convergence of stochastic gradient descent under a strong growth condition. arXiv preprint arXiv:1308.6370, 2013.\"}",
"{\"title\": \"Simplified linear models can still provide insightful intuitions to neural networks, shown also by our empirical results.\", \"comment\": \"Thanks for your constructive feedback.\", \"q\": \"In a lot of cases it doesn't really seem like the experiment is done running, e.g. fig 2 (b), 4 (b), and it's hard to make sweeping statements about the final loss without running to that point.\\n>> Thanks for your suggestions. We extended the plots in Fig 4 (b) and updated the plot in our new version. The number of epoch is changed from 96 to 192 but the progress is limited with SVRG still not attaining global minimum. As for the numerical experiment of Fig 2 (b), its y-axis is log scaled. We think it already runs to the optimal point when the loss attains $10^{-10} $.\"}",
"{\"title\": \"Theoretical contents are not known. Error bounds is not the same as exact expected loss at step t, which is necessary in our analysis. Please consider updating the score.\", \"comment\": \"Thanks for your constructive feedback. Here are some responses to the concerns you raised.\\n\\n1. The connection to neural networks:\\n\\nIn section 1.1, we mentioned neural tangent kernel (NTK) to connect our theoretical analysis with/withour label noise to the experiments in underparametrized/overparametrized regime. In fact, for over-parametrized neural networks, there are a bunch of work that draws connections between neural networks and linear regression model [2-5]. To be precise, when the number of parameters $p$ greatly exceeds the number of data $n$, it can be shown by [5] that the parameter $\\\\theta$ moves only a small amount w.r.t. some initialization ${\\\\theta}_0$, and hence it is possible to linearize the model around $\\\\theta_0$, i.e. $f(x;{\\\\theta}) = \\\\nabla_{{\\\\theta}} f(x; {\\\\theta}_0)^\\\\top {\\\\theta}$, for ${\\\\theta} = {\\\\theta}-{\\\\theta}_0$ the distance parameters moved during training. This is exactly the linear regression model, and this notion has already been adopted in [4, Section 1]. \\n\\nThe main difference between over- and under-parametrized neural nets is the ability for the function space to cover the target function, i.e. the so-called \\u201cinterpolation regime\\u201d. For under-parametrized neural nets, this model is related to the linear regression model with label noise. We do not directly analyze the behavior of SGD and SVRG on neural nets which is generally hard (esp. for finite horizon analysis), but such analysis on linear model could possibly give rise to the intuitions of neural nets behind.\\n\\n2. The theoretical content novelty.\\n\\nWe would like to emphasize that our paper derives the exact expected loss at t-step for the noisy least square model for both SGD and SVRG methods, instead of an non-asymptotic upper bound as given in [4]. Deriving the exact loss is necessary because we need to compare the two methods\\u2019s performance at each time step and upper bound cannot provide any valid comparisons. The dynamics we derived (Eq.5 and Lemma 2) are then used to run the numerical simulations, and compare the two methods in Section 4.1. \\nIn addition, [6] derives non-asymtotic upper bounds for implicit SGD methods, which does not contain SVRG. Our main contribution lies in the analysis of SVRG to explain its dilemma when applied in deep learning tasks. Hence we believe our theoretical results for the second moment of SVRG are novel. \\n\\n3. Related work.\\nWe have revised the related work and change them properly based on your suggestions.\\n\\n[1] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.\\n[2] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization. International Conference on Machine Learning. 2019.\\n[3] Du, Simon, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient Descent Finds Global Minima of Deep Neural Networks. International Conference on Machine Learning. 2019.\\n[4] Hastie Trevor, Andrea Montanari, Saharon Rosset, and Ryan J. Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560 2019.\\n[5] Chizat, Lenaic, Edouard Oyallon, and Francis Bach. 
On Lazy Training in Differentiable Programming. arXiv preprint arXiv: 1812.07956 2018.\\n[6] Toulis and Airoldi, \\\"Asymptotic and finite-sample properties of estimators based on stochastic gradients\\\" (2017)\"}",
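The linearization the response appeals to can be written out explicitly. In standard lazy-training notation (our rendering, consistent with [5] above, not a formula quoted from the paper):

```latex
f(x;\theta) \approx f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^\top \Delta\theta,
\qquad \Delta\theta = \theta - \theta_0 ,
```

so, to first order, training the over-parameterized network is least squares regression in \Delta\theta with fixed features \nabla_\theta f(x;\theta_0), which is the (noisy) linear model the paper analyzes.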
"{\"title\": \"New version updated. Common concerns among reviewers are addressed below.\", \"comment\": \"Thanks for constructive feedback from all reviewers. We have found and fixed a mistake in Appendix B. And all simulations were rerun based on new theorems, and our previous conclusions still hold. In the new theorems, the coefficient matrix in Eq.6 of Theorem 2 changes from a diagonal matrix to a positive definite symmetric matrix. In this case, the second moment of each parameter no longer evolve independently. Besides, we have revised all typos pointed out by reviewer 1 and reviewer 2. Below are some crucial points we need to clarify about our paper.\\n\\n1. Exact loss at step $t$ vs. error bound: \\nWe would like to emphasize that our paper derives the exact expected loss at t-step for the noisy least square model for both SGD and SVRG methods, instead of an non-asymptotic upper bound as given in citations referred by reviewer 3 [1]. Deriving the exact loss is necessary because we need to compare the two methods\\u2019 performances at each time step and upper bound cannot provide any valid comparisons.\\n\\n2. Connection of overparametrized/underparametrized to without/with label noise: \\nIn section 1.1, we mentioned neural tangent kernel (NTK) to connect our theoretical analysis with/withour label noise to the experiments in underparametrized/overparametrized regime. In fact, for over-parametrized neural networks, there are a bunch of work that draws connections between neural networks and linear regression model [2-5]. To be precise, when the number of parameters $p$ greatly exceeds the number of data $n$, it can be shown by [5] that the parameter $\\\\theta$ moves only a small amount w.r.t. some initialization ${\\\\theta}_0$, and hence it is possible to linearize the model around $\\\\theta_0$, i.e. $f(x;{\\\\theta}) = \\\\nabla_{{\\\\theta}} f(x; {\\\\theta}_0)^\\\\top {\\\\theta}$, for ${\\\\theta} = {\\\\theta}-{\\\\theta}_0$ the distance parameters moved during training. This is exactly the linear regression model, and this notion has already been adopted in [4, Section 1]. \\n\\nThe main difference between over- and under-parametrized neural nets is the ability for the function space to cover the target function, i.e. the so-called \\u201cinterpolation regime\\u201d. For under-parametrized neural nets, this model is related to the linear regression model with label noise. We do not directly analyze the behavior of SGD and SVRG on neural nets which is generally hard (esp. for finite horizon analysis), but such analysis on linear model could possibly give rise to the intuitions of neural nets behind.\\n\\n3. Convergence rate of SVRG vs. SGD: \\nOur paper presents results that may seem to contradict the well-known fact that SVRG converges faster than SGD, and this creates some confusion to the reviewers. We would like to clarify this paradox. First of all, we would like to emphasize that we derived the exact expected loss at t-step for both algorithms under the noisy linear square model, instead of any asymptotic convergence rate results. We then compared the two algorithms numerically by running the dynamics with constant learning rates (Section 4).\\nIn the traditional analysis, one needs a decaying learning rate schedule for SGD so that it can converge. SVRG on the other hand does not require a decay learning rate hence achieving a faster convergence rate. In contrast to the standard analysis, we used fixed learning rate schedule for both algorithms. 
In the case with label noise (under-parameterized), we observed that SGD achieved a lower loss than SVRG for the first part of the training, but SGD converged to a higher loss than SVRG, as expected. In the case without label noise (interpolation regime), our experimental results agreed with what\\u2019s known in prior work [6,7]: SGD and SVRG both achieve linear convergence. \\n\\n[1] Toulis and Airoldi. Asymptotic and finite-sample properties of estimators based on stochastic gradients. (2017)\\n[2] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.\\n[3] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization. International Conference on Machine Learning. 2019.\\n[4] Hastie Trevor, Andrea Montanari, Saharon Rosset, and Ryan J. Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560 2019.\\n[5] Chizat, Lenaic, Edouard Oyallon, and Francis Bach. On Lazy Training in Differentiable Programming. arXiv preprint arXiv: 1812.07956 2018.\\n[6]. Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning. In ICML, volume 80 of Proceedings of Machine Learning Research, pp. 3331\\u20133340. PMLR, 2018.\\n[7]. Mark Schmidt and Nicolas Le Roux. Fast convergence of stochastic gradient descent under a strong growth condition. arXiv preprint arXiv:1308.6370, 2013.\"}",
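For readers less familiar with the algorithmic contrast under discussion, here is a minimal NumPy sketch of one pass of constant-step-size SGD versus one SVRG epoch. `grad_i(theta, i)` is an assumed per-example gradient oracle; this is illustrative pseudocode, not the paper's code:

```python
import numpy as np

def sgd_epoch(theta, grad_i, n, lr, rng):
    """Plain SGD: cheap updates, but gradient noise persists near a minimum."""
    for _ in range(n):
        i = rng.integers(n)
        theta = theta - lr * grad_i(theta, i)
    return theta

def svrg_epoch(theta, grad_i, n, lr, rng):
    """SVRG: a snapshot full gradient acts as a control variate, reducing the
    variance of each update at the cost of extra gradient evaluations."""
    snapshot = theta.copy()
    full_grad = sum(grad_i(snapshot, i) for i in range(n)) / n
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(theta, i) - grad_i(snapshot, i) + full_grad  # unbiased estimate
        theta = theta - lr * g
    return theta
```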
"{\"comment\": \"Thanks for your interest in our paper. We will add k-SVRG to the related work.\\n\\nFor experiments with neural nets (deep learning), in section 4, we compared SVRG and SGD in both the underparameterized and the overparameterized regimes, using MLP for MNIST and CNN for CIFAR-10. In the overparameterized setting, the MLP for MNIST contains two hidden layers, each layer having 1024 neurons; the CNN for CIFAR-10 has one 64-channel convolutional layer, one 128-channel convolutional layer followed by one 3200 to 1000 fully connected layer and one 1000 to 10 fully connected layer. As for the underparameterized models, you can check section 4.2 for more details about our network architecture. The results from both regimes match our theoretical analysis. Underparameterized neural networks match our theoretical results with label noise and overparameterized models\\u2019 performance match the theoretical results without label noise. \\n\\nWith regards to SCSG, the method replaces the full-batch gradient with the gradient of a medium-size batch as a cheaper way to reduce the variance. Similarly, our analysis also assumes that SVRG uses a batch gradient to reduce the variance, instead of the full-batch gradient. In our numerical simulation and empirical studies, we tried to find the optimal batch size and snapshot interval for SVRG under a fixed computational budget, compared to SGD. Hence we believe the results we obtained should also apply to SCSG.\", \"title\": \"Comparison in deep learning settings.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims to compare SGD and SVRG in deep learning, motivated by recent results that SGD performs better than SVRG, despite the latter's theoretical optimality.\\nThe idea in the paper is to study this problem through linear regression by establishing \\nsome asymptotic bounds for both SGD and SVRG. By looking into the terms of these bounds one can initiate a comparative study. A mixed picture is presented in the experiments which roughly agrees with some of the authors' claims.\\n\\nThere are, however, several important issues with the paper that require a major revision:\\n\\n1) The connection between neural networks is never really established. There is also an obscure relationship between 'overparameterized/underparaterized' neural networks and 'without/with label noise'. While this relationship is important to switch our attention to a much simpler problem, the specifics are not explicated.\\n\\n2) The theoretical content is not novel. All results on second moments (and more) are well known. \\nFor example, [4] have both non asymptotic analysis, and a characterization of sampling variance for general SGD --- the assumptions of normal X with diagonal variance are very restricting (and unnecessary).\\nAdditionally, the assumption of \\\\theta_\\\\star = 0 is not exactly WLOG.\\n\\n3) The related work is not well cited. Examples: \\n\\n 3a) \\\"Instead of using the full gradients, the variants of SGD...\\\"\", \"the_citations_for_sgd_here_are_a_bit_confusing\": \"Robbins and Monro never talked about SGD; Duchi et al is not about standard SGD, and so on. Better references are [1, 2].\\n\\n 3b) \\\"The sampling variance and the slow convergence of SGD have been studied extensively\\nin the past (Robbins & Monro, 1951; Polyak & Juditsky, 1992; Bottou, 2010).\\\"\\nNone of this paper studies sampling variance of SGD. RM (1951) only study convergence of stochastic approximation. PJ (1992) is about iterate averaging. Bottou (2010) is also not about sampling variance, and only covers convergence on a high-level. \\nLook at [4] for the sampling variance of SGD procedures; also [5, 6].\\n\\n3c) \\\"Our main analysis tool is very closely related to recent\\nwork studying the dynamics of gradient-based stochastic methods.\\\"\\nMisses important prior work in stochastic approximation dynamics.\\nLook at [7].\\n\\n\\n[1] Zhang, \\\"Solving large scale linear prediction problems using gradient descent algorithms\\\" (2004)\\n[2] Bottou, \\\"Large-Scale Machine Learning with Stochastic Gradient Descent\\\" (2010)\\n[3] Amari, \\\"Natural gradient works efficiently in learning\\\" (1998)\\n[4] Toulis and Airoldi, \\\"Asymptotic and finite-sample properties of estimators\\nbased on stochastic gradients\\\" (2017)\\n[5] Li et al, \\\"Statistical inference using SGD\\\" (2017)\\n[6] Chen et al, \\\"Statistical Inference for Model Parameters in Stochastic Gradient Descent\\\" (2016)\\n[7] Kushner and Yin, \\\" Stochastic approximation and recursive algorithms and applications\\\" (2003)\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper compares SGD and SVRG (as a representative variance reduced method) to explore tradeoffs. Although the computational complexity vs overall convergence performance tradeoff is well-known at this point, an interesting new perspective is the comparison in regions of interpolation (where SGD gradient variance will diminish on its own) and label noise (which propogates more seriously in SGD vs SVRG). The analysis is done on a simple linear model with regression, with some experiments on simulations, MNIST, and CIFAR.\\n\\nOverall, I find the paper insightful and the nice and neat breakdowns of the sources of noise nicely interpretable. A weakness is that the regression model and linear separation is a bit oversimplified, and may not really capture the subtleties in deeper models. However, I didn't find the conclusions particularly controversial, so it's not obvious that the model is wrong--just very simple. \\n\\nHow are step sizes chosen in the experiments? In general, a huge benefit of variance reduction is the ability to use constant step sizes. Can the authors elaborate on a comparison between SGD with decaying step size vs SVRG with constant step size?\\n\\nOne suggestion I would push for is to extend the experiments in the plots. In a lot of cases it doesn't really seem like the experiment is done running, e.g. fig 2 (b), 4 (b), and it's hard to make sweeping statements about the final loss without running to that point. Since many of the experiments seem to be on relatively small datasets and easier models, this should not be too burdensome.\\n\\nWhile I like the breakdown of M vs m (for when the data is i.i.d.), I would say that the assumption that data is i.i.d. is not very realistic. That being said this is not a huge negative for this paper because both scenarios are considered.\", \"minor_stuff\": \"typo in theorem 4 (decay rate)\", \"post_rebuttal\": \"I read the comments and all the concerns are addressed. I don't really have any more major concerns about the paper.\"}",
"{\"comment\": \"Hi,\\nI just read your paper.\\n\\nIn addition to (Sebbouh et al., 2019) you might also find the (same) method in https://arxiv.org/abs/1805.00982 of relevance.\\n\\nIt would be very interesting to see a comparison of SGD and SVRG in deep learning settings. Did you do such comparisons?\\n\\nAlso, methods like SCDG (https://arxiv.org/abs/1609.03261) seem to be a bit cheaper than SVRG; do you know if they perform better (or more efficiently) than SVRG?\\n\\nThanks,\", \"title\": \"SCDG vs SGD?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper examines the tradeoffs between applying SVRG and SGD for training neural networks by providing an analysis of noisy least squares regression problems as well as experiments on simple MLPs and CNNs on MNIST and CIFAR-10. The theory analyzes a linear model where both the input $x$ and label noise $\\\\epsilon$ follow Gaussian distributions. Under these assumptions, the paper shows that SVRG is able to converge to a smaller neighborhood at a slower rate than SGD, which converges faster to a larger neighborhood. This analysis coincides with the experimental behavior applied to neural networks, where one observes when training underparameterized models that SGD significantly outperforms SVRG initially, but SVRG is able to attain a lower loss value asymptotically. In the overparameterized regime, SGD is demonstrated to always outperform SVRG experimentally, which is argued to coincide with the case where there is no label noise in the theory.\", \"strengths\": \"I liked how the authors distinguished between the underparameterized and overparameterized regimes in the analysis and experiments. This allowed them to observe different behavior between the two regimes when comparing SVRG and SGD. I also found the authors' setting of analyzing noisy least squares problems to be interesting because of its potential usefulness for both analytically and empirically understanding certain forms of DL phenomena. The introduction is also well-written.\", \"weaknesses\": \"One aspect that I found unclear about the paper is its definition of the SVRG algorithm. In the analysis, the paper examines the expected risk least squares problem, and (if I understand correctly), considers the version of SVRG where the snapshot gradient is sampled i.i.d. over a large batch randomly from the true distribution. This is in contrast to the original SVRG method, which was designed for the empirical risk (or finite-sum) problem, where the set of datapoints is fixed. This coincides with the experiments, where the full training set is used to evaluate the snapshot gradient. Is this the correct interpretation of the theoretical and experimental results?\\n\\nIf so, how does this theoretical version of SVRG compare to a stochastic gradient method with large batch size? Does the theoretical behavior and insights exhibited by SVRG differ significantly from the theoretical behavior of SGD with larger batch size? \\n\\nIn addition, is the noisy least squares regression model with a diagonal data covariance equivalent to a separable quadratic problem? If so, it may not be surprising that the expected second moment of each parameter would evolve independently from each other, as noted at the end of Section 2.\\n\\nI also found some of the theorems and proofs difficult to follow. This is partly due to some inconsistent notation: what is $B$ (vs $b$) (pg. 4)? Is $A = M$ (pg. 4)? What does $\\\\circ$ denote in the exponents in the Appendix? What is the meaning of the constants defined in Definition 1? Some further explanation of the theoretical results (such as the meaning of those constants and more directly comparing the bounds for SVRG and SGD) would help with interpreting their results, particularly Theorem 4. 
\\n\\nAlong these lines, is it true that the rate of convergence for SGD is faster than the rate for SVRG? The constants made this difficult to tell, and no explanation was provided (although this was claimed in the Experiments section).\\n\\nMost steps in the proof were also left unexplained, which made it difficult to follow without knowledge of certain properties of multivariate Gaussians. Some necessary assumptions were also missing from the definition of the model; in particular, the paper did not specify the relationship between $\\\\epsilon_i$ and $x_i$ (which I assume are independent). \\n\\nThe experiments could also certainly be reinforced with some larger scale experiments on some larger datasets (such as ImageNet). One could see that the results became much more messy in the case of the underparameterized CNN on CIFAR-10 for example, and I wonder if this phenomena still holds with much larger datasets.\", \"some_additional_typos\": [\"Should use \\\\citep for the Johnson & Zhang reference at the end of page 3\", \"SVRG Dynamics and Decay \\\"R\\\"ate in page 5\", \"Overall, although the paper provides an interesting observation and direction in contrasting the underparameterized and overparameterized regimes when comparing SVRG and SGD for training DNNs, in my opinion, the paper needs some additional refining, particularly in terms of clarity with respect to the theory and notation, and perhaps some more experiments. If I understand the theoretical and empirical SVRG algorithms correctly, I'm not currently convinced that the paper provides substantially more theoretical insight than before due to differences between the theoretical and empirical SVRG methods applied in this paper and the theoretical algorithm's similarity to large-batch SGD. The observation in the underparameterized regime, for example, has been highlighted in prior work even with logistic regression (particularly due to the cost of evaluating a full gradient), and a theoretical comparison of small-batch SGD and large-batch SGD neighborhood results for strongly convex problems (see Bottou, Curtis, and Nocedal (2018), for example) would lead to similar conclusions. Because of these reasons, I do not recommend this paper for publication at this time.\"]}"
]
} |
BJg15lrKvS | Towards Understanding the Spectral Bias of Deep Learning | [
"Yuan Cao",
"Zhiying Fang",
"Yue Wu",
"Ding-Xuan Zhou",
"Quanquan Gu"
] | An intriguing phenomenon observed during training neural networks is the spectral bias, where neural networks are biased towards learning less complex functions. The priority of learning functions with low complexity might be at the core of explaining generalization ability of neural network, and certain efforts have been made to provide theoretical explanation for spectral bias. However, there is still no satisfying theoretical results justifying the existence of spectral bias. In this work, we give a comprehensive and rigorous explanation for spectral bias and relate it with the neural tangent kernel function proposed in recent work. We prove that the training process of neural networks can be decomposed along different directions defined by the eigenfunctions of the neural tangent kernel, where each direction has its own convergence rate and the rate is determined by the corresponding eigenvalue. We then provide a case study when the input data is uniformly distributed over the unit shpere, and show that lower degree spherical harmonics are easier to be learned by over-parameterized neural networks. | [
"spectral bias",
"neural networks",
"towards",
"deep learning towards",
"deep",
"intriguing phenomenon",
"complex functions",
"priority",
"functions",
"low complexity"
] | Reject | https://openreview.net/pdf?id=BJg15lrKvS | https://openreview.net/forum?id=BJg15lrKvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"MKn75xxkem",
"rkxx_cYzsB",
"SkgDAPKfiS",
"rygGJvFzjH",
"SJlRf3oiKS",
"HJlZsUhVYS",
"S1e1WhjGFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749592,
1573194343643,
1573193679197,
1573193434168,
1571695638492,
1571239577205,
1571105782741
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2457/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2457/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2457/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2457/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2457/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2457/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors propose to understand spectral bias during training of neural networks from the perspective of the NTK. While reviewers appreciated aspects of the work, the general consensus was that the current version is not ready for publication; some concerns stem from whether the the NTK model and finite neural networks are sufficiently similar that we should be able to gain real practical insights into the behaviour of finite models. This is partly an empirical question, and stronger experiments are required to have a better sense of the answer. Nonetheless, the authors are encouraged to persist with this work, taking into account reviewer comments in future revisions.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for you detailed and helpful comments. We address your questions as follows.\\n\\nQ1. \\u201cfrom a mathematical point of view\\u2026 making this statement precise.\\u201d\\nA1. We agree that our result is intuitive. But we believe this is an advantage of our result, instead of a weak point. We also would like to point out that although the results match intuition, the proof is by no means trivial, especially because it only relies on milder over-parameterization conditions to learn the components of the target function with lower complexity. Given the fact that such a result has not been shown in previous work, closing this gap between mathematical intuition and rigorous theoretical analysis is indeed one of our contributions.\\n\\n\\nQ2. \\u201cI am skeptical about some of the implications for practitioners\\u2026 the NTK regime\\u201d\\nA2. Thanks for pointing it out. Our analysis is indeed in the NTK regime. However, we would like to emphasize that by focusing on the low complexity components of the target function, we have greatly improved the over-parameterization condition in standard results in the NTK regime (Du et al. (2018b)). Therefore, we believe our work helps pushing the study of neural networks in the NTK regime towards a more practical setting. We would also like to point out that the optimization method studied in this paper is standard gradient descent with a practically used initialization method. For these reasons, we believe that our results are also of great practical value. In the revision, we have emphasized that our analysis is in the NTK regime. However, we believe that the spectral bias phenomenon can be rigorously proved in other regimes of neural network training.\\n\\n\\nQ3.(a). \\u201cwhether some of the assumptions of their theorem are really met in practice. For example, the required sample size for higher order polynomials grows exponentially fast with the order and the required step size goes to zero exponentially fast.\\u201d\\nA3.(a). Thanks for your question. We have added a remark (Remark 3.9) to explain such exponential dependency. Here we would like to emphasize that instead of checking the rate in terms of $k$, a more reasonable measure should probably be the relation between $n$ and the number of independent function components being learned, which is $r_k$. Intuitively speaking, based on $n$ samples, it is only reasonable to expect learning less than or equal to $n$ independent components of the true function, and the exponential dependency in $k$ is a natural consequence of the fact that in the high dimensional space, there are a large number of linearly independent polynomials even for very low degrees. From this we can see that $n \\\\geq r_k = \\\\Omega(d^{k-1})$ is not an artifact, and is a reasonable and unavoidable assumption. In fact, even if we know that the target function is exactly a polynomial with degree less than or equal to $k$, it still requires exponentially many samples to fit this polynomial, since the number of coefficients in a high-dimensional polynomial is exponential in the degree of the polynomial.\\n\\n\\nQ3.(b). \\u201cDoes this really correspond to what is observed in practice? (Or is this a mere artifact of training in the NTK regime?) Is this what one observes in the experiments by Ramahan?\\u201d\\nA3.(b). The effect of different sample sizes are not considered in the experiments in Rahaman et al. 
(2018), and therefore no empirical observation conflicts with our theory. In fact, the discussion in Rahaman et al. (2018) below Theorem 1 actually matches our calculation in the setting $k \\\\gg d$, which backs up our theory. Moreover, we would also like to emphasize that all our results regarding spherical harmonics and the exponential dependency in their degrees are only a special case of Theorem 3.2 when the data inputs are uniformly sampled from unit sphere. Such exponential dependency does not necessarily exist for other input distributions. \\n\\n\\nQ4. About typos and presentation\\nA4. Thank you for pointing out these typos and presentation issues. We apologize for the typos and unclear statements. We have improved the presentation of our paper and fixed typos in the revision. \\n\\n\\nQ5. \\u201conly considering Fig. 1\\u2026 convergence rates of the different components are truly linear.\\u201d\\nA5. We admit that the figure under linear scale is not enough to show the linear convergence rate, and have added the same curves in log scale in Appendix E.2. In log scale we can now see that the curves indeed demonstrate linear convergence.\\n\\n\\nQ6. \\u201cin my view, one should explain the limitations of the theory more carefully.\\u201d\\nA6. Thanks for your suggestion. We have rephrased Remark 3.3 and mentioned in Section 1 and Section 5 to make it clear that there is still a gap between theory and practice in terms of the spectral bias of neural networks.\\n\\n\\nWe hope you find your concerns satisfactorily addressed by our response, which has also been reflected in the revised paper. Please let us know if you have more comments or any other suggestions.\"}",
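A short counting argument makes the sample-size point in A3.(a) above concrete; the notation is ours, added for readers of this thread. The space of polynomials of degree at most $k$ on $\mathbb{R}^d$ has dimension

```latex
\dim \mathcal{P}_k(\mathbb{R}^d) = \binom{d+k}{k}
  \;\ge\; \Big(\tfrac{d+k}{k}\Big)^{k}
  \;=\; \Omega\big(d^{k}\big) \quad \text{for fixed } k \ll d,
```

so any estimator that fits such a polynomial needs a number of samples exponential in $k$; this is the same growth behind the requirement $n \ge r_k = \Omega(d^{k-1})$ quoted in the reply.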
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your insightful comments! We address your questions as follows:\\n\\nQ1. \\u201cAs far as I can tell, the proof applies strictly vanilla SGD... practical side of the field.\\u201d\\nA1. Our current analysis focuses on vanilla gradient descent. However, we believe that other practically useful algorithms like ADAM should exhibit similar spectral bias phenomenon. Combining our current analysis with the analysis in Wu et al. (2019) and Zhou et al. (2018) can potentially provide similar result for ADAM-type algorithms, and this can be an interesting and promising future work direction. We have added some discussion in Section 5.\\n\\n\\nQ2. \\u201cGiven that the kernel depends on the loss function, and it's the eigenspectrum of the kernel's integrator operator that determines the convergence properties, can this work be applied to engineering better loss functions for practical applications?\\u201d\\nA2. Thank you for your great question. We would like to clarify that the definition of the neural tangent kernel should be independent of the loss functions. On the other hand, the kernel function actually depends on the neural network architecture. This suggests that our work might be applied to engineering better architectures, or compare different architectures. For example, comparison on spectral properties of the kernels corresponding to ResNets, CNNs and fully connected networks may shed light on the design of more effective network architectures.\\n\\n\\nWe hope that our answers have addressed your questions. We also revised our paper accordingly. Any further comments on the paper are more than welcome.\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you very much for your helpful and positive comments. We address your questions as follows.\\n\\nQ1. \\u201cIt is argued that the new bound of... there isn't any improvement here.\\u201d\\nA1. Thank you very much for pointing out this issue. We believe that this is a misunderstanding caused by a typo and a misuse of the big-O notation, which we have fixed in the revision. $\\\\mu_k$ should be $\\\\Omega(\\\\max(k^{-d-1}, d^{-k+1}))$ instead of the minimum of the two terms. Since the convergence speed is characterized as $(1-\\\\mu_k)^t$, the larger $\\\\mu_k$ is, the faster gradient descent converges. We can see that our result is better then previous results when $d \\\\gg k$, since we provide a larger lower bound for $\\\\mu_k$.\\n\\n\\nQ2. \\u201cThe proof of spectral analysis is said to follow a similar outline... prior techniques?\\u201d\\nA2. Our proof uses the same technique as the proof of Bietti and Mairal (2019) for the $k \\\\gg d$ setting. The major difference between our result and Bietti and Mairal (2019)\\u2019s is that we also consider the case $d \\\\gg k$, which is a more practical setting. This leads to an improved characterization for the eigenvalue $\\\\mu_k$. We have emphasized the difference in Remark 3.6.\\n\\n\\nQ3. \\u201cThe proof operates in... the mildly overparameterized / non-NTK regime!\\u201d\\nA3. By far, we only focus on the NTK regime and extend previous results by presenting a more precise characterization of convergence result. However we would also like to emphasize that in Theorem 3.2, the over-parameterization requirement is only related to $\\\\lambda_{r_k}$, the $k$-th distinct eigenvalue of NTK. Therefore our theory indeed works for networks with milder over-parameterization, compared with many prior results (for example, Du et al. (2018b)). We have emphasized this in Remark 3.3. Analysis in non-NTK regime is beyond the scope of this paper, and can be an interesting future work direction.\\n\\n\\nQ4. \\u201cIn Section 4: the y-axis of the graph... freshly sampled points.\\u201d\\nA4. Thanks for the suggestion. We have changed the \\u2018error\\u2019s coefficient\\u2019 into \\u2018projection length\\u2019 and given a clearer definition. Ideally the error coefficient is the Gegenbauer coefficient of the residual function: $f^*(x) - \\\\theta f_{\\\\mathbf{W}^{(t)}}(x)$, which can be seen as the projection length onto the Gegenbauer polynomial. \\n\\nThe experiments are designed to demonstrate the result of our main theorem, which states that the residual of training data, projected to certain eigenfunctions, will decrease at a certain linear rate depending on the eigenvalues. So what we want to show by the experiments is merely about how the residual of training data behaves. That is actually the projection length onto the vectors defined by Gegenbauer polynomials. We admit that the original y-axis label and the word \\u2018Nystrom\\u2019 is not very appropriate. We have presented a more informative definition in our revised version.\\n\\nIn the case where freshly sampled points are used, what we can get following the same procedure is the residual function\\u2019s Gegenbauer coefficient, which can be seen as the projection length in function space. We also present these results in Appendix E.1\\n\\n\\nQ5. \\u201cI felt the proofs in the Appendix are very opaque... these convergence proofs).\\u201d\\nA5. We apologize for the unclear proofs. 
To improve readability, we have added more comments before each lemma in Section B.1 about the intuition and the role of these lemmas in the main proof.\\n\\n\\nThe response above has been reflected in our revised paper. Please let us know if you have any further suggestions.\"}",
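The quantity tracked in these experiments can be made explicit. For the linearized dynamics underlying this discussion, gradient descent on the squared loss contracts the training residual independently along the kernel's eigenvectors; in our notation (a standard computation, consistent with the $(1-\mu_k)^t$ rate in A1, where the step size is absorbed into $\mu_k$):

```latex
% Residual dynamics: with kernel eigenpairs (\mu_k, v_k), step size \eta,
% and residual r_t = y - f_t on the training points,
r_t = \sum_k (1 - \eta \mu_k)^t \, \langle r_0, v_k \rangle \, v_k ,
```

so the "projection length" $|\langle r_t, v_k\rangle|$ decays geometrically at a rate set by $\mu_k$, which is exactly why the log-scale plots mentioned in A5 of the companion response should show straight lines.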
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper aims to provide theoretical justification for a \\\"spectral bias\\\" that is observed in training of neural networks: a phenomenon recorded in literature (Rahaman et al.), where lower frequency components of a signal are fit faster than higher frequency ones. The contributions of the paper are as follows:\\n1. Proves an upper bound on the rate of convergence on the residual error projected on top few eigenfunctions (of a certain integral operator). The upper bound is in terms of the eigenvalues of the corresponding eigenfunctions and is distribution independent.\\n2. Provides an upper bound on the decay of eigenvalues in the case of depth-2 ReLU networks and also a exact characterization of the eigenfunctions. While such upper bounds and the characterization of eigenfunctions existed in literature earlier, it is argued that the new bounds are better.\\n3. Combining the above two results, a justification is obtained for the \\\"spectral bias\\\" phenomenon that is recorded in literature.\\n4. Some toy experiments are provided to exhibit the spectral bias phenomenon.\", \"recommendation\": \"I recommend \\\"weak acceptance\\\". The paper takes a step towards explaining the phenomenon of spectral bias in deep learning. While concrete progress is made in the context of depth-2 ReLU networks (even though in NTK regime), perhaps the ideas could be extended to deeper networks.\", \"technical_comments\": [\"It is argued that the new bound of $O(\\\\mathrm{min}(k^{-d-1}, d^{-k+1}))$ is better than the bound of $O(k^{-d-1})$ from the previous work of Bietti and Mairal, in the regime where $d \\\\gg k$. I think there is a typo here. In the regime of $d \\\\gg k$, the bound $k^{-d-1}$ is the smaller one so both bounds are comparable. It is argued that $d \\\\gg k$ is the more relevant regime, but then there isn't any improvement here.\", \"The proof of spectral analysis is said to follow a similar outline as compared to the prior work of Bietti-Mairal, but it is not clear to me where this new proof deviates and improves on prior techniques? Or is it just a more careful analysis of the prior techniques?\", \"The proof operates in the \\\"Neural Tangent Kernel\\\" regime, by considering hugely overparameterized networks. This can be viewed as a negative thing, but then, most results in literature also operate in this regime and it is a major challenge for the field to prove results in the mildly overparameterized / non-NTK regime!\"], \"potential_suggestions_for_improvement\": [\"In Section 4: the y-axis of the graph is labeled \\\"error's coefficient\\\" which is non-informative. Is it $|a_k - \\\\hat{a}_k|$ ? I also had a question here about the proposed Nystrom method: Why is it okay to use the training points in the Nystrom method. Ideally, we should use freshly sampled points. Is there a justification for using the training points? If not, perhaps it is best to go with freshly sampled points.\", \"I felt the proofs in the Appendix are very opaque and it is hard to pinpoint what the new insight is (at least for a reader, like me, who does not have an in-depth familiarity with these convergence proofs).\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"I must qualify my review by stating that I am not an expert in kernel methods, and the mathematics in the proof is more advanced than I typically use. So it is possible that there are technical flaws to this work that I did not notice.\\n\\nThat being said, I found this to be quite an interesting paper. It provides a concise explanation for the types of features learned by ANNs: those that correspond to the largest eigenvalues of the kernel function. Because these typically correspond to the lowest-frequency components, this means that the ANNs tend to first learn the low frequency components of their target functions. This provides a nice explanation for how ANNs can both: a) have enough capacity to memorize random data; yet b) generalize fairly well in many tasks with structured input data. In the case of structured data, there are low frequency components that correspond to successfully generalized solutions.\\n\\nI have a few questions about the generality of this result, and its application to make better machine learning systems:\\n\\n1) As far as I can tell, the proof applies strictly vanilla SGD (algorithm 1). Would it be possible to extend this proof to other optimizers (say, ADAM)? That extension would help to connect this theory to the practical side of the field.\\n\\n2) Given that the kernel depends on the loss function, and it's the eigenspectrum of the kernel's integrator operator that determines the convergence properties, can this work be applied to engineering better loss functions for practical applications?\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the training of overparametrized neural networks by gradient descent. More precisely, the authors consider the neural tangent regime (NTK regime). That is, the weights are chosen sufficiently large and the neural network is sufficiently overparametrized. It has been observed that in this scenario, the neural network behaves approximately like a linear function of its weights.\\n\\nIn this regime, the authors show that, the directions corresponding to larger eigenvalues of the neural tangent kernel are learned first. As this corresponds to learning lower-degree polynomials first, the authors claim that this explains the \\\"spectral bias\\\" observed in previous papers.\\n\\n-I think that from a mathematical point of view, the main result of this paper is what one would expect intuitively: \\nWhen performing gradient descent with quadratic loss where the function to be learnt is linear, it is common knowledge that convergence is faster on directions corresponding to larger singular values. Since in the NTK regime, the neural network can be approximated by a linear function around the initialization one expects the behavior predicted by the main results. From a theoretical perspective, I see the main contribution of the paper as making this statement precise.\\n\\n-I am skeptical about some of the implications for practitioners, which are given by the authors: \\nFor example, on p.5 the authors write \\\"Therefore, Theorem 3.2 theoretically explains the empirical observations given in Rahaman et al. (2018), and demonstrates that the difficulty of a function to be learned by neural network should be studied in the eigenspace of neural tangent kernel.\\\" To the best of my knowledge, it is unclear whether practitioners train neural networks in the NTK regime (see, e.g., [1]). Moreover, I am wondering whether some of the assumptions of their theorem are really met in practice. For example, the required sample size for higher order polynomials grows exponentially fast with the order and the required step size goes to zero exponentially fast. Does this really correspond to what is observed in practice? (Or is this a mere artifact of training in the NTK regime?) Is this what one observes in the experiments by Ramahan?\\n\\nI think the paper is not yet ready for being published.\\n 1. There are many typos. Here is an (very incomplete) list.\\n -p. 2: \\\"Su and Yang (2019)\\\" improves the convergence...\\\"\\n -p. 2: \\\"This theorem gives finer-grained control on error term's\\\"\\n -p. 2: \\\"We present a more general results\\\"\\n -p. 4: \\\"The variance follows the principal...\\\"\\n -p. 4: \\\"...we will present Mercer decomposition in (the) next section.\\\"\\n2. I think that the presentation can be polished and many statements are somewhat unclear. For example, on p. 7 the authors write \\\"the convergence rates [...] are exactly predicted by our theory in a qualitative sense.\\\"\\n The meaning of this sentence is unclear to me. Does that mean in a quantitative sense? To be honest, only considering Fig. 
1 I am not able to assess whether the convergence rates of the different components are truly linear.\", \"i_decided_for_my_rating_of_the_paper_because_of_the_following_reasons\": \"-I think that for a theory paper the results obtained by the authors are not enough, as they are rather direct consequences of the \\\"near-linearity\\\" of the neural network around the initialization.\\n-In my view, there is a huge gap between current theoretical results for deep learning and practice. For this reason, it is not problematic for me that it is unclear, what the results in this paper mean for practitioners. (Apart from that, results for the NTK regime are interesting in its own right.) However, in my view, one should explain the limitations of the theory more carefully.\\n-The presentation of the paper needs to be improved.\", \"references\": \"[1] A note on lazy training in supervised differentiable programming. L Chizat, F Bach - arXiv preprint arXiv:1812.07956, 2018 \\n\\n\\n\\n-----------------------------\\n\\nI highly appreciate the authors' detailed response. However, I feel that the paper does not contain enough novelty to justify acceptance.\\n\\n------\\n\\\"Equation (8) in Arora et al., (2019b) only provides a bound on the whole residual vector, i.e., , and therefore cannot show different convergence rates along different directions.\\\"\\n\\nWhen going through Section 4 , I think that it is implicitly stated that one has different convergence along different directions.\\n-----\\nFor this reason, I am not going to change my score.\"}"
]
} |
rJxycxHKDS | Domain Adaptive Multibranch Networks | [
"Róger Bermúdez-Chacón",
"Mathieu Salzmann",
"Pascal Fua"
] | We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive at a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others.
This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared.
As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously. | [
"Domain Adaptation",
"Computer Vision"
] | Accept (Poster) | https://openreview.net/pdf?id=rJxycxHKDS | https://openreview.net/forum?id=rJxycxHKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"S0BA16bsUub",
"jymmAlDhyV",
"5CmnutmWy",
"BygqFzoYjr",
"B1emJzstsS",
"SJl_mWjtiH",
"B1x4SgJRFS",
"HygvQy4otr",
"H1lBZPAc_B"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1594112622615,
1593948419227,
1576798749563,
1573659266049,
1573659099031,
1573658912143,
1571840060258,
1571663646648,
1570592508788
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2456/Authors"
],
[
"~Sumukh_Aithal_K1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2456/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2456/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2456/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2456/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2456/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2456/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Plasticity term and Initialization details\", \"comment\": \"Thanks for your interest in our work, and for giving us the chance to clarify on those issues.\\n\\n1. You are correct, there is a mismatch in the usage of the term \\\"plasticity\\\". As you point out, in Section 3.1 we call the plasticity p and in in Section 4.2 plasticity is 1 - p. What we wanted to achieve with the plasticity term is to modulate the learning rate of the gates, so a larger value of the plasticity should semantically mean 'more flexible gates'. So, a \\\"plasticity term that decays linearly as training progresses\\\" means that the gates become less and less flexible over time. Maybe a less unfortunate wording would then be \\\"the *parameter p* that controls the plasticity is initially set to a small value\\\", which is then consistent with \\\"plasticity (1-p) is decayed linearly as training progresses\\\". So, as p increases, 1-p decreases.\\n\\n2. For some experiments we initialize specific branches with parameters from other well-known networks such as ImageNet. We add a small random noise to avoid having the exact same parameters in different branches. Notice also that the gate parameters are part of each particular branch too, so differences in the gates will influence the training as well.\"}",
"{\"title\": \"Queries regarding the paper\", \"comment\": \"1. In Section 3.1 (page 4) of the paper it's mentioned that the plasticity is initially set to a small value and then increased as the training continues.But in the Section 4.2(Implementation Details), it is mentioned that the plasticity is decayed linearly as training progresses and plasticity is defined as 1-p.\\nPlease clarify the above points.\\n2. Are all the branches initialized with same values? If yes then won't both the branches have the same parameters and behave like a single branch? If no, then how are they initialized?\\nAnd could you please let us know when you plan on releasing the code?\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"Although some criticism remains for experiments, I suggest to accept this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comments incorporated in revised version\", \"comment\": \"Thank you for your thorough review. Here are our responses:\\n\\n> Experimental issues:\\n> - Comparison with other state-of-the-art UDA methods (e.g., CDAN) is a must.\\n> This paper improves UDA in terms of adaptive parameters sharing, which is\\n> completely independent from most of the UDA contributions (including the DANN\\n> you compared) which improve the distribution alignment between feature\\n> representations. Therefore, it is imperative to compare that line of SOTA\\n> methods, otherwise why should we consider adaptative parameters sharing instead\\n> of distribution alignment? At best, the proposed multiflow network combined\\n> with the SOTA feature alignment method (e.g., CDAN other than DANN) should be\\n> considered and expected to beat CDAN itself.\\n\\nAs acknowledged by R3, the contribution of our method happens at the representation extraction level, and is agnostic to the distribution alignment itself.\\nOur comparison to RPT was motivated by the fact that it is the closest approach to ours, and since RPT relies on DANN, we also used it. However, our approach can indeed be employed with other distribution alignment strategies, and after the submission deadline, we have experimented with the Maximum Classifier Discrepancy (MCD) term of Saito et al., 2018. Our new results evidence that this can further improve our performance and consistently outperforms the original MCD.\\n\\n> - Many ablation studies or hyperparameter sensitivity analyses are missing.\\n> o How do you determine the number of parallel flows, i.e., K? Is it possible\\n> that 3 or 4, more than 2 flows, are better even in the UDA between two domains?\\n\\nWe are including experiments studying this in the reviewed version of the paper. During the training process, we have observed that the networks quickly choose to ignore extra flows when K > D. This suggests that they did not contribute to the learning of our feature extraction. We did not find experimental evidence to support that K > D is beneficial.\\n\\n> o Do you try any other possibilities of grouping a computational unit, and\\n> how will different configurations influence the performance?\\n\\nThere are arbitrarily many ways to group layers into computational units, and in our experiments, we used the blocks naturally emerging from the original architecture, such as the convolutional blocks defined in ResNet-50.\\nAn extensive evaluation would require much more time than that available for this rebuttal, and we believe that our current results already show the benefits of our approach.\\n\\n> o Is there a possibility that none of the gates in the final layer is\\n> activated? Do you need some constraints?\\n\\nIt is theoretically possible that none of the gates be activated. In practice, we have found that both the classification signal (for supervised domains) and the feature alignment (for unsupervised ones) are enough to prevent this from happening.\\n\\n> - Since the authors mentioned the potential of the multi-flow network in\\n> adaptation between multiple domains, it is necessary to investigate\\n> multi-source or multi-target domain adaptation. Only in this case may the\\n> significance of different K values be demonstrated.\\n\\nWe have included experimental results using two and three source domains. The unsupervised multi-target domain problem is significantly more difficult and will require additional constraints. 
While we believe it to be an interesting problem, we consider it more suitable for future work.\\n\\n> - The baseline results in Table 1 are not comparable to some reported papers,\\n> and even lower than those reported in other UDA papers.\\n\\nWe have either used the results reported in the respective references, when available, or used publicly available code to generate results under the suggested experimental conditions (number of training epochs, batch size, data preprocessing, optimization algorithm, learning rate). We have been careful not to introduce any biases from our method so as to make meaningful comparisons. It is, however, possible that under different experimental conditions other papers arrive at different results.\\n\\nWe have updated the manuscript to reflect the above changes.\\n\\nThank you for your input.\"}",
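For context on the swap mentioned in this reply: in MCD (Saito et al., 2018), the DANN-style domain classifier is replaced by a discrepancy between two task classifiers $F_1, F_2$ on top of the shared feature extractor $G$. To the best of our reading of that paper, the term is the mean absolute difference of their class probabilities on target samples,

```latex
d\big(F_1(G(x)), F_2(G(x))\big) = \frac{1}{C}\sum_{c=1}^{C}
  \big|\, p^{(1)}_c(x) - p^{(2)}_c(x) \,\big| ,
```

which the classifiers are trained to maximize and the feature extractor to minimize, in alternating steps. Since this only changes the alignment term, it plugs into the multibranch feature extractor the same way DANN does.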
"{\"title\": \"Comments incorporated in revised version\", \"comment\": \"Thank you for your analysis. Here are our comments:\\n\\n> Although the idea of adaptive computation is not novel and has been explored,\\n> their application to the domain adaptation problem is novel to the best of my\\n> knowledge. Moreover, the proposed method is sensible and technically sound.\\n\\n> The submission talks about different amount of computation needed per domain\\n> as an intuition behind the method. This is sensible and intuitive; however,\\n> it has not been experimented. The paper uses the same amount of layers for\\n> all domains making the amount of computation exactly same. It would be\\n> interesting to see the performance when different paths actually lead\\n> different computations. For example, parallel blocks can have different\\n> number of layers etc.\\n\\nWe have included results showing the behavior under this setting in the revised version of the paper. In particular, we study both using different capacities for the flows and incorporating additional flows to have more flows than domains. The results are provided in Appendix C and show that, in the first case, the more complex domains tend to use flows with higher capacity, and in the second case, that learning tends to discard some flows when there are more than domains.\\n\\n> The submission only provides result for RPT and DANN. These are clearly not\\n> state-of-the-art domain adaptation methods. Proposed method does not\\n> necessarily need to have state-of-the-art adaptation results to be accepted,\\n> but not reporting what state-of-the-art performance is makes the experimental\\n> results incomplete.\\n\\nOur comparison to RPT was motivated by the fact that it is the closest approach to ours. Since RPT relies on the same domain classifier as DANN, DANN came as a natural baseline. However, we will further report the state-of-the-art performance.\\n\\nNote that most of the SOTA techniques can also benefit from our approach. To evidence this, we performed additional experiments by replacing the DANN domain classifier in our approach with the Maximum Classifier Discrepancy (MCD) term of Saito et al., 2018. This further improves our results and our approach consistently outperforms the original MCD.\\n\\n> Figure 4 suggests that there is no real parameter sharing at the end of the\\n> training. And, all domains have different computations. Authors should try to\\n> explain this behaviour since it is quite counter-intuitive.\\n\\nWe believe that there was some confusion. In Fig. 4, each row indicates how much each domain uses each flow. In each column, the same color indicates the same flow. As such, at the end of the training, Fig. 4 shows that, for example, all domains share the same flows in computational units conv2_x and conv5_x. We acknowledge that, in conv1, each domain uses its own private flow. This, we believe, confirms our intuition: Initially, the domains need to undergo different computations, because of their appearance differences, but can then share some of the following computations in later stages of the network. We hope that this clarifies the reviewer's concern.\\n\\n> In summary, proposed method is somewhat novel, interesting and seems to be\\n> working well. Improved discussion on the experimental study is definitely\\n> needed.\\n\\nWe have updated the manuscript to reflect the above changes.\\n\\nThank you for your input.\"}",
"{\"title\": \"Comments incorporated in revised version\", \"comment\": \"Thank you for your insights. Here are our responses:\\n\\n>- I would prefer to get the paper additionally linked to a few more transfer\\n>learning techniques out of the deep learning domain which is important as well\\n\\nWe focused our literature review on deep learning because our work proposes an\\napproach for deep architectures. For the sake of completeness, we will\", \"nonetheless_include_the_following_papers\": \"- Boosting for Transfer Learning [Dai07]\\n- Covariate Shift by Kernel Mean Matching [Gretton09]\\n- Domain adaptation via transfer component analysis [Pan10]\\n- Unsupervised Visual Domain Adaptation Using Subspace Alignment [Fernando13]\\n- Deep CORAL: Correlation Alignment for Deep Domain Adaptation [Sun16]\\n- A DIRT-T approach to unsupervised domain adaptation [Shu18]\\n\\n>- do you really need to call it (multi) flow network .... - a flow network is\\n>a well established concept in algorithmics and refers to a graph problem ...\\n>to avoid name clashes ...\\n\\nWe propose to rename our approach as Domain-Adaptive Multibranch Networks.\\n\\n>- in the references you have provided back links to the pages where the\\n>references are used - this is handy but also confusing and a bit unusual - I\\n>think it was not part of the standard template\\n\\nWe will abide by the template and remove the back links.\\n\\n>- please avoid using arxiv references but replace them by reviewed material.\\n>In parts I am willing to accept such kind of gray literature provided by well\\n>known authors but this should not become a standard habit\\n\\nWe have replaced the arxiv references with reviewed ones when available.\\n\\n>- I am happy to see that the code will be published - I hope this is really\\n>done, because from the material it maybe hard to reconstruct the method\\n\\nOur code is currently stored in a private github repository, which will be set as public once the blind review period ends.\\n\\nWe have updated the manuscript to reflect the above changes.\\n\\nThank you for your input.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper provides an unsupervised domain adaptation approach\\nin the context of deep learning. The motivation is clear, related work\\nsufficient and experimental settings and results convincing.\", \"i_have_only_very_minor_comments\": \"- I would prefer to get the paper additionally linked to a few more\\n transfer learning techniques out of the deep learning domain\\n which is important as well\\n- do you really need to call it (multi) flow network .... - a flow network\\n is a well established concept in algorithmics and refers to a graph problem\\n ... to avoid name clashes ...\\n- in the references you have provided back links to the pages where the references\\n are used - this is handy but also confusing and a bit unusual - I think it was not part \\n of the standard template\\n- please avoid using arxiv references but replace them by reviewed material. In parts\\n I am willing to accept such kind of gray literature provided by well known authors but\\n this should not become a standard habit\\n- I am happy to see that the code will be published - I hope this is really done, because\\n from the material it maybe hard to reconstruct the method\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"After Discussion Period:\\n\\nI stick to my original score. My issues are largely resolved.\\n\\n----\\nThe submission is using adaptive computation graphs for domain adaptation. Multi-flow network is the main architectural element proposed in the submission. And, it is composed of parallel blocks of computations which aggregated using weighted summation with learnable weights. The domain adaptation is performed by setting different weights for source and target dataset. The adaptive weights and network parameters are all learned jointly by minimizing the combination of classification loss and domain difference loss.\\n\\nAlthough the idea of adaptive computation is not novel and has been explored, their application to the domain adaptation problem is novel to the best of my knowledge. Moreover, the proposed method is sensible and technically sound.\\n\\nThe submission talks about different amount of computation needed per domain as an intuition behind the method. This is sensible and intuitive; however, it has not been experimented. The paper uses the same amount of layers for all domains making the amount of computation exactly same. It would be interesting to see the performance when different paths actually lead different computations. For example, parallel blocks can have different number of layers etc.\\n \\nThe submission only provides result for RPT and DANN. These are clearly not state-of-the-art domain adaptation methods. Proposed method does not necessarily need to have state-of-the-art adaptation results to be accepted, but not reporting what state-of-the-art performance is makes the experimental results incomplete.\\n\\nFigure 4 suggests that there is no real parameter sharing at the end of the training. And, all domains have different computations. Authors should try to explain this behaviour since it is quite counter-intuitive. \\n\\nIn summary, proposed method is somewhat novel, interesting and seems to be working well. Improved discussion on the experimental study is definitely needed.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors proposed to address the information asymmetry between domains in unsupervised domain adaptation. Innovatively, they resort to a multiflow network where each domain adaptatively selects its own pipeline. I quite appreciate the idea itself, while there are many essential issues to be addressed first.\", \"pros\": [\"The way tackling the information asymmetry, or untie weights, between domains is novel and interesting.\", \"The proposed network/framework can be easily extended to the multi-task setting, or multi-source/multi-target domain adaptation.\", \"The paper is well-written and easy to follow.\"], \"cons\": [\"The most critical downside of this paper is its insufficient experiments to support the whole idea, where we will detail in the next.\"], \"experimental_issues\": [\"Comparison with other state-of-the-art UDA methods (e.g., CDAN) is a must. This paper improves UDA in terms of adaptive parameters sharing, which is completely independent from most of the UDA contributions (including the DANN you compared) which improve the distribution alignment between feature representations. Therefore, it is imperative to compare that line of SOTA methods, otherwise why should we consider adaptative parameters sharing instead of distribution alignment? At best, the proposed multiflow network combined with the SOTA feature alignment method (e.g., CDAN other than DANN) should be considered and expected to beat CDAN itself.\", \"Many ablation studies or hyperparameter sensitivity analyses are missing.\", \"o\\tHow do you determine the number of parallel flows, i.e., K? Is it possible that 3 or 4, more than 2 flows, are better even in the UDA between two domains?\", \"o\\tDo you try any other possibilities of grouping a computational unit, and how will different configurations influence the performance?\", \"o\\tIs there a possibility that none of the gates in the final layer is activated? Do you need some constraints?\", \"Since the authors mentioned the potential of the multi-flow network in adaptation between multiple domains, it is necessary to investigate multi-source or multi-target domain adaptation. Only in this case may the significance of different K values be demonstrated.\", \"The baseline results in Table 1 are not comparable to some reported papers, and even lower than those reported in other UDA papers.\"]}"
]
} |
r1eyceSYPr | Unbiased Contrastive Divergence Algorithm for Training Energy-Based Latent Variable Models | [
"Yixuan Qiu",
"Lingsong Zhang",
"Xiao Wang"
] | The contrastive divergence algorithm is a popular approach to training energy-based latent variable models, which has been widely used in many machine learning models such as restricted Boltzmann machines and deep belief nets. Despite its empirical success, the contrastive divergence algorithm is also known to have biases that severely affect its convergence. In this article we propose an unbiased version of the contrastive divergence algorithm that completely removes its bias in stochastic gradient methods, based on recent advances in unbiased Markov chain Monte Carlo methods. Rigorous theoretical analysis is developed to justify the proposed algorithm, and numerical experiments show that it significantly improves the existing method. Our findings suggest that the unbiased contrastive divergence algorithm is a promising approach to training general energy-based latent variable models. | [
"energy model",
"restricted Boltzmann machine",
"contrastive divergence",
"unbiased Markov chain Monte Carlo",
"distribution coupling"
] | Accept (Spotlight) | https://openreview.net/pdf?id=r1eyceSYPr | https://openreview.net/forum?id=r1eyceSYPr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"vXGMAZ6_Ivy",
"tp2NUmCFeb",
"IFE3BkN3Uy",
"z8c843-G0",
"eRRMUr5h0W",
"oSCSp4iHQH",
"5WBMTct6Ho",
"Syx7qLS3sB",
"SJx1krBhir",
"BygfyEHhiS",
"BkxfqjaaFH",
"rkgmSvqotB",
"r1xivrLjYS"
],
"note_type": [
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1588286100072,
1582140565742,
1581871044029,
1581171949372,
1580509397803,
1578460937541,
1576798749534,
1573832330817,
1573831894557,
1573831642295,
1571834761924,
1571690298579,
1571673442899
],
"note_signatures": [
[
"~Jianwen_Xie1"
],
[
"~Mayank_Kakodkar1"
],
[
"ICLR.cc/2020/Conference/Paper2455/Authors"
],
[
"~Mayank_Kakodkar1"
],
[
"ICLR.cc/2020/Conference/Paper2455/Authors"
],
[
"~Jianwen_Xie1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2455/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2455/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2455/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2455/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thanks\", \"comment\": \"Thank you so much !\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thank you so much!\"}",
"{\"title\": \"A relevant and interesting paper\", \"comment\": \"Thank you, Mayank. This is a very relevant and interesting paper, and I wish I could see it earlier. I will cite and add some discussions in a revision later.\"}",
"{\"title\": \"Great Work!\", \"comment\": \"I really enjoyed reading your paper!\\n\\nCould you please consider citing [1].\\n[1] proposed a method called Markov Chain Las Vegas(MCLV), which computes unbiased estimates of the RBM gradient.\\nMCLV uses random walk tours that begin and end at a 'supernode', which is an aggregation of high probability states called constructed using the training data. [1] also proposed a biased version of this estimator called MCLV-K which considers tours of lengths upto K and showed both theoretically and empirically that since the tour length distribution has a geometrically decaying tail, the bias is minimal.\\n\\n[1] Pedro Savarese, (Mayank Kakodar), Bruno Ribeiro, From Monte Carlo to Las Vegas: Improving Restricted Boltzmann Machine Training through Stopping Sets, AAAI, 2018 [arXiv:1711.08442]\"}",
"{\"title\": \"References added\", \"comment\": \"Thank you Jianwen. It took me a while to read and absorb these papers, and now they have been added.\"}",
"{\"title\": \"related work about EBM using neural nets\", \"comment\": \"Excellent work !\\n\\nMight have been nice to cite and discuss some prior work about EBMs using neural nets as energy functions, such as\\n\\n[1] A Theory of Generative ConvNet. \\nJianwen Xie *, Yang Lu *, Song-Chun Zhu, Ying Nian Wu (ICML 2016)\\n\\n[2] Synthesizing Dynamic Pattern by Spatial-Temporal Generative ConvNet\\nJianwen Xie, Song-Chun Zhu, Ying Nian Wu (CVPR 2017)\\n\\n[3] Learning Descriptor Networks for 3D Shape Synthesis and Analysis\\nJianwen Xie *, Zilong Zheng *, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu (CVPR) 2018 \\n\\n[4] Learning generative ConvNets via multigrid modeling and sampling. \\nR Gao*, Y Lu*, J Zhou, SC Zhu, and YN Wu (CVPR 2018). \\n\\n[5] On learning non-convergent non-persistent short-run MCMC toward energy-based model. \\nE Nijkamp, M Hill, SC Zhu, and YN Wu (NeurIPS 2019)\\n\\nThanks.\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"Main content:\\n\\nBlind review #1 summarizes it well:\\n\\nThe paper proposes an algorithmic improvement that significantly simplifies training of energy-based models, such as the Restricted Boltzmann Machine. The key issue in training such models is computing the gradient of the log partition function, which can be framed as computing the expected value of f(x) = dE(x; theta) / d theta over the model distribution p(x). The canonical algorithm for this problem is Contrastive Divergence which approximates x ~ p(x) with k steps of Gibbs sampling, resulting in biased gradients. In this paper, the authors apply the recently introduced unbiased MCMC framework of Jacob et al. to completely remove the bias. The key idea is to (1) rewrite the expectation as a limit of a telescopic sum: E f(x_0) + \\\\sum_t E f(x_t) - E f(x_{t-1}); (2) run two coupled MCMC chains, one for the \\u201cpositive\\u201d part of the telescopic sum and one for the \\u201cnegative\\u201d part until they converge. After convergence, all remaining terms of the sum are zero and we can stop iterating. However, the number of time steps until convergence is now random.\", \"other_contributions_of_the_paper_are\": \"1. Proof that Bernoulli RBMs and other models satisfying certain conditions have finite expected number of steps and finite variance of the unbiased gradient estimator.\\n2. A shared random variables method for the coupled Gibbs chains that should result in faster convergence of the chains.\\n3. Verification of the proposed method on two synthetic datasets and a subset of MNIST, demonstrating more stable training compared to contrastive divergence and persistent contrastive divergence.\\n\\n--\", \"discussion\": \"The main objection in reviews was to have meaningful empirical validation of the strong theoretical aspect of the paper, which the authors did during the rebuttal period to the satisfaction of reviewers.\\n\\n--\", \"recommendation_and_justification\": \"As review #1 said, \\\"I am very excited about this paper and strongly support its acceptance, since the proposed method should revitalize research in energy-based models.\\\"\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Increased size of data set and model\", \"comment\": \"Thanks for the constructive comments on numerical experiments, and we have adopted the suggestion to compute on a large data set (the full Fashion-MNIST) with a large model (1000 hidden units). We have also included many other discussions such as the variance of UCD and the computational cost. The detailed responses are as follows.\\n\\n>>> Q1) In [Tieleman2008] log-likelihood values for the full MNIST dataset using a) a small model (25 hidden units) where the likelihood is computed exactly and b) an bigger model (500 hidden units) where the likelihood is approximated. On the full MNIST dataset they train using PCD, CD-1, CD-10 and report approximately Log-Likelihoods of -130 and -85 for the small and large models respectively. My questions are:\\n>>> Q1.1) In figure 4 you report approximate log-likelihood values on MNIST (only digits zero) of -150 for the different samplers using an RBM with100 hidden units. That seems to be lower performance than the models in [Tieleman2008] while training on a presumably easier dataset?\\n\\nIn general the likelihood values are not comparable with different data sets. In the updated version we have trained a much larger model (1000 hidden units) with the full Fashion-MNIST data. We hope the new experiment is more convincing.\\n\\n>>> Q1.2) In figure 4. Can you comment a bit on the variance of your method which seems to be higher, Is there a Bias/Variance trade-off between UCD and e.g PCD?\\n\\nYes, in the updated version we discuss the variance of UCD in Appendix C.\\n\\n>>> Q1.3) [Tieleman2008] Reports training times of 1 to 9 Hours for training in on the full MNIST dataset in 2008 and [Hinton 2006] trained large RBMs in 2006. Why is that setting then computationally time-consuming today in your setup - Is there some difference in the setup that I'm missing? \\n\\nWhat we meant about \\\"time-consuming\\\" is the following: we found that the MNIST data set was quite \\\"benign\\\", or \\\"robust\\\", in the sense that even biased algorithms such as CD can train a reasonably good model. Therefore, it may take a very long time to actually observe the divergence of CD (recall that even in the small BAS data, it takes thousands of iterations). But on the Fashion-MNIST data that we use in the updated manuscript, it is easy to see the differences of CD, PCD, and UCD, even with a small number of iterations.\\n\\n>>> Q1.4) I highly value enlightening small scale experiments and do understand that computational resources are not available everywhere however I think it would benefit the paper greatly if the proposed method is demonstrated on some reasonably sized dataset (at the very least one of full MNIST, Fashion MNIST, FreyFaces).\\n\\nThanks for the recommendation. We have updated our experiment based on the full Fashion-MNIST data.\\n\\n>>> Q1.5) In Figure 2 you show some interesting figures for the average stopping time and number of rejected samples on the BAS toy dataset. How does these results look on a real dataset like the MNIST zero digit data?\\n\\nIn fact we have included such results in Appendix B. In the new manuscript the plot for Fashion-MNIST is in Figure 11.\"}",
"{\"title\": \"Improved numerical experiments\", \"comment\": \"Thanks for the suggestions. We have added more comparisons in the new version.\\n\\n>>> The only question I have is why CD has only been tried with k=1 steps? \\nI would be interested in its performance for different number of steps including the dynamically chosen number provided by the empirical \\\\tau in UCD for a given iteration.\\nEven though I do not expect a significant improvement to be obtained, this would separate the effect of the number of steps chosen \\u201cright\\u201d from unbiasedness of the gradient estimator.\\nOther baselines, including those mentioned in the related work, could also make the comparison more complete.\\n\\nWe have significantly improved the numerical experiments with larger models, and have included CD-k algorithms with larger k in Appendix B.\\n\\n>>> I would also suggest including https://arxiv.org/abs/1905.04062, as it seems to be relevant in the spirit.\\n\\nWe have added this article to our reference.\"}",
"{\"title\": \"Improved experiments and various fixes\", \"comment\": \"Thanks for the helpful comments and corrections. We have included many new numerical results in the updated manuscript, and below are our line-to-line responses.\\n\\n>>> 1. I don\\u2019t think Corollary 1 (convergence of gradient descent to the global optimum) is true for RBMs, as stated on Page 6. This is because the log-likelihood of RBM, or indeed any latent-variable model with permutation-invariant latents, is non-convex. I would suggest removing this corollary and simplifying Algorithm 2 to be regular SGD, as used in the experiments.\\n\\nFor Theorem 1 and Algorithm 2 we do not assume a specific model for $p(v,h)$, and Corollary 1 is mainly used to demonstrate a typical convergence result for SGD. Indeed it does not apply to RBM as the objective function is not convex, so on top of page 6 we have mentioned that there are other versions of the theorem, for different types of objective functions. We have made this clearer in the updated manuscript.\\n\\n>>> 2. There is no experimental comparison of Algorithm 1 (the general version) and Algorithm 3 (the specialized RBM version). It seems intuitive that the specialized version should have lower computation time, but this must be confirmed.\\n\\nThanks for pointing out. We have added this comparison in Appendix A.2.\\n\\n>>> 3. The experimental section may be significantly improved.\\n>>> * It is unclear what value of k (number of initial Gibbs steps) from Algorithm 2 is used.\\n\\nIn all our experiments we set k=1. We have added this point to the text.\\n\\n>>> * The experiments on just the \\u201c0\\u201d digits of MNIST seem a bit simplistic for the year 2019. It is also not clear what binarization protocol is used.\", \"we_have_significantly_increased_the_size_of_the_experiment\": \"a full Fashion-MNIST data set with n=1000 hidden units. We treat data values as probabilities and globally binarize all the data points by sampling from Bernoulli distributions. The binarized data are then passed to the models.\\n\\n>>> * It would be very helpful to provide estimates of the gradient (not log-likelihood) variance of each method to better understand the trade-off between the bias and the variance.\\n\\nWe have included such an analysis in Appendix C.\\n\\n>>> * I would also like to see the wall-clock time comparison of the methods.\\n\\nWe have included the timing comparisons in Appendix B.\\n\\n>>> Minor comments\\n>>> * Page 1. Of this kind -> of this class. The data distribution p_v (v; theta) -> The model distribution\\n\\nCorrected.\\n\\n>>> * Page 2. Property -> properties. CD-\\\\tau -- I don\\u2019t think you can correctly refer to your method in this way, since it has at least double the computation time of CD for the same number of iterations.\\n\\nWe have removed this notation and changed the wording.\\n\\n>>> * Page 3. Provides -> provide. Likelihood gradient -> log-likelihood gradient\\n\\nCorrected.\\n\\n>>> * Algorithm 1 is an infinite loop with no break clause. It would be good to add a break statement after line 5. This would also simplify the discussion of the method.\\n\\nWe have added a maximum stopping time to Algorithm 1.\\n\\n>>> * Page 7. I wouldn\\u2019t call the fact that CD doesn\\u2019t converge on the BAS dataset remarkable, given that it\\u2019s been reported by Fischer & Igel 2014.\\n\\nFixed and rewritten.\\n\\n>>> * Page 9. The last paragraph stating that the proposed method is not a replacement for CD is confusing. 
Can you add a short experiment to demonstrate that this combination makes sense?\\n\\nYes, we have added an example in Appendix B.2 and Figure 9.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Based on recent progress in unbiased MCMC sampling the paper proposes an unbiased contrastive divergence (UCD) algorithm for training energy based models. Specifically they developed an unbiased version of the gibbs sampling contrastive divergence algorithm for training restricted Boltzman machines. The authors demonstrate their method on a toy dataset, simulated data, as well as a reduced version (only the zero digits) of the MNIST dataset and compare the results with the standard Contrastive divergence and Persistent Contrastive Divergence methods.\", \"score\": \"I find the line of work on unbiased estimators important and the (although i\\u2019m not an expert) the theory in the paper seems sound. Further the paper is well written and relatively easy to follow. However I do not find the experimental section completely comprehensive and some of the results seem to achieve worse performance than what is reported in the litterature for both the proposed method and baselines (see detailed questions below). Overall I currently score the paper as a weak reject although I can be convinced to bump the score depending on the author feedback.\", \"detailed_questions\": \"\", \"experimental_results\": \"Q1) In [Tieleman2008] log-likelihood values for the full MNIST dataset using a) a small model (25 hidden units) where the likelihood is computed exactly and b) an bigger model (500 hidden units) where the likelihood is approximated. On the full MNIST dataset they train using PCD, CD-1, CD-10 and report approximately Log-Likelihoods of -130 and -85 for the small and large models respectively. My questions are:\\nQ1.1) In figure 4 you report approximate log-likelihood values on MNIST (only digits zero) of -150 for the different samplers using an RBM with100 hidden units. That seems to be lower performance than the models in [Tieleman2008] while training on a presumably easier dataset?\\n\\nQ1.2) In figure 4. Can you comment a bit on the variance of your method which seems to be higher, Is there a Bias/Variance trade-off between UCD and e.g PCD?\\n\\nQ1.3) [Tieleman2008] Reports training times of 1 to 9 Hours for training in on the full MNIST dataset in 2008 and [Hinton 2006] trained large RBMs in 2006. Why is that setting then computationally time-consuming today in your setup - Is there some difference in the setup that I'm missing? \\n\\nQ1.4) I highly value enlightening small scale experiments and do understand that computational resources are not available everywhere however I think it would benefit the paper greatly if the proposed method is demonstrated on some reasonably sized dataset (at the very least one of full MNIST, Fashion MNIST, FreyFaces).\\n\\nQ1.5) In Figure 2 you show some interesting figures for the average stopping time and number of rejected samples on the BAS toy dataset. How does these results look on a real dataset like the MNIST zero digit data?\\n\\n[Tieleman 2008], Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient,\\n[Hinton 2006] Reducing the Dimensionality of Data with Neural Networks\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces an efficient, unbiased contrastive divergence-like algorithm for training energy-based generative models on the example of Restricted Boltzmann Machine.\\nThe proposed algorithm is built upon a very interesting work on unbiased finite-step MCMC approximations by Jacob et al, 2017.\\nDespite the actual theory being published some time ago, the submitted paper popularises these ideas in the machine learning community and contains optimised variants of the existing algorithms for training of RBMs.\\n\\nThe paper is mostly written well and does a good job of introducing unbiased MCMC estimators.\\nAuthors evaluate their method on rather toyish datasets (by modern standards), however, their empirical analysis is thorough. The improvement upon the standard CD and persistent CD is clear.\\nIt also appears that the algorithm actually does not require too many steps and generally does not introduce a lot of computational overhead. \\nThe only question I have is why CD has only been tried with k=1 steps? \\nI would be interested in its performance for different number of steps including the dynamically chosen number provided by the empirical \\\\tau in UCD for a given iteration.\\nEven though I do not expect a significant improvement to be obtained, this would separate the effect of the number of steps chosen \\u201cright\\u201d from unbiasedness of the gradient estimator.\\nOther baselines, including those mentioned in the related work, could also make the comparison more complete.\", \"i_would_also_suggest_including_https\": \"//arxiv.org/abs/1905.04062, as it seems to be relevant in the spirit.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an algorithmic improvement that significantly simplifies training of energy-based models, such as the Restricted Boltzmann Machine. The key issue in training such models is computing the gradient of the log partition function, which can be framed as computing the expected value of f(x) = dE(x; theta) / d theta over the model distribution p(x). The canonical algorithm for this problem is Contrastive Divergence which approximates x ~ p(x) with k steps of Gibbs sampling, resulting in biased gradients. In this paper, the authors apply the recently introduced unbiased MCMC framework of Jacob et al. to completely remove the bias. The key idea is to (1) rewrite the expectation as a limit of a telescopic sum: E f(x_0) + \\\\sum_t E f(x_t) - E f(x_{t-1}); (2) run two coupled MCMC chains, one for the \\u201cpositive\\u201d part of the telescopic sum and one for the \\u201cnegative\\u201d part until they converge. After convergence, all remaining terms of the sum are zero and we can stop iterating. However, the number of time steps until convergence is now random.\", \"other_contributions_of_the_paper_are\": \"1. Proof that Bernoulli RBMs and other models satisfying certain conditions have finite expected number of steps and finite variance of the unbiased gradient estimator.\\n2. A shared random variables method for the coupled Gibbs chains that should result in faster convergence of the chains.\\n3. Verification of the proposed method on two synthetic datasets and a subset of MNIST, demonstrating more stable training compared to contrastive divergence and persistent contrastive divergence.\\n\\nI am very excited about this paper and strongly support its acceptance, since the proposed method should revitalize research in energy-based models. While I find the experiments to be somewhat lacking, this is sufficiently offset by the theoretical contributions of the paper.\\n\\nPros\\n1. The paper reads well and introduces all the necessary preliminaries to understand the method. This is important, since I expect many readers to be unfamiliar with the technique.\\n2. The proposed method solves an important problem which, as far as I understand, has been the roadblock in large-scale training of RBMs and related models. It is also elegant and fairly straightforward to implement.\\n3. The proof of finite computation time and variance is very nice to have. This is because in some cases removing the bias leads to infinite variance, e.g. a parallel submission on SUMO (https://openreview.net/forum?id=SylkYeHtwr).\\n\\nCons\\n1. I don\\u2019t think Corollary 1 (convergence of gradient descent to the global optimum) is true for RBMs, as stated on Page 6. This is because the log-likelihood of RBM, or indeed any latent-variable model with permutation-invariant latents, is non-convex. I would suggest removing this corollary and simplifying Algorithm 2 to be regular SGD, as used in the experiments.\\n2. There is no experimental comparison of Algorithm 1 (the general version) and Algorithm 3 (the specialized RBM version). It seems intuitive that the specialized version should have lower computation time, but this must be confirmed.\\n3. 
The experimental section may be significantly improved.\\n* It is unclear what value of k (number of initial Gibbs steps) from Algorithm 2 is used.\\n* The experiments on just the \\u201c0\\u201d digits of MNIST seem a bit simplistic for the year 2019. It is also not clear what binarization protocol is used.\\n* It would be very helpful to provide estimates of the gradient (not log-likelihood) variance of each method to better understand the trade-off between the bias and the variance.\\n* I would also like to see the wall-clock time comparison of the methods.\\n\\nMinor comments\\n* Page 1. Of this kind -> of this class. The data distribution p_v (v; theta) -> The model distribution\\n* Page 2. Property -> properties. CD-\\\\tau -- I don\\u2019t think you can correctly refer to your method in this way, since it has at least double the computation time of CD for the same number of iterations.\\n* Page 3. Provides -> provide. Likelihood gradient -> log-likelihood gradient\\n* Algorithm 1 is an infinite loop with no break clause. It would be good to add a break statement after line 5. This would also simplify the discussion of the method.\\n* Page 7. I wouldn\\u2019t call the fact that CD doesn\\u2019t converge on the BAS dataset remarkable, given that it\\u2019s been reported by Fischer & Igel 2014.\\n* Page 9. The last paragraph stating that the proposed method is not a replacement for CD is confusing. Can you add a short experiment to demonstrate that this combination makes sense?\"}"
]
} |
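The record above repeatedly walks through the paper's core construction — rewrite the intractable expectation as a telescoping sum and run two coupled chains until they meet — so a compact sketch may help. This is a hedged illustration of the generic Jacob et al.-style estimator only, assuming user-supplied `init`, `step`, and `coupled_step` kernels; it is not the authors' RBM-specialized Algorithm 3, and all names are assumptions made for the sketch.

```python
# Sketch of the unbiased estimator of E_p[f(X)] described in the reviews:
#   H_k = f(X_k) + sum_{t=k+1}^{tau-1} [ f(X_t) - f(Y_{t-1}) ],
# where the two chains are coupled so that X_tau = Y_{tau-1} at a random
# meeting time tau and stay equal afterwards (making later terms zero).
import numpy as np

def unbiased_estimate(f, init, step, coupled_step, k=1, max_iter=10_000,
                      rng=None):
    rng = rng or np.random.default_rng()
    y = init(rng)             # Y_0
    x = step(init(rng), rng)  # X_1: the x-chain runs one step ahead of y
    for _ in range(1, k):     # advance jointly to (X_k, Y_{k-1})
        x, y = coupled_step(x, y, rng)
    h = f(x)                  # the usual (biased) k-step CD-style term f(X_k)
    t = k
    while not np.array_equal(x, y) and t < max_iter:
        x, y = coupled_step(x, y, rng)  # move to (X_{t+1}, Y_t)
        t += 1
        h = h + f(x) - f(y)   # bias-correction term f(X_t) - f(Y_{t-1})
    return h, t               # t equals the meeting time tau once chains merge
```

For a Bernoulli RBM, `coupled_step` would plausibly be one Gibbs sweep over the hidden and visible units in which both chains reuse the same uniform random numbers — the shared-randomness coupling the reviews credit with fast meeting — and `f` would be the parameter gradient of the energy; the `max_iter` cap is only a safety valve, since the paper proves the meeting time has finite expectation for models satisfying its conditions.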
HJlRFlHFPS | Unsupervised Distillation of Syntactic Information from Contextualized Word Representations | [
"Shauli Ravfogel",
"Yanai Elazar",
"Jacob Goldberger",
"Yoav Goldberg"
] | Contextualized word representations, such as ELMo and BERT, were shown to perform well on various semantic and structural (syntactic) tasks. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors that discards the lexical semantics but keeps the structural information. To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use a metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors. We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics. Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in a few-shot parsing setting. | [
"dismantlement",
"contextualized word representations",
"language models",
"representation learning"
] | Reject | https://openreview.net/pdf?id=HJlRFlHFPS | https://openreview.net/forum?id=HJlRFlHFPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"IWMijNANQw",
"SJgn9R5hsS",
"SyekktO3jB",
"SJgOLrZhiB",
"rJg-gS-2iS",
"H1lgdg3siH",
"HyxJmJuUjB",
"S1xa5Zz7sH",
"rJg2I-fQjr",
"ryx6n0b7iH",
"rygVr0-7oS",
"BJesp3WXsB",
"SJeeN3WmoS",
"Byey-Ljz5H",
"SklmVR-fqH",
"SygRa5LTKS",
"rJxPTT-IYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749507,
1573854867525,
1573845206700,
1573815631591,
1573815528554,
1573793895945,
1573449495017,
1573228948637,
1573228884220,
1573228212612,
1573228092146,
1573227715046,
1573227560056,
1572152822583,
1572113963167,
1571805893695,
1571327423278
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2454/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper aims to disentangle semantics and syntax inside of popular contextualized word embedding models. They use the model to generate sentences which are structurally similar but semantically different.\\n\\nThis paper generated a lot of discussion. The reviewers do like the method for generating structurally similar sentences, and the triplet loss. They felt the evaluation methods were clever. However, one reviewer raised several issues. First, they thought the idea of syntax had not been well defined. They also thought the evaluation did not support the claims. The reviewer also argued very hard for the need to compare performance to SOTA models. The authors argued that beating SOTA is not the goal of their work, rather it is to understand what SOTA models are doing. The reviewers also argue that nearest neighbors is not a good method for evaluating the syntactic information in the representations. \\n\\nI hope all of the comments of the reviewers will help improve the paper as it is revised for a future submission.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reply to response\", \"comment\": \"Thanks for taking the time to carefully re-read our paper. We regret that you still do not \\\"get\\\" what we were trying to achieve in this work. Most notably, we were *not* aiming at beating any other system. That is simply not the intention of the works. Our intention was to distill the structural representation encoded in contextualized vectors, in an unsupervised manner. That is, to produce a representation that captures as much as possible of the structure of the sentence and as little as possible of its lexical semantics.\\n\\nOur experiments are intended to measure how well this goal (to preserve structural properties) was achieved. As no other work that we are aware of tackled this goal before, the claims of \\\"unfair comparison\\\" (points 3a,b,c) seem irrelevant.\", \"responses_to_specific_comments\": \"2) as we are not trying to beat anyone, we also see no reason to explore hyper-parameters. We chose an initial setup (indeed, heuristically) that worked fine for our purposes, and left it at that.\\n3a) while we do not see it as a baseline, we did agree that the delexicalized setup is interesting. As we wrote in our response, we will perform these experiments for the final version of the paper (we didn't have time to properly do this during the response period, as the author responsible for the parsing experiments was travelling. But this will happen for camera ready).\\n3b) we are not trying to beat ELMo (or BERT), we are trying to figure out what is captured by them.\\n3c) The 1M wikipedia sentences are not weak supervision, they are used for evaluation: we perform clustering and search for nearest neighbours in over this space. While this setup is not standard in the literature, our goal is also non standard. We are not trying to \\\"win\\\" a parsing context, but to distill syntactic knowledge from a pre-trained LM. \\n3c.2) We did have 150,000 sentences that were POS-tagged by spaCy and used for deriving the training instances. As mentioned in our response to other reviewers, after submission we also experimented with a version that did not use the POS-tag information, getting similar results.\"}",
"{\"title\": \"Reply to response\", \"comment\": \"Thank you for responding. After a second *very close* reading of the updated paper and the authors' reply, I maintain that this paper is far from ICLR standard and will keep my score at 1.\\n\\n1. The authors propose to distill syntactic knowledge from the contextualized representation. However, the authors do not formalize the notion of \\\"syntactic knowledge,\\\" and it is unclear what exactly they hope to disentangle from these representations.\\n2. The authors approached this task by generating \\\"syntactically similar\\\" (also vaguely defined) sentence pairs using heuristics on BERT representations (Section 3.1). Given that this process is heuristic-drive, one would expect an analysis of the hyperparameters selected to achieve the goal. However, the authors performed no such analysis, and the hyperparameters of this generation process (k=6, top-30, ...) appear to be selected at random.\\n3. The authors fail to convince me that their method has any practical utility (Section 4.3):\\n a. Lack of a standard baseline. The authors do not address my concerns regarding the delexicalized parser baseline in the updated paper.\\n b. Unfair comparisons. In Figure 4, the author's proposed method \\\"Syntax\\\" uses BERT during training (for generating sentence pairs). It is unfair that their baselines only have access to ELMo embeddings. A standard fine-tuned BERT should be the minimum comparison.\\n c. Lack of standard datasets + automatically generated golden labels. The authors did not perform experiments on standard parsing datasets, and they fail to describe their parsing corpus in detail. According to Section 4 [Corpus], they used off-the-shelf spaCy parser to generate golden labels for 1M Wikipedia sentences. This practice is non-existent in literature. If weak-supervision (spaCy's output) is taken as golden labels, the authors should at least provide detailed statistics on their data, as well human-evaluation of the quality of those labels.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the meaningful comments!\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your constructive review and appreciation of our work!\"}",
"{\"title\": \"AnonReviewer3 Response\", \"comment\": \"I am satisfied with the responses to my review and the others. I am raising my rating to 8: Accept.\"}",
"{\"title\": \"Thank you for your detailed response\", \"comment\": \"I appreciate the detailed response to my review. I think the revised paper is improved in terms of background and motivation, and more complete experiments. It's also good to see the quantitative clustering purity results.\\nWhile I don't think getting very high parsing results is a must for this work, I agree with reviewer 1 that comparing with a POS-based baseline is in order. \\nI hope to see the paper accepted and at this point will keep my current evaluation.\"}",
"{\"title\": \"Response to \\\"Main Comments\\\"\", \"comment\": \"# Responses to \\u201cMain comments\\u201d:\\n\\n1. Thank you for pointing this out, we strengthened the introduction on the motivation. As we see it, disentanglement is interesting for several reasons. From a purely scientific view, once disentanglement is achieved, one can better control for confounding factors and analyze the knowledge the model acquires, e.g. attributing the predictions of the model to one factor of variation while controlling for the other. In addition to explaining model predictions, such disentanglement can be useful for the comparison of the representations the model acquires to linguistic knowledge, e.g. by showing the model is or is not able to learn specific linguistic abstractions, or by contrasting the way certain phenomena are represented in the model with their representation in syntactic schemes defined by linguists.The latter option can be especially illuminating, as the right way to describe certain syntactic phenomena is still in dispute among linguistics.\\n\\nFrom a more practical perspective, disentanglement can be a first step toward controlled generation/paraphrasing that considers only aspects of the structure, akin to the style-transfer works in computer vision. For example, one can imagine creating variants of a sentence that ignore syntax while preserving semantics, or that mimic the syntactic structures favoured by an authors. But we leave this to future research.\\n\\n2. Thanks for pointing out to this, we tried to make the argument clearer this time. \\n\\n3. We have considered alternatives, such as element-wise absolute value of the difference, element-wise multiplication, and an average. We did not observe qualitative differences between the different ways to represent pairs, and chose the difference because its simplicity and because of the known literature on arithmetic of word vectors, to which you have referred. \\n\\n4. We experimented also with the last BERT layer alone, which performed worse. We added the full BERT results on the closest-word evaluation in the appendix. We agree that techniques such as a learned weighted average would probably yield better results and outperform ELMO. We did not try such methods due to time constraints. However, as we see the main contribution of the paper in the proposed method and in the demonstration of a proof-of-concept for unsupervised distillation of syntax, we think that the exact scores we get on, e.g., the parsing tasks, are relatively less important. \\n\\n5. We expanded the existing background material into a separate related work section, in order to better situate this research with respect to previous works. \\n\\n6. We added the exact numbers in the parsing experiments in an appendix. We agree that the differences are relatively small. However, with enough data, we cannot expect our representation to outperform ELMO, as our extraction process does not change the ELMO encoder but only transforms its output into a lower dimension, potentially discarding information. With enough direct supervision, the parser can probably directly extract the relevant information from ELMO vectors. We see the relatively significant LAS differences in very low data regime (50-100) as support for this hypothesis. We are not sure why the differences in the unlabeled setting are significantly smaller. \\n\\n7. Thanks for pointing this out. We focus on lexical semantics, and we make it more explicit in the text.\"}",
"{\"title\": \"Response to \\\"Other Comments\\\"\", \"comment\": \"# Responses to \\u201cOther comments\\u201d:\\n\\n1. We agree that for many purposes is it beneficial not to discard the semantic information altogether. Ideally, we would want to allow \\u201ctuning\\u201d the representations on the semantics-syntax axis, changing the saliency of one factor or another according to the task at hand. However, as a first-order approximation, in this work we did aim to discard lexical semantics. The triplet objective does not explicitly do that, although insofar as the the \\u201cequivalent\\u201d sentences are indeed lexically diverse, we think that the hard-negatives mining should discard lexical semantics (as if vectors are embedded in space according to lexical information, the hard negatives would be semantically similar to the anchor vector, increasing the loss). But indeed, in practice we do not succeed in discarding all semantic information.\\n\\n2. You are right, and we added a short discussion to the paper. We indeed observe that our method tends to conflate local syntactic distinctions like adjective-noun differences in environments where both can occur, in favor of more global distinctions (in this case, noun modifiers vs. other functions).\\n \\n3. The dimensionality of the transformed vector was chosen according to development set performance. As for the value of K, it is not very important, as in practice we sample only 10 pairs from each group of sentences. This information was indeed missing in the paper. We added a clarification. \\n\\n4. This is an important point, which we will make more salient in the text: we did not finetune the contextualized representations. We limited ourselves to extracting information that is already encoded in the vectors. This is consistent with our focus on disentangling existing representations rather than merely creating strong syntactic models.\\n\\n5. The current description in the paper is indeed unclear. We will rephrase those parts. The number of evaluation sentences in section 2.2 refers to the training of the model with the triplet loss: we did not use them for the closest-word query. The 1 million sentences that are mentioned in section 3 are another (different) section of wikipedia, which we used for the closest-word test. From this set we sampled 400,000 sentences as mentioned in 3.2, and evaluated the percentage of (query, value) pair that share the different properties. \\n\\n6. Thanks for the suggestion, we now added the purity measure to the section that presents the tsne results.\\n\\n7. We added this comparison. \\n\\n8. We added the closest-word results for BERT in an appendix.\\n\\n9. Thanks for the references! We will definitely expand the discussion on related work. We will look further into the style transfer literature. If you are aware of references that are especially relevant, we would appreciate it if you share them.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for you comments!\\n\\nWe are only aware of the work [1] which demonstrated the existence of a semantics-syntax tradeoff in word vectors that capture different orders of similarity. They projected pre-trained word vectors to different orders by a parameter-free transformation which derives from the similarity matrix, and measured the performance on semantic and syntactic tasks. We now mention this work in the revised paper. We would appreciate pointers to other works in this direction, if any of the reviewers are aware of them.\\n\\nThere are additional works which learn from scratch word embeddings that are tailored for syntax, some of them are mentioned by reviewer 2. These works are somewhat less relevant for the current study, as we aim to extract existing information from contextualized representations and make it more salient, rather than learning from scratch representations that capture syntax. Yet, we now note them in the new related work section.\\n\\n[1] Artetxe, Mikel, et al. \\\"Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation.\\\" arXiv preprint arXiv:1809.02094 (2018). APA\"}",
"{\"title\": \"Response\", \"comment\": \"We appreciate your constructive and detailed review!\\n\\nYour are right in noting that subtle differences in the surface level can mask substantial differences in the deep argument structure of the sentence, and that verbs are particularly sensitive in this respect. This is a limitation of the current approach, which we now acknowledge in the paper. Thanks for pointing this out. Indeed, in general we can expect the replacement process to yield a grammatical sentence with equivalent structures only to the extent that BERT implicitly encodes the grammatical restrictions that apply to the masked word (i.e., we can only capture raising vs control distinction to the extent BERT-like LM can captures them). While BERT is a powerful LM -- and that is the reason we used it rather than simple POS-based replacement -- it may at times violates some of those restrictions. As you point out, this reasoning behind the substitution process and the premises we made were not clearly stated in the paper, and made it clearer in the revisioned version. However, we note that the average sentences we generate seem grammatical, and do not diverge much from the structure of the original sentence; we therefore think this method does at least approximate our end goal of generating grammatical sentences of the same structure. \\nMoreover, we remind that our method attempts to uncover the structural information that is encoded in the neural LMs. Thus, we find it reasonable to not capture structural distinctions that are not reflected in current state-of-the-art neural LMs. \\n\\nThank you for pointing out to the works on vector-space arithmetic. This was our motivation for representing pairs as the difference between the corresponding word vectors, and we will explicitly mention that in the paper.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comments.\\n\\n>>in the method, the authors construct a dataset where each group of the sentence share similar syntactic structures (having the same POS tag). It seems there that the structural information just means POS tags.\\n\\nOur method definitely captures more than the POS tag of the words. For example, it clearly differentiates nouns in subject position from nouns in object position from nouns in relative clauses, and also differentiates nouns in passive constructions from those in active voice. This holds also for multiple usages of the same word, e.g the word \\\"dog\\\" will get different neighbours in \\\"the dog barked\\\" and \\\"he heard the dog bark\\\", even though both are nouns. Similarly for verbs, adjectives, and other classes.\\n\\nAs for the use of POS tags in the process of sentence generation, we agree this injects some syntactic bias to the model. Following the submission, we have performed experiments on datasets that are constructed without this using POS information, and got similar results, suggesting that the POS information is not necessary for our method. We need to work more to get detailed and robust results we can report, but we are in the process of doing so and will include these results in the camera ready version. \\n\\n>>what do you mean by structural information without a clear definition?\\n\\nBy \\u201cStructural information\\u201d we refer to properties that linguists would identify as \\u201csyntax\\u201d. We did not want to pre-specify the characteristics of this structural representation (e.g. by identifying it with a certain type of dependency or constituency representation scheme), as many competing frameworks exist, and the \\u201cright\\u201d ways to represent different phenomena are still in dispute among linguists. As discussed in the paper, defining the problem in an implicit manner (structure A is similar to B and C is similar to D, without specifying what constitutes this similarity) allows us to not rely on any specific annotation scheme, but rather extract the structural representations in an unsupervised way. However, in evaluation, we do compare our representations with linguistic notions of syntax, and show that to a large degree our representations capture those properties, although they were not trained to explicitly achieve this objective.\\n \\n>>In Figure 4, the authors should compare with delexicalized dependency parsing, which performs pretty well in los-resource setting.\\n\\nThank you for the suggestion. In this experiment, we have tried to cautionaly use controls: performing PCA on the ELMO vectors, and projecting them to a lower dimension using a learned linear transformation. The rationale behind these controls is to encourage the parser to discard irrelevant information from the ELMO vectors. We agree that a comparison to a POS-based parser is of interest, and can help test the claim we capture information beyond the POS level. We plan to perform this experiment for the camera ready version.\"}",
"{\"title\": \"General response.\", \"comment\": \"We thank all reviewers for their insightful comments. We updated the paper to account for some of them. Other comments will require more time to properly address, but we are working towards that as well.\\n\\nWe address individual reviewers comments in the responses to their reviews.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary\\nThe authors proposed to disentangle syntactic information and semantic information from pre-trained contextualized word representations.\\n\\nThey use BERT to generate groups of sentences that are structurally similar (have the same POS tag for each word) but semantically different. Then they use a metric-learning approach to learn a linear transformation that encourages sentences from the same group to have closer distance. Specifically, they defined a triplet loss (Eq4) and uses negative sampling.\\n\\nThey use 150,000 sentences from Wikipedia to train the transformation. POS tags are obtained from spaCy. To evaluate the learned representations, they provided a tSNE visualization of the original and transformed representations (groups by dependency label); evaluate whether the nearest neighbor shares the same syntactic role; low-resource parsing.\", \"reasons_of_rejection\": \"1. I don't agree with the authors' argument, \\\"we aim to extract the structural information encoded in the network in an unsupervised manner, without pre-supposing an existing syntactic annotation scheme\\\". First, what do you mean by structural information without a clear definition? Also, in the method, the authors construct a dataset where each group of the sentence share similar syntactic structures (having the same POS tag). It seems there that the structural information just means POS tags.\\n\\n2. The author failed to convince me that the learned representation is more powerful than just combining POS tags with the original representations. Since POS tags are assumed to be available during training. I think a reasonable baseline in all experiments would be the performance based on POS tags. For example, in Figure 3, although the original EMLo representation does not correlates with the dependency label very much, the POS tags may do. In Figure 4, the authors should compare with delexicalized dependency parsing, which performs pretty well in los-resource setting.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"Weak accept\\nREASONS FOR RATING (SUMMARY). Using deep learning to create an encoding of syntactic structure with minimal supervision is an important goal and the paper proposes a clever way of doing this. The only \\u2018supervision\\u2019 here comes from (i) the function/content-word distinction (C1 above): two grammatical sentences are structurally equivalent if [but not only if] one can be derived from the other by replacing one content word with another; and (ii) filtering candidate replacement words to match the POS of the replaced word. BERT\\u2019s ability to guess a masked word is put to good use in providing suitable content word substitutions. The experimental results are rather convincing.\\nREVIEW (beyond the summary above)\\nC1. This assumption is famously not deemed to be true in linguistics, where the structural difference between \\u2018control\\u2019 and \\u2018raising\\u2019 verbs is basic Ling 101 material: see https://en.wikipedia.org/wiki/Control_(linguistics)#Control_vs._raising. This particular structural contrast illustrates how verbs can differ in their argument structure, without there being function words to signal the difference. So substituting *verbs* in particular may be non-ideal for the purposes of this work. Even the third example given by the authors in Sec. 3.1 illustrates a related point, where function words do signal the contrast: while the meaning of \\u2018let\\u2019 and \\u2018allow\\u2019 may be very similar, their argument structures differ, so that replacing \\u2018lets\\u2019 with \\u2018allows\\u2019 in the first sentence, or the reverse in the second sentence, produces ungrammatical results: \\n*their first project is software that *allows* players connect the company \\u2019s controller to their device\\n*the city offers a route-finding website that *lets* users to map personalized bike routes\\nTherefore, contrary to the paper, relative to linguistic syntactic structure, it is not a good result that \\u2018lets\\u2019 in the original version of the first sentence is the closest neighbor in transformed embedding space to \\u2018allows\\u2019 in the second. Rather, it is probably meaning, not structure, that makes \\u2018let\\u2019 and \\u2018allow\\u2019 similar.\\nIt would improve the paper to make note of this general concern with C1 and to provide a response.\\nOn another point, an important premise of the proposed method (C2 above) is that differences in vector space embeddings encode relations; this has been used by a number of previous authors since the famous Mikolov, Yih & Zweig NAACL2013, and that work should be cited and discussed.\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"CONTRIBUTIONS:\", \"topic\": \"Disentangling syntax from semantics in contextualized word representations\\nC1. A method for generating \\u2018structurally equivalent\\u2019 sentences is proposed, based only on the assumption that maintaining function words, and replacing one content word of a source sentence with another to produce a new grammatical sentence, yields a target sentence that is equivalent to the source sentence. \\nC2. The \\u2018structural relation\\u2019 between two words in a sentence is modeled as the difference between their vector embeddings.\\nC3a. 
The structural relation between a pair of content words in one sentence is assumed to be the same as that between the corresponding pair in an equivalent sentence. \\nC3b. The structural relation between any pair of content words in one sentence is assumed to be different from the structural relation between any pair of content words in an inequivalent sentence. \\nC4. Given a selected word in a source sentence, to generate an alternative \\u2018corresponding\\u2019 content word for an equivalent target sentence, BERT is used to predict the source word when it is masked, given the remaining words in the source sentence. The alternative corresponding word is randomly selected from among the top (30) candidates predicted by BERT. Given a source sentence, the set of target sentences formed by cumulatively replacing content words one at a time in randomly selected positions defines an \\u2018equivalence set\\u2019 in which words in different sentences with the same left-to-right index are corresponding words. (To promote the formation of grammatical target sentences, a word is only replaced by another word with the same POS.) A pre-defined set of equivalence sets is used for training.\\nC5. A metric learning paradigm with triplet loss is used to find a function f for mapping ELMo or BERT word embeddings to a new vector space of \\u2018transformed word representations\\u2019. Implementing C2 and C3a, given the indices i and i\\u2019 of two content words, the triplet loss rewards closeness of the difference D between the transformed embeddings of the pair of words with these indices in sentence S and the corresponding difference D\\u2019 for an equivalent sentence S\\u2019. Implementing C3b, the triplet loss penalizes closeness between D and D\\u201d, where D\\u201d is the difference between transformed word embeddings of a pair of content words in a sentence S\\u201d that is inequivalent to S. (Eq. 4).\\nC6. (Implementing C5.) To form a mini-batch for minimizing the triplet loss, a set of (500) sentences S is selected, and for each a pair of indices of content words is chosen. Training will use the difference in the transformed embeddings of the words in S with these indices: call this D, and call the set of these (500) D vectors B. For each sentence S in B, a \\u2018positive pair\\u2019 (D, D\\u2019) is generated, where D\\u2019 is the corresponding difference for S\\u2019, a selected sentence in the equivalence set of S. Closeness of D and D\\u2019 is rewarded by the triplet loss, implementing C3a. To implement C3b, a \\u2018negative pair\\u2019 (D, D\\u201d), for which closeness is penalized by the loss, is formed as follows. D\\u201d is the closest vector in B to D that is derived from a sentence S\\u201d that is not equivalent to S. \\nC7. 2-D t-SNE plots (seem to) show that relative to the original ELMo embeddings, the transformed embeddings cluster better by POS (Fig. 3). (No quantitative measure of this is provided, and the two plots are not easy to distinguish.)\\nC8. Pairs of closest ELMo vectors share syntactic (dependency parse) properties to a greater degree after transformation than before (Table 1). To check that this goes beyond merely POS-based closeness, the syntactic relations that least determine POS are examined separately, and the result remains. Furthermore, the proportion of pairs of closest vectors that are embeddings of the same word (in different contexts) drops from 77.6% to 27.4%, showing that the transformation reduces the influence of lexical-semantic similarity. 
Similar results hold for BERT embeddings, but to a lesser degree, so the paper focusses on ELMo. \\nC9. Few-shot parsing. Two dependency parsers are trained, one on ELMo embeddings, the other on their transformations (under the proposed method). In the small-data regime (less than 200 training examples), the transformed embeddings yield higher parser performance, even when the encoding size of the ELMo embeddings is reduced (from 2048 to 75) to match that of the transformed embeddings by either PCA or a learned linear mapping. (Fig. 4)\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors state a clear hypothesis: it is possible to extract syntactic information from contextualized word vectors in an unsupervised manner. The method of creating syntactically equivalent (but semantically different) sentences is indeed interesting on its own. Experiments do support the main hypothesis -- the distilled embeddings are stronger in syntactic tasks than the default contextualized vectors. The authors provide the code for ease of reproducibility which is nice.\\n\\nThere is a short literature review, but I am wondering if something similar was done for static word embeddings. I understand that they are obsolete these days, but on the other hand, they are better researched, so were there any attempts to disentangle syntax and semantics in the classical static word vectors?\\n\\nOverall, I have no major concerns with the paper.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n=========\\nThis paper aims to disentangle semantics and syntax in contextualized word representations. The main idea is to learn a transformation of the contexualized representations that will make two word representations to be more similar if they appear in the same syntactic context, but less similar if they appear in different syntactic contexts. The efficacy of this transformation is evaluated through cluster analysis, showing that words better organize in syntactic clusters after the transformation, and through low-resource dependency parsing. \\n\\nThe paper presents a simple approach to transform word representations and expose their syntactic information. The experiments are mostly convincing. I would like to see better motivation, more engagement with a wider range of related work, and more thorough quantitative evaluations. Another important question to address is also what kind of semantic/syntactic types of information are targeted, and how to handle the tradeoff between them, for instance for different purposes.\", \"main_comments\": \"==============\\n1. Motivation: I found the motivation for the problem understudied a bit lacking. The main motivation seems to be to disentangle semantic and syntactic information. But why should we care about that? Beyond reference to disentangling in computer vision, some more motivation would be good. The few-shot parsing is a good such motivation, although the results are a bit disappointing (see more on this below). Another possible motivation is potential applications of disentanglement in language generation. There is a line of work on style transfer also in language generation, and it seems plausible that the methodology could be applied to such tasks. \\n2. The present work is well-differentiated from work on extracting syntactic information from word representations via supervised ways, as the current work does so in an unsupervised way. I don't quite get the terminological differentiation between \\\"mapping\\\" and \\\"extracting\\\" in the introduction, but the idea is clear. \\n3. Have you considered alternative representations of word pairs besides the different of their transformations f(x)-f(y)? \\n4. I found it interesting that the word representation from BERT is the concatenation of layer 16 with the mean of all the other layers. This is motivated by Hewitt and Manning's findings, and [5] found similar results. However, the different between layer 16 and others is not that large as to warrant emphasizing it so much. Perhaps a scalar mix with fine-tuning may work better, as in [5], or another method. Have you tried other word representations? I also wonder whether it makes sense to use different layers for different parts of the triplet loss, depending on whether to emphasize syntactic vs. semantic similarity. \\n5. The introduction lays out connections to some related work, but leaves several relevant pieces missing. See examples below. \\n6. The results in 3.3 are limited but useful. The comparison with a PCA-ed and reduced representation is well thought of, because of the risk with low-resource and high dimensionality. That said, I found the gap between the proposed syntax model and the ELMo-reduced disappointingly small. 
Even in the LAS, it seems like the difference is very small, ~0.5, although it's hard to tell from the figure. Providing the actual numbers and a measure of statistical significance would be helpful here. \\n7. Some care should be taken to define what kind of semantics is targeted here. In several cases this is \\\"lexical semantics\\\", but then we have \\\"meaning\\\" in parentheses sometimes (end of intro). Obviously, there's much more to semantics and meaning than the lexical semantics, so a short discussion of how the work views other kinds of semantics, say compositional semantics, would be good.\", \"other_comments\": \"===============\\n1. The introduction seeks a representation that will ignore the similarity between \\\"syrup\\\" in (2) and (4). I wonder if \\\"ignoring\\\" is too strong. One may not want to lose all lexical semantic information. Moreover, the proposed triplet loss does not guarantee that information is ignored (and justly so, in my opinion). \\n2. In the example, \\\"maple\\\" and \\\"neural\\\" are said to be syntactically similar, although \\\"maple syrup\\\" is a noun compound while \\\"neural networks\\\" is an adjective-noun. Shouldn't they be treated differently then? Unless the notion of syntax is more narrow and just looks at unlabeled dependency arcs. \\n3. Some experimental choices are left unexplained, such as k=6 (section 2.1) or mapping to 27 dims (section 2.3); these two seem potentially important. \\n4. Section 2.3: do you also back-prop back into the BERT/ELMo model weights? \\n5. The dataset statistics in section 3 do not match those in section 2.2. Please clarify. \\n6. The qualitative cluster analysis via t-SNE (3.1) is compelling. It could be made stronger by reporting quantitative clustering statistics such as cluster purity before and after transformation. \\n7. In the examples shown in 3.1, it would be good to give also the nearest neighbor before the transformation for comparison. \\n8. The quantitative results in 3.2 convey the point convincingly. It's good to see also the lexical match measure going down. The random baseline is also a good sanity check to have. It would be good to provide full results with BERT, at least in the appendix and at least for section 3.2, maybe also for 3.3.\\n9. More related work: \\n+ Work that injects syntactic information into word representations in a supervised way, such as [1,2]\\n+ Work that shows that word embeddings contain different kinds of information (syntactic/semantic), and proposes simple linear transformations to uncover them. \\n+ Engaging with the literature on style transfer in language generation would be good, as mentioned above for motivation, but also to situate this work w.r.t. related style transfer work. \\n+ Another line of work that may be mentioned is the variety of papers trying to extract syntactic information from contextualized word representations, such as constructing trees from attention weights. There were a few such papers in BlackboxNLP 2018 and 2019. \\n\\nTypos, phrasing, formatting, etc.:\\n============================\\n- Abstract: a various of semantic... task -> various semantic... tasks; use metric-learning approach -> use a metric-learning approach; in few-shot parsing setting -> in a few-shot parsing setting\\n- Wilcox et al. 
does not have a year\\n- Introduction: few-shots parsing -> few-shot parsing\\n- Method: extract vectors -> extracts vectors; Operativly -> Operatively \\n- Section 3: should encourages -> should encourage; a few-shots settings -> a few-shot setting\\n- 3.2: -- was not rendered properly\\n- 3.3: matrix that reduce -> reduces \\n\\n\\nReferences\\n==========\\n[1] Levy and Goldberg. 2014. Dependency-Based Word Embeddings\\n[2] Bansal et al. 2014. Tailoring Continuous Word Representations for Dependency Parsing\\n[3] Artetxe et al. 2018. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation\\n[4] Tenney et al. 2019. BERT Rediscovers the Classical NLP Pipeline\\n[5] Liu et al. 2019. Linguistic Knowledge and Transferability of Contextual Representations\"}"
]
} |
H1eRYxHYPB | Optimal Unsupervised Domain Translation | [
"Emmanuel de Bézenac",
"Ibrahim Ayed",
"Patrick Gallinari"
] | Unsupervised Domain Translation~(UDT) consists in finding meaningful correspondences between two domains, without access to explicit pairings between them. Following the seminal work of \textit{CycleGAN}, many variants and extensions of this model have been applied successfully to a wide range of applications. However, these methods remain poorly understood, and lack convincing theoretical guarantees. In this work, we define UDT in a rigorous, non-ambiguous manner, explore the implicit biases present in the approach and demonstrate the limits of these approaches. Specifically, we show that mappings produced by these methods are biased towards \textit{low energy} transformations, leading us to cast UDT into an Optimal Transport~(OT) framework by making this implicit bias explicit. This not only allows us to provide theoretical guarantees for existing methods, but also to solve UDT problems where previous methods fail. Finally, making the link between the dynamic formulation of OT and CycleGAN, we propose a simple approach to solve UDT, and illustrate its properties in two distinct settings. | [
"Unsupervised Domain Translation",
"CycleGAN",
"Optimal Transport"
] | Reject | https://openreview.net/pdf?id=H1eRYxHYPB | https://openreview.net/forum?id=H1eRYxHYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"IxJJVr5g9W",
"SJe7q1o3or",
"r1lBXtcnsS",
"HkeRpB93oB",
"HygIgX93iH",
"r1xMmonntS",
"rkgyzs7iKB",
"SJg1jSWvtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749479,
1573855115116,
1573853468787,
1573852614372,
1573851886463,
1571765018435,
1571662598962,
1571390870774
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2453/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2453/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2453/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2453/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2453/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2453/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2453/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper examines the problem of unsupervised domain translation. It poses the problem in a rigorous way for the first time and examines the shortcomings of existing CycleGAN-based methods. Then the authors propose to consider the problem through the lens of Optimal Transport theory and formulate a practical algorithm.\\n\\nThe reviewers agree that the paper addresses an important problem, brings clarity to existing methods, and proposes an interesting approach / algorithm, and is well-written. However, there was a shared concern about whether the new approach just moves the complexity elsewhere (into the design of the cost function). The authors claim to have addressed in the rebuttal by adding an extra experiment, but the reviewers remained unconvinced.\\n\\nBased on the reviewer discussion, I recommend rejection at this time, but look forward to seeing the revised paper at a future venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Summary describing main changes and additional experiments\", \"comment\": \"First of all, we would like to kindly thank the reviewers, who have all taken the time to give us useful feedback on the paper by writing detailed comments about different aspects of our work. We have answered each reviewer individually and will outline here the main changes in the revised version of our work:\", \"to_summarize\": [\"An issue pointed out by the reviewers was that the experiments did not demonstrate the generic nature of the approach, by providing examples of useful cost functions. An additional experiment demonstrating the advantage of our approach in the context of biased datasets has been added,\", \"Additional explanations for specific parts of the paper (e.g. computation of the inverse) will be added,\", \"References describing related work provided by the reviewers will have been and will be added,\", \"Detailed citations, describing the exact position in the books have be added,\", \"Corrections for typos and other small errors will be provided.\"]}",
"{\"title\": \"Answer to Review 1\", \"comment\": \"Dear reviewer, thank you for your review. In the following, we try to alleviate some of your concerns about the practicality and relevance of our approach.\", \"clarification_on_the_contribution\": \"\\u201cYet some parts seem\\u2026\\u201d\\n\\nWe believe one of our main contributions in this paper is precisely to analyze the empirical behavior of CycleGAN like models and understand why and how they work despite the ill-posedness of the task as theoretically defined. As far as we know, we are the first to point the link with low energy transformations and to formalize the phenomenon with OT, even though, as stated in the related work section, and as you have rightfully remarked, some similar ideas already exist in the literature [Galanti et al. (2018); Benaim et al. (2018)]. The reviewer's references on similar work are also quoted in to the updated version of the paper, but none of those actually provides well-posedness, theoretical or empirical, in the general case. Our formalization also has the advantage of providing a natural and flexible regularization which doesn't depend on the architecture or on additional assumptions and which enables us to construct a solution for any given UDT task, not only those involving images.\", \"comparison_with_vanilla_cyclegan_and_toy_digit_swap_example\": \"\\u201cNotably, in the example\\u2026\\u201d\", \"there_are_a_few_points_to_be_addressed_here\": \"-Giving position and class labels is indeed a form of supervision but isn't equivalent to giving pairings, as there still is the \\u201cstyle\\u201d of the number, which is not given as information which we would want to be conserved. Our use of OT allows to take into account this strictly weaker form of supervision. Our specific tasks were probably not described with enough detail, this will be updated in the revised version. \\n\\n-This digit swap example task is a toy one: the goal was to show that our model is more expressive than Vanilla CycleGAN and that, given additional information about the solved task, a suitable cost can be constructed. \\n\\n-However, in general, this is still challenging and some form of supervision is needed in order to do so: This is the meaning of proposition 2 which states that an approach can only be built given a precise specification of the task.\\n \\n-In particular, the setting with a few pairings is still ill-posed: without additional prior or information, there still would be an infinite (in the continuous case) or hyper-exponential (in the finite discrete case) number of possible mappings for the non paired samples.\", \"regarding_the_choice_of_a_suitable_cost\": \"-As we show in the CelebA experiment, explicitly enforcing a quadratic cost makes the model more robust to the choice of hyper-parameters and there is less variance in the calculated mappings between different training runs. This can be of importance when the training resources are limited. It has to be noted that the quadratic cost already covers the most common situations and in particular all those covered by the CycleGAN model.\\n\\n- We add to the updated version of the paper another image experiment with a cost over color histograms where the color palette of images is minimally transported instead of their L2 norm. The results show that by using a well chosen cost, we are able to avert the consequences of a bias about hair color in the datasets while the L2 biased mapping fails to do so. 
This is a good example of how different costs can be leveraged for different tasks (even though quadratic costs are already quite useful in practice) to inject prior knowledge. Looking for other practical applications is then another endeavour and one we will be working on in the future.\\n\\n-There can be many other problems where different costs could be used in non-image areas: for distributions of measures, divergences can be used as ground cost; for distributions of matrices, costs over eigenvalues could be more relevant, while in medical imaging a specific cost would have to be tailored by field experts, to give just a few examples. The power of the OT approach is that the ground cost can be taken from any cost family as long as it verifies the Twist condition.\", \"relevance_of_the_dynamical_formulation\": \"\\u201cAs the dynamical formulation is known\\u2026\\u201d\\n\\nIndeed, the dynamical formulation is used in many other areas and is one of the main causes for the vitality of the OT field. It seemed to be the most straightforward way to generalize CycleGAN given the residual architecture of the mappings in the original paper and gave us reasonably good results but, obviously, other known methods can also be used. There are also many other possibilities to improve this model and extend it to other applications: Entropy and physical regularizations, multi-marginal OT, unbalanced OT,... Looking for concrete applications and for the appropriate specific models to tackle them is part of our current projects and we see this work as a first important exploratory step towards this objective.\"}",
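For reference, the two standard OT formulations invoked throughout this thread, in generic notation (textbook material, cf. Santambrogio 2015, not an excerpt from the paper):

```latex
% Monge problem with ground cost c (T pushes the source measure alpha onto beta):
\min_{T \,:\, T_{\#}\alpha = \beta} \int c\big(x, T(x)\big)\, d\alpha(x)

% Dynamical (Benamou--Brenier) formulation for the quadratic cost:
W_2^2(\alpha,\beta) = \min_{(\rho_t, v_t)} \int_0^1 \int \|v_t(x)\|^2 \, d\rho_t(x)\, dt
\quad \text{s.t. } \partial_t \rho_t + \nabla\cdot(\rho_t v_t) = 0, \quad \rho_0 = \alpha,\ \rho_1 = \beta.
```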
"{\"title\": \"Answer to Review 3\", \"comment\": \"Dear reviewer, thank you for the extensive review. We will try in the following to address your concerns point by point.\", \"more_precise_references\": \"\\u201cIt should be fair to say somewhere that in Zhu et al. (2017a)...\\u201d\\n\\nZhu et al. have indeed mentioned some limits and this will be added to the Related Work section of the updated version of the paper.\\n\\n\\u201cAs said before, you should give the exact reference of Theorem 1\\u2026\\u201d\\n\\nThe exact references for Theorem 1 can be found in [Remark 1.24. + Theorem 1.22, Santambrogio], or [Prop 10.38, Villani]. Note that this result can be generalized in the dynamical formulation, where one posits a cost function for each time t, the twist condition assumption on the \\u201cinstantaneous cost\\u201d also provides optimality and uniqueness [Prop 10.15, Villani].\", \"computation_of_the_inverse\": \"\\u201cI think that there should be a paragraph for the computation of the inverse\\u2026\\u201d\\n\\nComputation of the inverse is briefly discussed in the appendix, section A.3, however we will provide a more thorough explanation in the revised version of the paper. It consists in solving the differential equation $\\\\dfrac{dx(t)}{dt} = - v(x(t))$ starting from a sample y in $\\\\beta$. This can be intuitively understood as starting from y, and taking the path given by opposite direction $-v(x(t))$ for each point in time $ t \\\\in [0, T]$. In the same way as for the forward mapping, this equation is solved by making use of temporal discretization schemes, e.g. the forward Euler method, which gives us the inverse $x_0$, using $x_{k+1} = x_k - v(x_k)$, with $x_0=y$ sampled from $\\\\beta$, as opposed to $x_{k+1} = x_k + v(x_k)$, starting with $x_0 = x$ sampled from $\\\\alpha$. Note that other methods for computing the inverse can also be used, which are also based on the dynamical viewpoint of residual networks [Chang, Behrmann], and that those procedures are only done at inference (the backward mapping is not trained nor used during training).\", \"design_and_sensitivity_of_the_cost_function\": \"\\u201cEnd of Section 3.1. The design of the cost function is left open\\u2026\\u201d\\n\\n\\u201cEnd of Section 4.1. As said before\\u2026\\u201d\\n\\nIn the case of images, any differentiable distance or similarity based could be used, e.g. SSIM. \\nAs we show in the CelebA experiment, explicitly enforcing a quadratic cost makes the model more robust to the choice of hyper-parameters and there is less variance in the calculated mappings between different training runs. This can be of importance when the training resources are limited. It has to be noted that the quadratic cost already covers the most common situations and in particular all those covered by the CycleGAN model.\\n\\nMoreover, we add to the updated version of the paper another image experiment with a cost over color histograms where the color palette of images is minimally transported instead of their L2 norm. The results show that, by using a well chosen cost, we are able to avert the consequences of a bias about hair color in the datasets while the L2 biased mapping fails to do so. This is a good example of how different costs can be leveraged for different tasks (even though quadratic costs are already quite useful in practice) to inject prior knowledge. 
\\n\\nAlthough these distances are typical for image domain translation tasks, our formulation provides theoretical guarantees for any cost function in other domains as it allows us to transport any mathematical object for which we can construct a differentiable cost, and is not restricted to images. For example, we could also transport (distributions of) probability measures, and in this case a natural ground cost function could be the KL divergence, or the Jensen-Shannon divergence. We are currently working on projects with more practical applications and, if you have any ideas, we would be happy to discuss them.\", \"minor_issues\": \"\\\"The notation $T^{\\\\alpha-a.s.}$ is difficult to read and should be explained.\\\"\\n\\nFor two functions f,g, $f =^{\\\\alpha-a.s.} g$ is equivalent to $f(x) = g(x), \\\\forall x \\\\in \\\\text{support}(\\\\alpha)$. We will provide this definition in the updated version of the paper.\\n\\n\\\"Please give a reference for the dynamical formulation of OT. \\\"\\n\\n[Chapters 4 & 5, Santambrogio] are our starting point and provide a rigorous and clear exposition. This will be mentioned in the updated version.\", \"references\": \"[Villani]: https://cedricvillani.org/sites/dev/files/old_images/2012/08/preprint-1.pdf\\n[Santambrogio]: https://www.math.u-psud.fr/~filippo/OTAM-cvgmt.pdf\\n[Chang]: https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16517\\n[Behrmann]: http://proceedings.mlr.press/v97/behrmann19a.html\"}",
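A minimal numpy sketch of the forward/inverse recursions described in the reply above ($x_{k+1} = x_k \pm v(x_k)$); the velocity field v, the step count, and the unit step size are placeholder assumptions rather than the paper's trained model.

```python
import numpy as np

def forward_map(x0, v, K=10):
    # Forward Euler on dx/dt = v(x): x_{k+1} = x_k + v(x_k).
    x = np.asarray(x0, dtype=float)
    for _ in range(K):
        x = x + v(x)
    return x

def inverse_map(y, v, K=10):
    # Integrate dx/dt = -v(x) starting from y in beta: x_{k+1} = x_k - v(x_k).
    # This mirrors (approximately inverts) the forward recursion; per the
    # reply, it is only used at inference time.
    x = np.asarray(y, dtype=float)
    for _ in range(K):
        x = x - v(x)
    return x

# Toy velocity field, purely illustrative:
v = lambda x: 0.1 * (1.0 - x)
y = forward_map(np.zeros(3), v)
x_rec = inverse_map(y, v)
```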
"{\"title\": \"Answer to Review 2\", \"comment\": \"Dear reviewer, thank you for having taken the time to read our work and for your detailed review. In the following we will try to address your comments and questions point by point, do not hesitate to respond, we would be interested in having your feedback.\", \"notion_of_desirable_mapping\": \"\\u201c- While most definitions are rather intuitive\\u2026\\u201d\\u201d\", \"your_remark_is_very_relevant\": \"It is indeed difficult in general to define what a *desirable* mapping is. Ideally, the set of desirable mappings we used in the definition would be defined through some quantitative criterion. For example, in the image translation experiments (section 4.2), a desirable mapping should preserve as much information as possible about the face being transformed (hence the relevance of the quadratic constraint). However, there are many situations where it is difficult to construct a precise one and, in practice, practitioners try different mappings until a qualitatively satisfying one is found. In all generality, it is difficult to have a general constructive solution: in Prop 2, Section 2.2, we have shown that there is no algorithm that can solve all UDT Tasks without additional information. This is precisely the strongest motivation for the use of OT: If the user knows how to characterize precisely the mappings solving his UDT task he can find a suitable cost which we know must exist for any given UDT task as we prove in Prop. 3 (with mild additional assumptions). If the characterization is more vague, it becomes more difficult to construct a cost for the task and there has to be some trial and error as for any modelling problem.\", \"importance_of_invertible_mappings\": \"\\u201c-I see invertibility...\\u201d\\n\\nFor deterministic mappings (like CycleGAN), if invertibility is not verified on the support of the domains, there is either creation or destruction of mass. In other words, a coherent map as defined in section 1 of the paper is invertible in general which means that invertibility doesn\\u2019t really further constrain the solution. We chose to keep this redundancy in order to stay close from the CycleGAN formulation and to stress the fact that we are also looking for a backward mapping which is the inverse of the forward one when we are solving a UDT task.\", \"architecture_choice\": \"\\u201c- However, the paper needs to emphasize\\u2026\\u201d\", \"we_studied_the_resnet_architecture_in_particular_for_two_reasons\": \"It is the most used one and the similarity of its structure to the discretization of ODEs makes its analysis easier and its generalization to dynamical OT more natural. However, the implicit bias we show is also present in other architectures, e.g. the Unet with its skip connections: The results produced by \\u201cgood\\u201d CycleGAN-like models always seem to have the property of preserving the structure of the input which means that the transformation is a low-energy one.\", \"cost_definition\": \"\\u201c- The main problem that remains unsolved\\u2026\\u201d\\n\\nWe do agree that constructing a cost is challenging in general, as we wrote in the paper. 
However, we think that, while our main focus has been on explaining the empirical success of CycleGAN, the insights we gained through this analysis do have some practical significance:\\n\\n-As we show in the CelebA experiment, explicitly enforcing a quadratic cost makes the model more robust to the choice of hyper-parameters and there is less variance in the calculated mappings between different training runs. This can be of importance when the training resources are limited. It has to be noted that the quadratic cost already covers the most common situations and in particular all those covered by the CycleGAN model.\\n\\n-In some situations, the implicit low-energy bias present in CycleGAN might not be the right one when we would like to transport different mathematical objects: e.g. if we were to transport matrices, one would preferably consider minimally displacing the eigenvectors instead of minimally displacing each component as would be done in a trivial extension of CycleGAN. Our approach allows us to handle those situations, provided that enough is known to construct a cost; otherwise trial and error is needed but it has to be noted that ground costs can be taken from a large family of functions as the Twist condition is the only necessary property.\\n\\n\\u201c- While experiments support the main claims of the paper\\u2026\\u201d\\n\\n- We add to the updated version of the paper another image experiment with a cost over color histograms where the color palette of images is minimally transported instead of their L2 norm. The results show that by using a well-chosen cost, we are able to avert the consequences of a bias about hair color in the datasets while the L2 biased mapping fails to do so. This is a good example of how different costs can be leveraged for different tasks (even though quadratic costs are already quite useful in practice) to inject prior knowledge. Looking for other practical applications is then another endeavour and one we will be working on in the future.\"}",
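A sketch of the kind of color-histogram cost mentioned above, using soft (Gaussian-kernel) binning so the cost stays differentiable; the bin count, bandwidth, and assumed [0, 1] pixel range are illustrative choices, not the authors' implementation.

```python
import torch

def soft_hist(img, bins=16, sigma=0.05):
    # img: (channels, H, W) tensor with values in [0, 1].
    centers = torch.linspace(0.0, 1.0, bins)
    x = img.reshape(img.shape[0], -1, 1)               # (channels, pixels, 1)
    w = torch.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
    h = w.sum(dim=1)                                   # (channels, bins)
    return h / h.sum(dim=1, keepdim=True)              # normalized per channel

def histogram_cost(img_a, img_b):
    # Squared L2 distance between soft per-channel color histograms.
    return ((soft_hist(img_a) - soft_hist(img_b)) ** 2).sum()
```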
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"Summary:\", \"The paper addresses the ill-posedness of the unsupervised domain translation (UDT) problem. It provides a more structured and rigorous problem definition than previous works (mainly CycleGAN-based), and proposes the theory of optimal transport (OT) as a better framework for solving UDT. The paper provides an interesting link between a dynamical formulation of OT and residual networks, which leads to a practical algorithm for solving OT/UDT. Experiments highlight two main points: 1) CycleGAN are biased towards learning nearly identity mappings, and 2) the OT formulation allows for modelling explicit biases in the learned solution through the design of the cost function.\", \"Strengths & Weaknesses:\", \"The paper addresses an important problem, which as far as I know, is widely known but not properly or explicitly addressed in prior work.\", \"While most definitions are rather intuitive, some are still vague so they cannot be constructive. For example, a UDT task is a subset of all possible mappings which are *desirable* for the given task, but it is not clear how we can exactly define *desirable* mappings.\", \"In addition, it is not clear why the set of all mappings X_{alpha,beta} needs to be constrained to invertible mappings. I see invertibility as only a constraint added by CycleGAN to limit the set of possible learned mappings.\", \"The paper makes an interesting observation that CycleGAN is biased towards simple, and nearly identity mappings (which I believe is the main consequence of small initialization values), which could explain its practical success.\", \"However, the paper needs to emphasize that this is particularly tied to the choice of resnet architectures that is commonly used.\", \"I like the proposed dynamical formulation for solving OT and the link to resnets, which provides an interesting practical algorithm.\", \"The main problem that remains unsolved is how to choose the cost function $c$. The paper acknowledges that, and proposes a specific cost functions for the specific tasks of the experimental section.\", \"While experiments support the main claims of the paper, they are still quite limited and do not really have a clear practical significance. The paper would have been much stronger if the proposed approach solves a more practical problem.\", \"In conclusion, while I think that the practical significance of the proposed approach is rather limited, I think that overall it makes an interesting contribution to the domain of UDT which can be useful for future work.\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The paper revisits unsupervised domain translation (UDT) in light of optimal transport. The paper shows that CycleGAN-like models are ill-posed. It redefines UDT tasks using an additional set of suitable mappings. Then the paper redefines UDT problems in the optimal transport framework. Last it proposes an approach to solve UDT problems based on the dynamical formulation of optimal transport. Experiments support the proposed approach.\", \"UDT is a relevant and up-to date problem. The paper helps to clarify some shortcomings of previous approaches and proposes a new solution. The paper is well written. Therefore, in my opinion, the paper should be accepted to ICLR. But, as I am not expert in optimal transport, I would like to have the exact reference of Theorem 1 because I would like to be sure that, in the proof of Proposition 3, the optimum of (A_c) is unique and therefore also satisfies the first item of Theorem 1.\", \"Detailed comments.\", \"It should be fair to say somewhere that in Zhu et al. (2017a) limits of the approach were already mentioned\", \"As said before, you should give the exact reference of Theorem 1: which Theorem in Santambrogio (2015). In the proof of proposition 3, you should explain why the minimum of (A_c) is unique and thus corresponds to the minimum in Theorem 1.\", \"End of Section 3.1. The design of the cost function is left open. This should be made explicit and be discussed somewhere, perhaps in the conclusion.\", \"I think that there should be a paragraph for the computation of the inverse. This question is considered in different parts of the paper. See for instance the caption of Figure 5. What is the meaning of \\\"inverting the forward network\\\" and to which part of the text does it refer?\", \"End of Section 4.1. As said before, the design of the cost function is sensitive. Did you have any idea of other cost who would allow to learn the targeted translation without using internal representations?\", \"Typos.\", \"The notation $T^{\\\\alpha-a.s.} is difficult to read and should be explained\", \"Beginning of Section 3. \\\"based the dynamical formulation of PT\\\"\", \"Please check references in texts. such as \\\"from OT theory Santambrogio (2015)\\\"\", \"Beginning of Section 3.2., \\\"calculate the retrieve the OT mappings\\\"\", \"Please give a reference for the dynamical formulation of OT.\"]}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an analysis of CycleGAN methods. Observing that the class of solutions (mappings) covered by those methods is very large, they propose to restrict the set of solutions by looking at low energy mappings (energy being defined wrt. a dedicated cost). A natural formulation of the associated problem is found in optimal transport (OT) theory. They examine the underlying problem in its dynamical formulation, for which a direct connection can be made with ResNet architecture that are commonly used in cycleGANs. They illustrate these results on simple examples, involving pairing swapped digits from MNIST and celebA male to female examples. As a matter of facts, results presented with the OT formulation are more constant. The main proposition of the paper is that the task at hand can be efficiently coded through the distance (cost) function of OT.\\n\\nOverall the paper is well written and the proposition is reasonable. Yet some parts seem unnecessary long to me, or bring little information, notably the formalisation of 2.1 and 2.2. The fact that cycleGANs are severely ill-posed problems is well known from the computer vision community. Variants that can include a few paired samples can be found (not exhaustive): \\nTripathy, S., Kannala, J., & Rahtu, E. (2018, December). Learning image-to-image translation using paired and unpaired training samples. In Asian Conference on Computer Vision (pp. 51-66). Springer, Cham.\", \"oe_that_try_to_regularize_the_associated_flow\": \"\", \"dlow\": \"Domain Flow for Adaptation and Generalization Rui Gong, Wen Li, Yuhua Chen, Luc Van Gool; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2477-2486\\n\\nIn this spirit, I wonder if a comparison with only the vanilla cycleGAN is sufficient to really assess the interest of using the OT formulation of the problem. Notably, in the example of digit swaps, a cost is learnt by finding a representation of the digits that eliminates the importance of position in the representation. Training such a classifier assumes having some labelled data, that could theoretically be paired, and thus making amenable variants of cycleGans that use a few paired samples. In this sense, I think that the paper fails in giving convincing arguments that advocate the use of OT here. As the dynamical formulation is known and already used to learn mappings ( see Trigila, G., & Tabak, E. G. (2016). Data\\u2010driven optimal transport. Communications on Pure and Applied Mathematics, 69(4), 613-648. For instance). Also variants of OT that estimate a Monge mapping could have been included (e.g. V. Seguy, B. B. Damodaran, R. Flamary, N. Courty, A. Rolet, M. Blondel, Large-Scale Optimal Transport and Mapping Estimation, International Conference on Learning Representations (ICLR), 2018.)\", \"as_a_summary\": \"\", \"pros\": \"A nice interpretation of CycleGAN with OT\\nThe paper is fairly well written\", \"cons\": \"Overall the quantity of novelties is, in the eyes of the reviewer, somehow limited. 
At least the contributions should be clarified;\\nThe experimental section is not convincing in explaining why the OT formulation is better than variants of cycleGAN, or than other schemes for computing OT besides the dynamical formulation\", \"minor_remark\": \"A reference to Benamou, Brenier 2000 could have been given regarding section 3.2 and the dynamical formulation of OT.\"}"
]
} |
rkg6FgrtPB | Biologically Plausible Neural Networks via Evolutionary Dynamics and Dopaminergic Plasticity | [
"Sruthi Gorantla",
"Anand Louis",
"Christos H. Papadimitriou",
"Santosh Vempala",
"Naganand Yadati"
] | Artificial neural networks (ANNs) lack biological plausibility, chiefly because backpropagation requires a variant of plasticity (precise changes of the synaptic weights informed by neural events that occur downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain. Here we propose that backpropagation can happen in evolutionary time, instead of in a lifetime, in what we call neural net evolution (NNE). In NNE the weights of the links of the neural net are sparse linear functions of the animal's genes, where each gene has two alleles, 0 and 1. In each generation, a population is generated at random based on current allele frequencies, and it is tested in the learning task. The relative performance of the two alleles of each gene over the whole population is determined, and the allele frequencies are updated via the standard population genetics equations for the weak selection regime. We prove that, under assumptions, NNE succeeds in learning simple labeling functions with high probability, and with polynomially many generations and individuals per generation. We test the NNE concept, with only one hidden layer, on MNIST with encouraging results. Finally, we explore a further version of biologically plausible ANNs inspired by the recent discovery in animals of dopaminergic plasticity: the increase of the strength of a synapse that fired if dopamine was released soon after the firing. | [
"Biological plausibility",
"dopaminergic plasticity",
"allele frequency",
"neural net evolution"
] | Reject | https://openreview.net/pdf?id=rkg6FgrtPB | https://openreview.net/forum?id=rkg6FgrtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"z0WprZhpD",
"Sygu74JooS",
"rygLcMyojS",
"S1l46g1ojr",
"rygnNYcCFB",
"ByedtDasKS",
"S1eZ0ExCuH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749451,
1573741599550,
1573741197819,
1573740731739,
1571887412353,
1571702656119,
1570796745343
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2451/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2451/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2451/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2451/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Unfortunately the paper is confusingly written, and there is only agreement by all reviewers on the rejection of the paper. Indeed, if all reviewers and the area chair do not interpret the paper well, the authors' best response would be to rewrite the papers rather than disagree with all reviewers.\\n\\nIn the area chair's opinion, the current form the paper does not merit publication. The authors are advised to address the reviewers' concerns, rework the paper, and submit to a conference again.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank you for your valuable comments. We provide the answers below for the concerns raised in the review.\\n \\n(1)\\nThe point of this paper is not to produce a new machine learning framework for classification tasks, but to understand how the animal brain could work, by studying biologically plausible neural networks under the lens of machine learning. \\n\\n(2)\\nOur approach doesn\\u2019t always have to be used for a linear classifier NNE_x(y) = x^T W y, the NNE could be any function (see Lemma 1 and Theorem 1). We analyze the linear NNE in Theorem 2 to show that for any linear target function, NNE converges to an allele distribution arbitrarily close to the true labelling function, with high probability. The parameters that are optimized during the training process of NNE are the allele probabilities p. x is sampled from p as mentioned in section 2, paragraph 1. After sampling x, we use the weight generator matrix W of each layer to generate weights Wx of that layer of NNE. Updates to the allele probabilities p indirectly update the weights of all the layers of NNE simultaneously. Hence, although in linear case, NNE formulation looks similar to linear regression, in multilayer NNE, it\\u2019s not. \\n\\n(3)\\nThe way the probabilities are updated is given in equation 3, this is consistent with the weak selection regime as mentioned in section 1, paragraph 3 of our paper. While there is not explicit back-propagation, our point is that weak selection is implicitly performing something similar in spirit (this is not vanilla back-propagation since the weights of the links in the networks cannot be updated directly). \\n\\n(4)\\nIn lemma 1 we show that the performance of the allele, f, is in fact a function of the gradient of the loss function. In gradient descent-based optimization, it is well known that the decrease in the value of loss function after each iteration is proportional to the squared norm of the gradient (for e.g., see section 9.3 in [1]). Theorem 1 says that, given the update rule of the allele probabilities in equation (3), something similar holds for NNE! By choosing a small enough learning rate, the expected decrease in the loss at generation t+1 is proportional to the squared norm of the gradient at generation t taken at the coordinates of the allele distribution which are far from 0 or 1. \\n\\n(5)\\nBeta defines the sparsity of each weight in the weight generation matrix. We want each weight w_ij of the network to be a sparse random combination of the genes x so that update to an allele probability p(i) affects only a small number of synapses in the network that depend on x(i).\", \"references\": \"[1] Stephen Boyd and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press, New York, NY, USA.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank you for your valuable comments. We provide the answers below for the concerns raised in the review:\\n\\nIn this paper, we propose two approaches to understand the process of learning in animal brain, that opens up the exciting possibility of improved learning using insights from evolution and the brain; 1) NNE (Neural Net Evolution), inspired by the standard update rule in population genetics, that succeeds in creating neural networks that perform simple labelling tasks with modest but promising preliminary results, both theoretical and experimental, suggesting that the neural networks in animal brain could have evolved, 2) DNN (Dopaminergic Neural Network) that is inspired from the recently established results on dopaminergic plasticity, which also give promising results on the classification tasks, supporting the plausibility of plasticity based updates in animal brains. \\n \\nThe approaches used in this paper are consistent with evolution in genetics, and we provide explanations and corrections to the minor issues raised in the review. Nevertheless, these issues don\\u2019t seem significant enough to hinder the understanding of the methods proposed. \\n\\n(1)\\nSTDP is an acronym for Spike-Time Dependent Plasticity. This appears in all the relevant works cited in the introduction section of the paper. \\n\\n(2)\\nWe consider a simple case when the alleles are 0 and 1. In case of more than 2 possible alleles, our method can be easily extended by appropriately encoding the alleles. \\n\\n(3)\\nWe define gene as a single bit of information with two alleles 0 and 1 (section 1, paragraph 3), and genotype as a string of alleles (section 2, paragraph 1); both these definitions are consistent with those of genomics. We don\\u2019t use the term phenotype explicitly in our method. \\n\\n(4)\\nAs mentioned in section 1, paragraph 3, each gene has two alleles 1 or 0. At each generation, we fix the allele probabilities. Hence, the probability of a gene i having allele 1 for the genotypes of that generation is fixed. This sentence shall be updated in the revised version of the paper to remove the ambiguity. \\n\\n(5)\\nAs mentioned in section 2, paragraph 1, n is the number of genes for each genotype. Hence, each genotype is a binary vector of size n, i.e, x \\\\in {0,1}^n. \\n\\n(6)\\nThis is a minor typo. We correct it to y ~ D everywhere for consistency. \\n\\n(7)\\nThis is a minor typo. We correct it to L(NNE_x(t), \\\\ell(y)). \\n\\n(8)\\np^t is the allele probability distribution at generation t. We will add some explanation to make this clear. \\n\\n(9)\\nAs mentioned in the sentence after equation (3), \\u201cThe multiplier \\\\epsilon captures the small degree to which the performance of this task by the animal confers an evolutionary advantage leading to larger progeny.\\u201d \\\\epsilon can be viewed as the \\u201clearning rate\\u201d. \\n\\n(10)\\nWe have provided exact reference to this sentence multiple times in the introduction - [Burger (2000); Chastain et al. (2014)] in section 1 paragraph 1 and 3. We also briefly explain in section 1 paragraph 3 what \\u201cweak selection\\u201d means. \\n\\n(11)\\nThis is a minor typo. \\n\\n(12)\\n\\\\gamma is implicitly defined and can be calculated from the last equality in the first equation array on page 4. \\n\\n(13)\\nd is defined in the proof of theorem 2 to be the size of the set J, which is again defined in the proof of theorem 2.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank you for your valuable comments. We provide the answers below for the concerns raised in the review.\\n\\n(1)\\nIn section 2 we prove that, if the classifier to be learned is linear, then evolution indeed succeeds in creating neural network that classifies well, i.e., we show that the necessary updates to synapses, to perform the classification task, could happen in light of evolution. Our experimental results in table 1 on MNIST support this. We are not suggesting that the evolutionary mechanism is employed during a lifetime, only part of it is, to elicit feedback. Our paper takes the approach that evolutionary dynamics inspires a new type of ANN. \\n\\n(2)\\nAs mentioned in page 2 paragraph 4 in our submission, one of the recent investigations [Yagishita et al., 2014)] reveals that the release of dopamine affects the structural plasticity of certain synapses, within a narrow period of time after the synapse\\u2019s firing. Since the proposed DNN implements a very similar mechanism \\u2013 the release of dopamine is captured by the favorable outcome (via error) and the synaptic update is a function of this favorable outcome \\u2013 it could be considered biologically plausible.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper argues that Artificial Neural Network (ANN) lack in biological plausibility because of the back-propagation process. Therefore, the authors provide an alternative approach, named neural net evolution (NNE) that follows evolutionary theory. This approach uses a large number of genotypes (in the form of vector with binary logits) that will evolve overtime during training. It does not require to calculate the gradient explicitly. The authors have conducted some experiments on MNIST using ANN with only one hidden layer. The experimental results show that the NNE can learn the classification task reasonably well considering that no explicit back propagation is used.\\n\\nI think overall the motivation to combine ANN with evolutionary theory is very interesting. The reviewer is not very familiar with evolutionary theory. So I judge this paper in the perspective of machine learning, from which I think the current approach is a week variant of back-propagation that still relies on gradient (see detailed comments below). Based on this, I give my rating. \\n\\nThe approach is formulated as NNE_x(y) = (x^T)*(W^T)*y. In traditional linear regression, W is the weight to be learnt. In this paper's formulation, W is named as a weight generation matrix, which is choosing to be random and i.i.d. with certain probabilities. The parameters to be optimized is x, which is named as a genotype that is viewed as a vector x \\\\in {0, 1}^n. So first of all, as W is fixed so the formulation is very similar to a traditional linear regression with an additional linear transform. The difference is that x is a binary vector with probabilities. These probabilities are optimized over time. \\n\\nFrom the Equations 1), 2) and 3), the probabilities are updated in a way to minimize the loss. This is kind of similar to back-propagation. Then the probabilities are updated and thus x is changed as well. In my understanding, this is still gradient-based optimization. I do not see it fundamental different to back-propagation. This is my main concern about this work. \\n\\nI did not check the details of Theorem 1. Could the authors please comment what is the purpose of Theorem 1 before proving it? This part is unclear to me in this paper. \\n\\nOne more question, for the W matrix, the authors choice beta = 0.0025 in the experiment. Is there any particular reason for this choice? Or does it matter what value to choice as it is fixed anyway?\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper the authors propose a method for training neural networks using evolutionary methods. The aim of developing this method is to provide a biological alternative to back-propagation. The authors prove that their method converges and with high probability succeeds in learning linear classification problems. Another method is also proposed which is linked to dopaminergic neurons.\\n\\nIn terms of presentation, the paper is generally clear and well-written. I was not able to assess the importance of the theoretical contributions of the work as my research is not in this area, so my comments are limited to the other aspects.\\n\\nWith regard to the biological plausibility of the method, it is unclear to me how the evolutionary method proposed here can enable learning in typical scenarios such as conditioning experiments in animals. The learning processes in animals typically occurs in short time spans (for example a few training sessions for conditioning to stimuli predicting food/no food) and therefore I don\\u2019t find it plausible to suggest evolutionary methods across generations are behind such forms of learning. Perhaps what the authors have in mind applies more to other forms of behaviour such as innate and involuntary responses in animals formed across generations rather than ongoing updates in synaptic plasticity as an animal adjusts its behaviour using environmental feedback. But then in this case the biological plausibility of the method seems fairly limited and not really an alternative to methods such as back-propagation.\\n\\nThe other biological aspect of the proposed work is the connection to dopamine and using the sign of gradients for updating the weights. I think connecting the current learning rule to the activity of dopamine neurons requires quantitative comparisons with experimental data, otherwise although I agree that the method is biologically inspired, but whether it is biologically plausible is not clear.\\n\\nBased on the above comments, I think the work will benefit from further developments before being ready for publication.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"First of all, I must confess that my knowledge is quite limited to read this paper. Perhap the authors present something that I can not catch up at the present.\\n\\nI conjecture the paper would like to bring the evolution in genetics and perhap brain cirecuits as well to define a novel neural net model, called NNE by the authors. \\nThe paper is somewhat cumbersome in the introduction that makes the reader (here is myself) can not understand the main idea. At first the authors introduce about evolution in genetics and genomics that is a bit different from what I known. Then the authors claim that they can show that the brain circuits can evolved in their model.\\nThere are so many mistake and/or typos in sentences and in mathematical formulation. These make me can not finish reading the paper. I have to stop reading at the end of Section 2.\", \"here_are_my_concerns_and_questions\": \"1). What is \\\"STDP\\\" in the 2nd papragraph in Introduction?\\n2). in the 3rd papragraph in Introduction: \\\"Suppose that the brain circuitry for a particular classi\\ufb01cation task, such as \\u201cfood/not food\\u201d,is encoded in the animal\\u2019s genes, assuming each gene to have two alleles 0 and 1\\\". This is realistics, the allels in animal is 0 1 or 2 if encoded.\\n3). The authors use the words \\\"gene, genotype, phenotype\\\" in a special way that is different to what I known in genomics (in GWAS).\\n4). in the 3rd papragraph in Introduction: \\\"At each generation, a gene is an independent binary variable with \\ufb01xed probability of 1\\\". What do you mean by fixed probability of 1? I can not understand in anysense that I know.\\n5). In Section 2, What is n? the authors start the mathematical formulation but I can not find out what is n? Is it the sample size?\\n6). In Section 2, paragraph 2, you define y~ \\\\mathcal{D}, BUT then in all formulas later you denote y ~ D. What is D??? I can not understand.\\n7). In Section 2, paragraph 2, you define a label of y as \\\\ell(y), BUT then in the 1st sentence of the 3rd paragraph in Section 2 you wrote L(NNE_x(t), l(y)). What is l(.) here ???\\n8). In the equations (1) and (2), what is p^t ??? You have NOT defined it.\\n9). Right after equations (1) and (2), What is \\\\epsilon ???? Can NOt understand.\\n10). The sentence right after the equation (3): \\\"This is the standard update rule in population genetics under the weak selection assumption.\\\" This is NOT trivial to me, and even the machine learning comunity, we do not know this rule, it is not obvious. PLEASE provide exact reference.\\n11). the first equation in the PROOF of LEMMA 1 wrong \\\\mathcal{L} (p^t) should be equal to E_{x~p^t} NOT p.\\n12). in the PROOF of Theorem 1, I can NOT find out where \\\\gamma has been defined?\\n13). What is d in Theorem 2? is it the dimension? I make too many guesses !!!\"}"
]
} |
BkghKgStPH | Continual Learning using the SHDL Framework with Skewed Replay Distributions | [
"Amarjot Singh",
"Jay McClelland"
] | Humans and animals continuously acquire, adapt, and transfer knowledge throughout their lifespan. The ability to learn continuously is crucial for the effective functioning of agents interacting with the real world and processing continuous streams of information. Continuous learning has been a long-standing challenge for neural networks as the repeated acquisition of information from non-uniform data distributions generally leads to catastrophic forgetting or interference. This work proposes a modular architecture capable of continuous acquisition of tasks while averting catastrophic forgetting. Specifically, our contributions are: (i) Efficient Architecture: a modular architecture emulating the visual cortex that can learn meaningful representations with limited labelled examples, (ii) Knowledge Retention: retention of learned knowledge via limited replay of past experiences, (iii) Forward Transfer: efficient and relatively faster learning on new tasks, and (iv) Naturally Skewed Distributions: The learning in the above-mentioned claims is performed on non-uniform data distributions which better represent the natural statistics of our ongoing experience. Several experiments that substantiate the above-mentioned claims are demonstrated on the CIFAR-100 dataset. | [
"Continual Learning",
"Catastrophic Forgetting",
"SHDL",
"CIFAR-100"
] | Reject | https://openreview.net/pdf?id=BkghKgStPH | https://openreview.net/forum?id=BkghKgStPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"sTAzO-WkbD",
"HJgg5GG59B",
"Hkgn9zNX9r",
"B1xG4i5TFH"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749421,
1572639367676,
1572188819533,
1571822378214
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2449/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2449/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2449/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper adapts a previously proposed modular deep network architecture (SHDL) for supervised learning in a continual learning setting. One problem in this setting is catastrophic forgetting. The proposed solution replays a small fraction of the data from old tasks to avoid forgetting, on top of a modular architecture that facilitates fast transfer when new tasks are added. The method is developed for image inputs and evaluated experimentally on CIFAR-100.\\n\\nThe reviews were in agreement that this paper is not ready for publication. All the reviews had concerns about the lack of explanation of the proposed solution and the experimental methods. The reviewers were concerned about the choice of metrics not being comparable or justified: Reviewer4 wanted an apples-to-apples comparison, Reviewer1 suggested the paper follow the evaluation paradigm used in earlier papers, and Reviewer2 described the absence of an explained baseline value. Two reviewers (Reviewer4 and Reviewer2) described the lack of details on the parameters, architecture, and training regime used for the experiments. The paper did not not justify which aspects of the modular system contributed to the observed performance (Reviewer4 and Reviewer1). Several additional concerns were also raised. \\n\\nThe authors did not respond to any of the concerns raised by the reviewers.\", \"title\": \"Paper Decision\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"I think there might be some interesting ideas in the work, but I think the authors somehow did not manage to position themselves well within the *recent* works on the topic or even with respect to what continual learning (CL) is understood to be in these recent works.\\n\\nE.g. CL is a generic learning problem, and most algorithms are generic (with a few caveats in terms of what information is available) in the sense that they can be applied regardless of task (be it RL, be it sequence modelling etc.). This work seems limited to image classification. The SHDL wavelets pre-processing, if I understood it, is specific for images and probably even there under some assumption (e.g natural images). \\n\\nThe autoencoder (middle bit) is trained on all tasks the CL needs to face, if I understood the work correctly (phase 0 + phase 1). This potentially makes the CL problem much simpler because you are limiting yourself to the top layer only when dealing with CL, not the rest. Not to mention that I don't understand the motivation of the autoencoder. I think ample results show that unsupervised learning fails in many instances to provide the right features and underperforms compared to learning discriminative features by just backproping from the cross entropy (discriminative loss) all the way down. The only instance I know of for doing this is in low data regime where there is no alternative. \\n\\nI think the modularity used needs to be better introduced. Why the autoencoder, why the first layer of wavelets? Is it for the benefit for CL? I can understand the wavelets, since they are not learnt. But the autoencoder? The autoencoder being trained on all data feels like a cheat. \\n\\nI think the citation of the perceptron a bit strange. Do you really use the original perceptrion from 58? Why? We have much better tools now !?\\n\\nI think the different metrics introduced are interesting and useful. Though you should somehow find common ground to existing works as well to ensure a point of comparison. In the results section I almost got lost. What is the final performance on Cifar. How does this compare to a model that is not trained in a CL regime? What loss do you get from the proposed parametrizaton?\\n\\nIn the comparison with EWC and iCarl, there the whole model was dealing with the CL problem, right? (all intermediary layers). I'm actually surprised iCarl is not doing better (I expect it can do better than EWC). Maybe provide a few more information of hyperparam used for this comparison.\\n\\nOverall I think the paper is not ready for being published. Not without addressing these points:\\n * role of modularity (if not CL -- then why? ; is the modular structure original or part of the previous works cited, e.g. 
where the wavelets are introduced and so forth)\\n * better integration with recent literature; provide answers and settings that allow apple to apple comparison so one can easily understand where this approach falls; if the method is not meant for this \\\"traditional settings and metrics\\\" please still provide them, and then motivate why this regime is not interesting and explain better the regime the method is meant for\\n * as it stands the work is light on the low level details; Hyper-params and other details are not carefully provided (maybe consider adding an appendix with all of these). I have doubts that the work is reproducible without these details.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The author proposed a modular SHDL with skewed replay distributions to do continual learning and demonstrated the effectiveness of their model on the CIFAR-100 dataset. They made contributions in three aspects: (1) using a computationally efficient architecture SHDL which can learn rapidly with fewer labeled examples. (2) In addition to retain previous learned knowledge, the model is able to acquire new knowledge rapidly which leads to fast learning. (3) By adopting the naturally skewed distributions, the model has the advantage of efficient memory storing.\\n\\nOverall, the paper should be rejected because \\n(1)the author spent too much space to introduce the off-the-shelf SHDL model which should be put in the Appendix or referred directly. In other words, the author should explain more details about the \\u201creplay\\u201d mechanism in their model and show the advantage of choosing SHDL rather than other deep neural nets under the continual learning paradigm. \\n(2)The comparison with other methods are too simple. When comparing with other methods, the author should introduce the parameter setting and the detailed training strategy. Otherwise, the evidence made in the experimental section is not convincing. Besides, the author should follow the evaluation paradigm used in other published papers to make a fairer comparison.\\n(3)The author should carry out more discussion about which part of their model contributes the most to the continual learning. After reading the paper thoroughly, I am still unclear about it. \\n\\nThe paper has some imprecise part, here are a few:\\n(1)The caption in Table 1 is too simple. More details should be add to explain the table. \\n(2)What is the DTSCNN in Table 1?\\n(3)What is the green connections in Figure 1?\\n(4)In the second contribution \\u201cRapid learning with forward transfer\\u201d, is the ability to \\u201cretrain\\u201d or \\u201cretain\\u201d the previous learned knowledge?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper suggest to use the previously proposed ScatterNet Hybrid Deep Learning (SHDL) network in a continual learning setting. This is motivated by the fact that the SHDL needs less supervised data, so keeping a small replay buffer can be enough to maintain performance while avoiding catastrophic forgetting.\\n\\nMy main doubt is the benchmark and evaluation of the proposed method. The metrics reported are all relative to a baseline value (which I could not find reported), and make it difficult to understand how the model is performing in absolute term. This is particularly a problem when comparing with existing state of the art method (Fig. 3, Table 4), since this does not exclude that they may have an overall much better accuracy in absolute terms.\\n\\nAlso concerning the comparison with the previous literature, I could find no details about the architecture and the training algorithm used. Notice that this may in particular affect some the reported metrics, since they depend on the shape of the training curve (reporting the training curves for all methods may also be useful). Also, since SHDL uses a small replay buffer, are EWC and the other method modified to use the replay buffer and make the comparison fair?\\n\\nWhile several standard tests for continual learning exists (for example the split CIFAR10/100 in Zenke et al., 2017), those are not used, and rather a simpler test is used which only attempt to learn continually two datasets. It would be helpful to also report a direct comparison on those tests.\", \"regarding_the_line\": \"\\\"The autoencoder is jointly trained from scratch for classes of both phases to learn mid-level features\\\", does this mean that the auto-encoder is trained using data of the two distributions at the same time rather than one after the other? If it is the former case, while it is unsupervised training, it would be a deviation from the standard continual learning framework and should clearly be stated.\"}"
]
} |
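The rehearsal mechanism debated throughout this record's reviews (keeping a small replay buffer of past-task examples and mixing them into training on new tasks) can be sketched in a few lines. This is a generic, minimal illustration and not the authors' SHDL pipeline; the reservoir-sampling buffer, the mixing ratio, and the `model_train_step` callback are all illustrative assumptions.

```python
import random

# Minimal rehearsal sketch: keep a small buffer of past-task examples and
# mix them into each new task's batches. Generic replay, not SHDL-specific;
# buffer capacity and replay fraction are arbitrary illustrative choices.

class ReplayBuffer:
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over everything seen so far.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def train_on_task(model_train_step, task_data, buffer,
                  batch_size=32, replay_frac=0.25):
    """Interleave fresh task data with replayed examples from earlier tasks."""
    n_replay = int(batch_size * replay_frac)
    step = batch_size - n_replay
    for i in range(0, len(task_data), step):
        fresh = task_data[i:i + step]
        batch = fresh + buffer.sample(n_replay)   # old examples fight forgetting
        model_train_step(batch)                   # placeholder for the real update
        for ex in fresh:
            buffer.add(ex)
```

A skewed replay distribution, as in the paper's title, would replace the uniform `sample` with a non-uniform draw over past tasks; the control flow stays the same.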
SJxstlHFPH | Differentiable Reasoning over a Virtual Knowledge Base | [
"Bhuwan Dhingra",
"Manzil Zaheer",
"Vidhisha Balachandran",
"Graham Neubig",
"Ruslan Salakhutdinov",
"William W. Cohen"
] | We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB). In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus. At each step the module uses a combination of sparse-matrix TFIDF indices and a maximum inner product search (MIPS) on a special index of contextual representations of the mentions. This module is differentiable, so the full system can be trained end-to-end using gradient based methods, starting from natural language inputs. We also describe a pretraining scheme for the contextual representation encoder by generating hard negative examples using existing knowledge bases. We show that DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset, cutting the gap between text-based and KB-based state-of-the-art by 70%. On HotpotQA, DrKIT leads to a 10% improvement over a BERT-based re-ranking approach to retrieving the relevant passages required to answer a question. DrKIT is also very efficient, processing up to 10-100x more queries per second than existing multi-hop systems. | [
"Question Answering",
"Multi-Hop QA",
"Deep Learning",
"Knowledge Bases",
"Information Extraction",
"Data Structures for QA"
] | Accept (Talk) | https://openreview.net/pdf?id=SJxstlHFPH | https://openreview.net/forum?id=SJxstlHFPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"Ilh9vDgaeg",
"rJe4bPFssB",
"BJl6SdgciH",
"HylgRyRYiB",
"rJl8_pTtjr",
"B1xx0h6toS",
"rkxu5o6KiS",
"H1lvMjxAFB",
"BygDHP9oKB",
"SJx1TwS5YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749386,
1573783291714,
1573681220938,
1573670856186,
1573670254093,
1573670087590,
1573669775524,
1571846927501,
1571690302952,
1571604407017
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2447/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2447/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2447/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2447/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2447/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2447/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2447/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper proposes a novel architecture for question-answering, which is trained in an end-to-end fashion.\\n\\nThe reviewers were unanimous in their vote to accept. Authors are encouraged to revise addressing reviewer comments.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you for the feedback\", \"comment\": \"I thank the authors for their responses. I'm satisfied with the rebuttal, and will stand by my positive rating.\"}",
"{\"title\": \"thanks for the feedback\", \"comment\": \"Thank you for the extra analysis, new experiments and clarifications! I decided to increase the (already positive) score.\"}",
"{\"title\": \"Great feedback\", \"comment\": \"Thanks to the authors to provide a feedback to the review. It answered the questions well, and I noticed the relevant paragraphs in the paper that incorporated them. I would like to maintain the positive assessment.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the valuable feedback. We have updated the paper draft to reflect the feedback. In particular we have added section 3.3 with new results on HotpotQA.\\n\\n1. Regarding max instead of sum:\\n\\nFor multi-hop questions, the aggregated score of an entity is used as input to a softmax function to compute Z_t, which is the input distribution over entities for the next hop. If we take a sum over all mentions, entities which appear multiple times in the retrieved set will tend to have a much higher score than those which don\\u2019t, and end up with a score close to 1 in Z_t. This prevents the model from exploring multiple relation paths, same as the case you point out in 2.\\n\\n2. Regarding the softmax temperature:\\n\\nYou are correct for the reason why lambda=4 helps for multi-hop questions. We did not consider the alternative strategy of applying softmax only on the top K elements, but we note that often the score difference between the top and the second best mentions is already high (recall that these are maximum inner product scores against a fixed index). In this case, pruning the set of entities over which the softmax is applied would not help.\\n\\n3. Regarding generalization to more hops:\\n\\nThe `DrKIT (pre, cascade)` model in Table 1 (right) is exactly this model. It is only trained on 1-hop questions, but cascaded for 2-hop and 3-hop questions as well. We find that it is significantly better than similarly cascaded versions of PIQA and DrQA, but worse than a version of DrKIT which is fine-tuned end-to-end.\\n\\n4. Regarding updating the mention embeddings:\\n\\nThis is a very interesting suggestion. However, note that our setup assumes that the mentions involved in test questions are different from those involved at training time. For WikiData, the test corpus itself is a different subset of Wikipedia, whereas for MetaQA many answers to test questions are not part of training questions. In this case, updating a subset of mention embeddings which are only relevant for the training questions could lead the model to perform worse on the mentions relevant at test time, whose embeddings are left to the pre-trained values.\\n\\nIn a different scenario, where the training and test questions are over the same set of mentions, we agree that updating mention embeddings might lead to further improvements.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for the valuable feedback. We have updated the paper draft to reflect the feedback. In particular we have added section 3.3 with new results on HotpotQA.\\n\\n1. Regarding G and F in Eq. (3):\\n\\nThe purpose for the first term in Eq. (3) is to have a sparse mapping from an entity to the mentions related to that entity. Indeed, the reviewer correctly points out that there is no need for this mapping to be computed using TF-IDF features. Any of the latest techniques from the IR literature can be used to compute this fixed mapping, and this is definitely an area for future work to investigate. We use TF-IDF since these are easy to compute, highly scalable, and work pretty well in practice.\\n\\n2. Regarding the softmax temperature:\\n\\nSince we are using a maximum inner product search to retrieve the top-K mentions, the relevance scores s(m, z_{t-1}, q) for these are all usually high. Using a softmax function directly on top of these scores leads to a very peaked distribution, and for multi-hop questions, this results in effectively only one entity being passed from one hop to the next. The softmax temperature flattens out this distribution, so the model can explore many paths from one hop to the next.\\n\\n3. Regarding DrKIT and end-to-end Memory Networks:\\n\\nThe key difference between our work and memory networks, is that in our case the memories (or the index) is grounded in actual textual mentions from a potentially large text corpus. This results in a much larger size of the memory as compared to what was used in [1]. This necessitates the use of pretraining to learn their representations, and a MIPS operation to retrieve from them, which was not explored in [1]. Further, our retrieval from the index is a set of entities rather than a high-dimensional vector, which allows us to combine the strengths of sparse and dense retrieval strategies.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the valuable feedback.\\n\\n====> Regarding evaluation on HotpotQA <====\\n\\nWe have added additional results in a new Section 3.3. The model is not directly applicable to HotpotQA since answers in that dataset are not necessarily entities. However, following very recent work [Godbole, 2019] we have identified a sub-task which is suitable for our model \\u2014 retrieving the two introductory passages from Wikipedia which are required to answer a question. Each passage is associated with the title entity of that page, so the task boils down to selecting two entities given the question. Further, this task is multi-hop, since in many cases the entities to be retrieved are related to a question entity via a path of implicit relations.\\n\\nIn section 3.3, we show that our approach outperforms [Godbole, 2019] by 6 points accuracy @10, the main metric for measuring retrieval performance. We also see more than 10x improvement in terms of inference time. When the retrieved passages are fed to a baseline reading comprehension model, we see an improvement of 8.5 F1 over a TF-IDF based retrieval, and we conjecture this can be further improved by using a more sophisticated reading comprehension model.\\n\\n[Godbole, 2019] Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering. EMNLP, 2019.\\n\\n====> Regarding the importance of pretraining <====\\n\\nThe reviewer raises an important point about the limitations posed by pre-training on the KB.\\n\\nRegarding the point about using the KB at test time, we think the model could conceptually be extended to use KB triples as well as text, but in this paper we focused on QA from text alone. It is important to verify, however, that the model is reading the text in some generalizable way, not just memorizing the KB triples. We note that in our ablation study we compare using 50% KB for pre-training vs using it directly at test time for answering questions. The former is between 20-30% better across the 3 hops, suggesting that the pre-training allows generalization beyond the facts present in the KB. Furthermore, as the reviewer points out, on Wikidata we test on an entirely different subset of entities than used for pretraining, so in this case using the KB at test would provide no improvements.\\n\\nAdditionally, we have added additional results for multi-hop information retrieval on the HotpotQA dataset in section 3.3. Part of these experiments compare using the WikiData KB vs. the HotpotQA questions themselves for pretraining. While the HotpotQA based pretraining is better in accuracy @2, they are both very similar in accuracy @10, and both are significantly better than the best baseline of [Godbole, 2019]. Since, the construction of this dataset was not based in any way on the WikiData KB, the fact that the KB based pre-training works well for it suggests that it may be generally applicable for many types of multi-hop questions. A more detailed investigation of this aspect is, however, beyond the scope of this paper.\\n\\n====> Regarding how the number of hops T was selected for the model <====\\n\\nYes, following previous work, we assume the number of hops to be known for MetaQA and WikiData experiments.\\n\\nIn cases where this cannot be determined in advance, we can use a soft mixture of the outputs after each hop. 
This is the case for the newly added HotpotQA results, where we used an additional classifier (trained end-to-end with the other parameters of the model) to softly determine the number of hops needed for each question. Please see section 3.3 for details.\\n\\n====> Regarding reducing the size of KB for pretraining <====\\n\\nWe believe reducing the size of the KB hurts because the quality of the pre-trained index degrades. This hurts 2-hop and 3-hop questions more because of error cascading -- with more hops there are more retrieval steps against the index.\\n\\n====> Regarding the model definition in 2.1 <====\\n\\nThe model architecture is identical at each hop, but the query representations used for retrieval are different. This results in a different relevance scoring function for each hop, and we have updated the paper to reflect this better (using s_t instead of s to denote the scoring function). Note that, implicitly, the scoring function picks out the mentions which satisfy the relation requested for that hop, with an entity output by the previous hop (through z_{t-1}).\\n\\n====> Regarding analysis of each hop <====\\n\\nWe have added analysis of the intermediate predictions made by the model for 2-hop questions in MetaQA in section 3.1 (Analysis). For 100 correctly answered questions, we found that for 83 the intermediate answers were also correct. In the other 17 cases, the intermediate answer was the same as the final answer \\u2014 essentially the model learned to answer the question in 1 hop and copy it over for the second hop. Among incorrectly answered questions, the intermediate accuracy is only 47%, so the mistakes are evenly distributed across the two hops.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies scaling multi-hop QA to large document collections, rather than working with small candidate lists of document/paragraphs (as done in most of the previous work), a very important, practical and challenging direction.\\n\\nThey start with linking mentions to entities in a knowledge base. Every iteration of their mult-hop system produces a set of entities Z_t, relying on entities predicted on the first representation Z_{t-1} and the question representation. In order to make training tractable, they mask 'attention' between Z_{t-1} and Z_t (actually mentions corresponding to Z_t). They also use top-K relevant mentions at train and test time. As the attention score is based on dot-product, they can plug-in the approximate Maximum Inner Product Search to avoid computing the attention score for every mention in the collection. The architecture is essentially end-to-end trainable (except for specialized pretraining discussed below). \\n\\nWhereas itself the architecture is not overly novel (e.g., the architecture does feel a lot similar to models in KB context, and also graph convolution networks applied to QA), there is a lot of clever engineering and the novelty is really in showing that it can work without candidate preselection.\\n\\nMy main worry is pretraining. In both experiments (MetaQA and the new Wikidata Slot Filling task), they pretrain the encoders using a knowledge base, and the knowledge base directly corresponds to the QA task. E.g., for MetaQA the questions are answerable using the knowledge bases, so relation types in the knowledge base presumably correspond to relations that need to be captured in the QA multihop learning. This specialized pretraining appears to be crucial (88% for pretraining vs 55% with BER), presumably because of the top K pruning. Though a nice trick, it is likely limiting as a knowledge base needs to be available + it probably constraints the types of mult-hop questions the apporach can handle. Also, some of the baselines do not benefit from using the KB, and, in principle, if it is used in training, why not use the KB at test time? (I see though that for the second dataset pretraining seems to be done on a different part of Wikipedia, I guess, to address these concerns).\\n\\nI was not sure how the number of hops T was selected for the model, it does not seem to be defined in the paper. Do you pretend that you know the true number of hops for each given question?\\n\\nThe authors experiment with reducing the size of a KB for pretraining. It apparently does not harm the first 1 hop questions, but 2 and 3-hop. Do the authors have any explanation for this? Related to the previous question, does it mean that the model does not learn to exploit the hops for t > 1?\\n\\nThe evaluation is on MetaQA and on the newly introduced Wikidata task, whereas most (?) recent multi-hop QA work has focused on HotpotQA (and to certain degree WikiHop). Is the reason for not using (additionally) HotpotQA? Is the model suitable for HotpotQA? If not, does this have to do with pretraining or the types of questions in HotpotQA?\\n\\nThe model definition in section 2.1 is not very easy to follow. 
E.g., it is not immediately clear whether the model applied at every hop is the same model, nor how the model is made aware of the current search state (e.g., which part of the question has already been processed / how the history is encoded) or even of the hop id. \\n\\nI would really like to see more analysis of what the model learns at every hop.\\n\\n-- \\nAfter the rebuttal -- I appreciate the detailed feedback, extra experiments, and analysis. I increased my score.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a model that can perform multi-hop question-answering based on a textual knowledge base. Results show that the proposed model -- DrKIT -- performed close to or better than the state of the art results over MetaQA and WikiData datasets. Ablation study is offered to show that a few tricks in the model are necessary to make it work, and comparisons with baseline models such as DrQA and PIQA are presented. The paper also provides additional means to speed up the DrKIT model using the hashing trick and approximated top-k methods.\\n\\nThe paper is a good one and I vote for its acceptance. Besides achieving good performance, the proposed DrKIT model makes sense, and all the parts are necessary components based on the ablation study results. In addition, the ablation study and the speed-up methods are great addition to the model to make it work better.\\n\\nWith this generally positive assessment said, I do have a few questions below that I hope the authors could provide some response. These are on top of the high quality of the paper, and should be best regarded as suggestions for future work.\\n\\n1. In equation (3), do G and F have to be TFIDF features? The likes of word2vec and GloVe (and also pershaps fastText) are trained based on co-occurences of adjacent words, and I would imagine that they will improve over TFIDF. This is just an intuition and I could be wrong, but it would be very helpful to hear the authors' opinions.\\n\\n2. The ablation study mentioned that the softmax temperature helps with the model. This is a nice observation, but is there any intuition behind why that is the case? I could imagine that it could be because the gradients of a saturated softmax function is small and therefore results in slow training of the model. If this is the case, both low temperature and high temperature will fail to work. It would have been better so show both ends of failing extremes in an ablation study.\\n\\n3. Can you discuss the similarity between DrKIT and multi-hop End-to-End Memory Networks [1]? It looks very much like an expansion of it with a fixed retrieval mechanism and by expanding the answer to a set rather than a single vector.\\n\\n[1] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, End-To-End Memory Networks, NIPS 2015\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a new architecture for question answering, that can be trained in an end-to-end fashion. The model expands the entities in a given question to relevant mentions, which are in turn aggregated to a new set of entities. The procedure can be repeated for multi-hop questions. The resulting entities are returned as candidate answers. The approach relies on an index of mentions for approximate MIPS, and on sparse matrix-vector products for fast computation. Overall, the model processes queries 10x faster than previous approaches. The method provides state-of-the-art results on the MetaQA benchmark, with significant improvements on 3-hop questions. The experiments are detailed, and the paper is very well written.\\n\\nA few comments / questions:\\n\\n1. Do you have any explanation of why taking the max instead of the sum has a significant impact on the 2,3-hop performance, but only gives a small improvement for 1-hop questions?\\n\\n2. The same observation can be done for the temperature lambda=1 vs lambda=4, so I was wondering about the distribution of the entities you get on the output of the softmax (in Eq 4). Is the distribution very spiky, and Z_t usually only composed of a few entities? In that case, I guess lambda=4 encourages the model to explore/update more relation paths? Something that works very well in text generation is to not just set a temperature, but also to apply a softmax only on the K elements with the highest score (so the softmax is applied on K logits, and everything else is set to 0). Did you consider something like this? It may prevent the model from considering irrelevant entities, but also from considering only a few ones.\\n\\n3. Given the iterative procedure of the method, I wonder how well the model would generalize to more hops. Did you try for instance to train with 1/2 hops and test whether it can generalize to 3-hop questions?\\n\\n4. In Section 2.3, you fix the mention encoder because training the encoder would not work with the approximate nearest neighbor search (I assume this is because the index would need to be rebuilt). However, the ablation study suggests that the pretraining is critical, and one could imagine that fine-tuning the mention encoder would improve the performance even further. Instead of considering a mention encoder, could you have a lookup table of mentions (initialized with BERT applied on each mention), where mention embeddings are fine-tuned during training? The problem of the index is still there, but could you consider an exact search on the lookup table (exact search over a few million embeddings is slow, but it should still run in a reasonable amount of time using a framework like FAISS, and it would give an upper-bound of the performance you could achieve by fine-tuning the mention encoder).\"}"
]
} |
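The hop operation discussed throughout this record (Eq. (3)'s sparse entity-to-mention constraint, top-K MIPS retrieval, max-aggregation over mentions, and the tempered softmax lambda debated by the reviewers) can be made concrete with a toy numpy sketch. This is a schematic reconstruction from the reviews and author responses, not the authors' code: exact dense inner products stand in for the approximate MIPS index, a 0/1 linking matrix stands in for the TF-IDF term, and all shapes and names are illustrative.

```python
import numpy as np

def one_hop(z_prev, ent2men, mention_emb, query_vec, men2ent, K=5, lam=4.0):
    """One DrKIT-style hop (schematic): map a distribution over E entities
    to a distribution over entities for the next hop.

    ent2men:     (E, M) 0/1 matrix linking entities to co-occurring mentions
                 (a binarized stand-in for the sparse TF-IDF term in Eq. (3)).
    mention_emb: (M, d) fixed, pre-trained mention index.
    query_vec:   (d,) hop-specific query representation.
    men2ent:     length-M array mapping each mention to its answer entity.
    """
    E, M = ent2men.shape
    # Relevance s(m, z_{t-1}, q): exact inner products here; the real system
    # uses approximate maximum inner product search (MIPS) over the index.
    scores = mention_emb @ query_vec                   # (M,)
    topk = np.argpartition(-scores, K - 1)[:K]         # top-K mention ids
    # Keep only mentions reachable from entities with mass in z_prev.
    reachable = (z_prev @ ent2men) > 0                 # (M,) boolean
    kept = np.full(M, -np.inf)
    kept[topk] = scores[topk]
    kept[~reachable] = -np.inf
    # Aggregate mention scores into entity scores with max rather than sum;
    # the authors report that sum over-weights frequently mentioned entities.
    ent_score = np.full(E, -np.inf)
    for m in np.flatnonzero(np.isfinite(kept)):
        e = men2ent[m]
        ent_score[e] = max(ent_score[e], kept[m])
    # Tempered softmax (lambda=4 in the discussion above): flattening the
    # peaked MIPS scores lets several relation paths carry mass onward.
    finite = np.isfinite(ent_score)
    logits = np.where(finite, ent_score / lam, -np.inf)
    logits = logits - logits[finite].max()             # assumes >=1 match
    probs = np.exp(logits)
    return probs / probs.sum()
```

A realistic implementation would replace the dense search with an approximate MIPS library and the Python loop with a sparse max-aggregation, but the control flow per hop is the same.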
S1xitgHtvS | Making Sense of Reinforcement Learning and Probabilistic Inference | [
"Brendan O'Donoghue",
"Ian Osband",
"Catalin Ionescu"
] | Reinforcement learning (RL) combines a control problem with statistical estimation: The system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts ‘RL as inference’ and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: The exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable so that practical RL algorithms must resort to approximation. We demonstrate that the popular ‘RL as inference’ approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling. | [
"Reinforcement learning",
"Bayesian inference",
"Exploration"
] | Accept (Spotlight) | https://openreview.net/pdf?id=S1xitgHtvS | https://openreview.net/forum?id=S1xitgHtvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ohHi5BPwY",
"gD2zMD1Ou",
"ULKoHfjLfb",
"rkevha9_ir",
"ryx5s-wNoB",
"HJg7MbPNjH",
"ryx8nxwNsr",
"S1xZvxv4sS",
"ryez-G1Lqr",
"S1gAoVaTYH",
"B1eIQQVUtS"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576958671877,
1576955793532,
1576798749358,
1573592495370,
1573314978113,
1573314826852,
1573314734487,
1573314649000,
1572364793607,
1571832997874,
1571336990379
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2446/Authors"
],
[
"~Matthew_Fellows1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2446/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2446/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2446/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2446/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for the reference\", \"comment\": \"I do not think we were aware of your paper, but we will make sure to investigate this!\\n\\nDo you have a summary for how you feel this work is related? Maybe we can have this discussion directly via email?\\n\\nMany thank!\"}",
"{\"title\": \"Important Related Work\", \"comment\": \"In our recent NeurIPS paper, VIREL: A Variational Inference Framework for Reinforcement Learning (Fellows et al. 2019), we provide a discussion of the shortcomings of the maximum entropy reinforcement learning (MERL) framework as well as a simple counterexample to demonstrate that for several MDPs, the optimal policy under the classical RL objective cannot be recovered from the optimal MERL policy. We also introduce an alternative framework that does not suffer from the issues of existing RL and inference methods. This seems very relevant to the discussion in Section 3.3 in this work.\"}",
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper explores in more detail the \\\"RL as inference\\\" viewpoint and highlights some issues with this approach, as well as ways to address these issues. The new version of the paper has effectively addressed some of the reviewers' initial concerns, resulting in an overall well-written paper with interesting insights.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Uploaded new revision\", \"comment\": \"Thanks again to the reviewers, we have just uploaded a new revision which addresses most of the reviewers concerns.\"}",
"{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"Thank you for your review and for your suggestions. We completely agree that the experiments needed to be improved, in particular the bsuite ones. We have put a significant amount of work into improving them, and making it clear what the point of each one was, as well as presenting the results of the experiments more clearly. Moreover, we have added new experiments using function approximation for both K-learning and Thompson sampling which suggest that the claims we make in the paper in the tabular case carry over to the function approximation case, at least empirically.\\n\\nWe do not make any claims about the advantages of K-learning or Thompson sampling over each other, they appear to have similar performance empirically, and the best RL Bayesian regret bounds for each algorithm are identical (up to logarithmic terms). We have tried to be clearer about this in the manuscript. The reason we mention them is to highlight the differences (and similarities where appropriate) between these two techniques and the RL as inference framework, which is unable to handle epistemic (Bayesian) uncertainty.\\n\\nThank you also for the minor remarks and your eagle eye in spotting typos, we have corrected or clarified all of them!\"}",
"{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"Thank you for your careful and thorough review. We have tried to address your major concern, which was the lack of accessibility of the paper, by\\n\\nMaking the assumptions clearer\\nFixing the notation\\nImproving the clarity of the language overall.\\n\\nWe have also tried to make it clear that we are not proposing a new algorithm, merely highlighting a shortcoming about RL as inference, and contrasting with alternative approaches. Moreover, it is not our intention to claim that K-learning is by any means the solution to the exploration problem, just that it, along with Thompson sampling (and other algorithms that we don\\u2019t focus on) do actually explore in a directed manner. We also make the connection of K-learning to the current \\u2018RL as inference\\u2019 framework as it is currently understood in the literature. The reason we do this is primarily because the RL as inference framework has inspired many new and interesting algorithms in many subfields of RL, including hierarchical RL, options / skills, multi-agent, empowerment etc. We hope that highlighting a major issue that the framework suffers from, and demonstrating that a fix is possible, that the performance of these algorithms can be improved further with (hopefully) relatively little change.\\n\\nAs for the tabular vs function approximation issue, we do have a section entitled \\u2018Why is RL as inference so popular?\\u2019, in which we say \\u2018Further [the RL as inference derived algorithms] are often easy to implement and amenable to function approximation\\u2019. We want to stress that we understand that the RL as inference framework actually has a lot of value, but that the issue of sub-optimal exploration needs to be addressed. It is an open question as to how best to implement Thompson sampling and K-learning with function approximation - we have made this point more clear, though in response to your concern we have added some experiments with function approximation for both K-learning and Thompson sampling. These experiments suggest that it is at least possible.\\n\\nYou are correct that we have slightly modified the presentation of RL as inference. This is to make it easier to compare with the other techniques we compare against. However, the formulation is not new. The paper by Deisenroth et al. 2013 \\u201cA survey on policy search for robotics\\u201d uses the same presentation. In section 2.4.2.2 the authors of that paper propose P(O = 1| tau) \\\\propto exp(R(tau)), where R(tau) is the reward along the entire trajectory tau (they use an overloaded R instead of O to denote \\u2018optimality\\u2019, but other than that it is identical). The point here is that although the presentations differ ultimately the framework is the same. We have updated the comment to be clearer on this, and fixed the derivation of soft Q-learning in the appendix.\"}",
"{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"Thank you for your detailed review, and for your kind words! In response to your concerns:\\n\\nYou are right, this was confusing. Indeed the *true* MDP is sampled only once and is the same for all learning thereafter, though Thompson sampling samples an MDP from the posterior at each iteration as part of the learning. We have clarified this earlier in the paper.\\n\\nWe have tried to merge all three algorithms into a single table, but it\\u2019s quite dense. We\\u2019ll keep experimenting with it and hopefully we\\u2019ll have a good compromise by the time we resubmit a revision, which we\\u2019re hoping to upload in the next couple of days, though it might just be the case that three separate tables is cleanest unfortunately!\\n\\nWe have made efforts to clean up the notation, especially with respect to the derivation of soft Q-learning.\\nWe have added a form of deep K-learning and Bootstrapped DQN to the results of section 4.3 to suggest that our claims likely carry over to the function approximation case.\\n\\nYou are right, that\\u2019s not the correct reference at that point, we have updated it to the Eysenbach 2018 paper.\"}",
"{\"title\": \"Thank you to the reviewers\", \"comment\": \"We would like to thank the reviewers for their careful consideration of our paper. All three reviewers highlighted important points that, once addressed, will improve the quality of our manuscript. We would also like to thank the reviewers for their kind words on the value of the paper. In particular both R1 and R3 both highlight the overall clarity of exposition and how the minimal examples highlight the issues we want to address, and R2 noted that the paper will be a valuable addition to the current understanding of RL and inference.\\n\\nWe have responded below to each reviewer and we have outlined the changes to the paper we will make in response. We are hoping to upload a new revision in the next couple of days. Overall, we are putting in a significant effort to improve clarity, notation, and accessibility. Furthermore, we have significantly improved the experiments, which we agree were confusing. In particular, we have added a deep implementation of K-learning and a deep variant of Thompson sampling (both using neural net function approximation) to the experimental results. The take home message is that both these implementations improve over soft-Q-learning when it comes to exploration, even when using function approximation.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The authors develop a criticism of the \\\"RL as inference\\\" standard approximations and propose a simple modification that solves its main issues while keeping hold of its advantage. Even though this modification ends up relating to a previously published algorithm, I judge the submission to be worthwhile publishing for the following contributions:\", \"clarity/didacticism of the exposition, the minimal problem, the positioning,\", \"the theorem,\", \"the (hopefully to be completed) experiments\", \"The experiments are my main criticism of the paper, in particular the bsuite ones that was absolutely impenetrable for me: not only the experiments but also the results. I hope this will be completed in the final version. It was also a bit unclear to me the advantage of K-learning over Thomson sampling methods.\"], \"minor_remarks_and_typos\": [\"famliy => family\", \"I would not say that frequentist RL is the worst-case, but more high-probability (it's the worst case within the concentration bounds).\", \"the agent in then => the agent is then\", \"KL has 2 meanings in the notations: K-learning and KL divergence. For clarity, I suggest to use only K for K-learning (for instance).\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper at hand presents an alternative view on reinforcement learning as probabilistic inference (or equivalently maximum entropy reinforcement learning). With respect to other formulations of this view (e.g. Levine, 2018; I am referring to the references of the paper here), the paper identifies a shortcoming in the disregard of the agent\\u2019s epistemic uncertainty (which seems to refer to the uncertainty with respect to the underlying MDP). It is argued, that algorithms based on the prevailing probabilistic formulation (e.g. soft Q-learning) suffer from suboptimal exploration.\\nThe paper thus compares maximum entropy RL to K-learning (O\\u2019Donoghue, 2018), which is taken to address the issue of suboptimal exploration due to its temperature scheduling and its inclusion of state-action pair counts in the reward signal. \\n\\nAs its technical contribution, the paper re-interprets K-learning via the latent variable denoting optimality employed in Levine (2018) and introduces a theorem bounding the distance between the policies of Thompson sampling and K-learning. Empirical validation of the claims is provided via experiments on an engineered bandit problem and the tabular MDP (i.e. DeepSea from Osband et al., 2017), as well as via soft Q-learning results on the recently suggested bsuite (Osband et al., 2019).\\n\\nI consider this paper a weak reject. This is in light of me finding it very hard to follow the papers main claims and arguments, even though it positions itself as communicating connections (\\u201cmaking sense\\u201d) in prior work, rather than presenting a novel algorithm. While this is in part due to the complicated issue and math being discussed (and the paper probably catering to a very narrow audience), the paper in its current state does seem to hinder understanding as well.\\n\\nOn the positive side, I do appreciate the intention of the paper, namely to connect RL as probabilistic inference, Thompson sampling and K-learning. In my opinion, this can be taken as a valuable addition to the current understanding of these approaches. Also, I like the experiments as they are specifically constructed to support the claims of the paper.\\nOn the negative side, vague language, missing assumptions and lax notation seem to hinder the understanding of the paper to a considerable extend: e.g. it is stated, that \\u201cwe connect the resulting algorithm with [\\u2026] K-learning\\u201d. However, I do not recognize a new algorithm being provided. Instead the paper argues in favor of K-learning. The assumptions that come with K-learning are not mentioned. The restriction of K-learning to tabular RL is taken to be understood implicitly (whereas RL as probabilistic inference seems applicable with function approximation also, which is not mentioned in the comparison). The paper always talks of shortcomings (plural) of RL as probabilistic inference, but only provides one argument (suboptimal exploration) with respect to this. RL as probabilistic inference is introduced in a different form as in prior literature (i.e. Equation 6), while the derivation in the Appendix spanning the differences in notation being hard to follow due to (maybe minor?) notational issues (e.g. 
x and y seem to have replaced s\\u2019 and a; further down there is a reference to Equation 7, though it is probably meant to be 8, and even that involves some leap in notation).\\nThe paper would benefit from better proof-reading, where mistakes in a very dense argumentation make it hard to follow (e.g. I do not understand the sentence \\u201cThe K-learning expectation (7) is with respect to the posterior over Q[\\u2026] to give a parametric approximation to the probability of optimality.\\u201d)\\n\\nLiterature-wise, the paper draws heavily from two unpublished papers (Levine, 2018; O\\u2019Donoghue, 2018). While this makes it harder to arrive at a high confidence level with respect to the paper\\u2019s claims, I would not argue this to be critical.\\nI would consider raising my score if the authors improved the accessibility of the paper by polishing the argumentation and notation.\", \"confidence\": \"low. It is very likely that I have misunderstood key arguments and derivations. Also, I did not attempt to follow all of the technical derivations.\\n\\n\\n======\", \"post_rebuttal_comment\": \"I changed the score of my review in light of the rebuttal.\\nThe changes made to the paper overall address my concerns.\\nI consider the additional explanations and re-phrasings, as well as the improved notation, a nice improvement of the paper.\\nWhile I did not read all of the appendix, Section 5.1 is much more readable and understandable in the new version.\\n\\nIn light of this paper probably being published, I share some typos/inconsistencies I still noticed:\\n\\np. 4: the solving for -> solving for the\\np. 7: s_{h+1} -> s' (in Table 3) ?\\np. 7: table -> Table; tables -> Tables\\np. 7: (Fix position of K:) \\\\pi_h(s)^K -> \\\\pi_h^K(s) ((also in Appendix))\\np. 9: (2x) soft-Q learning -> soft Q-learning; Q Networks -> Q-Networks; Soft Q -> soft Q-learning\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper criticizes the \\u2018RL as inference\\u2019 paradigm by highlighting its limitations and shows that a variant of this framework - the K-learning algorithm (O'Donoghue et. al., 2018) does not have these limitations. The paper first clarifies some points of confusion regarding RL as inference, namely the fact that RL was originally an inference problem all along. A simple example is used to demonstrate that the RL as inference framework (Levine, 2018) fails to choose the optimal actions that resolve epistemic uncertainty, whereas the K-learning algorithm does select the optimal action. Further, a connection is made which reveals that K-learning is an approximate version of Thompson sampling - the strategy of using as single posterior sample of parameters given data for greedy actions which originated in bandit settings. Some empirical results are provided highlighting the cases where Soft Q-learning (Levine, 2018) fails but Thompson sampling and K-learning do not.\\n\\nI vote for accepting this paper as it brings to light an important limitation of the popular RL as Inference framework with a didactic example which, to the best of my knowledge, has not been shown before.\\n\\nThe paper does a great job at succinctly introducing a simple bandit problem where the bayes-optimal policy is to take a first action that is supposed to immediately resolve all epistemic uncertainty and then exploit the optimal action repeatedly for future plays. However, this simple problem is designed in such a way that there are several other sub-optimal actions which make the RL as inference algorithm have an exponentially low probability of selecting the optimal action. This implies that RL as inference, unlike Thompson sampling, does not in fact take into account epistemic uncertainty.\", \"feedback_to_authors\": [\"The introduction of family of MDPs caused a lot of confusion about the problem setting. I was not sure if a new MDP is sampled from \\\\phi at every episode in L or a single MDP is sampled and kept the same throughout. This was clarified later on in the middle of section 2.1, but it could have been introduced more carefully earlier on,\", \"The tables 1-3 summarizing algorithms are useful but it would be great if there could be a side by side comparison of all three in a single table.\", \"The notation is very dense and I see that efforts were made to avoid this, but it still feels inaccessible.\", \"I am not sure of the role of experiments in section 4.3, if there is no comparison to K-learning. I understand that the authors leave it to future work but then the experiments feel out of place.\", \"\\u201c... RL as inference has inspired many interesting and novel techniques, as well as delivered algorithms with good performance on problems where exploration is not the bottleneck (Gregor et al., 2016)\\u201d. I think this sentence is false, Gregor et. al. do not employ RL as inference anywhere in their paper. Also, I don\\u2019t think the point of their paper was to show good performance on any problem. Maybe this was mixed up with Eysenbach, 2018, a successor paper which uses RL as inference?\"], \"references\": \"O'Donoghue, Brendan. 
\\\"Variational Bayesian Reinforcement Learning with Regret Bounds.\\\" arXiv preprint arXiv:1807.09647 (2018).\\n\\nLevine, Sergey. \\\"Reinforcement learning and control as probabilistic inference: Tutorial and review.\\\" arXiv preprint arXiv:1805.00909 (2018).\\n\\nGregor, Karol, Danilo Jimenez Rezende, and Daan Wierstra. \\\"Variational intrinsic control.\\\" arXiv preprint arXiv:1611.07507 (2016).\\n\\nEysenbach, Benjamin, et al. \\\"Diversity is all you need: Learning skills without a reward function.\\\" arXiv preprint arXiv:1802.06070 (2018).\"}"
]
} |
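The exploration failure discussed throughout this record (a Boltzmann policy over point-estimate values that ignores epistemic uncertainty, versus Thompson sampling's posterior sampling) can be reproduced on a toy Bernoulli bandit. This is a generic illustration of the two strategies contrasted in the reviews, not the paper's K-learning algorithm or its bsuite experiments; the arm means, horizon, priors, and temperature are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.55, 0.60, 0.45])   # unknown Bernoulli arms
T = 2000                                    # horizon (arbitrary)

def run(policy):
    wins = np.ones(len(true_means))          # Beta(1, 1) posterior per arm
    losses = np.ones(len(true_means))
    total = 0.0
    for _ in range(T):
        a = policy(wins, losses)
        r = float(rng.random() < true_means[a])
        wins[a] += r
        losses[a] += 1.0 - r
        total += r
    return total / T                         # average reward over the run

def thompson(wins, losses):
    # Sample one plausible world from the posterior, act greedily in it:
    # exploration is directed by the agent's epistemic uncertainty.
    return int(np.argmax(rng.beta(wins, losses)))

def boltzmann(wins, losses, temp=0.1):
    # Soft/Boltzmann policy on posterior-mean values, as in the standard
    # 'RL as inference' approximation: the spread of the posterior is ignored.
    q = wins / (wins + losses)
    p = np.exp(q / temp)
    return int(rng.choice(len(q), p=p / p.sum()))

print("Thompson  :", run(thompson))
print("Boltzmann :", run(boltzmann))
```

The design point: with close-together arms, the posterior-sampling policy concentrates on the best arm as its posterior sharpens, whereas the Boltzmann policy's exploration rate depends only on the value gap and temperature, not on what the agent is actually uncertain about.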
BJlqYlrtPB | Negative Sampling in Variational Autoencoders | [
"Adrián Csiszárik",
"Beatrix Benkő",
"Dániel Varga"
] | We propose negative sampling as an approach to improve the notoriously bad out-of-distribution likelihood estimates of Variational Autoencoder models. Our model pushes latent images of negative samples away from the prior. When the source of negative samples is an auxiliary dataset, such a model can vastly improve on baselines when evaluated on OOD detection tasks. Perhaps more surprisingly, we present a fully unsupervised variant that can also significantly improve detection performance: using the output of the generator as a source of negative samples results in a fully unsupervised model that can be interpreted as adversarially trained. | [
"Variational Autoencoder",
"generative modelling",
"out-of-distribution detection"
] | Reject | https://openreview.net/pdf?id=BJlqYlrtPB | https://openreview.net/forum?id=BJlqYlrtPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"zHl_Nylh-T",
"Sylxlkrhsr",
"SJghYWEhsH",
"Hke5Kl43oS",
"HJg7vJE2ir",
"rkg5xRX2jH",
"S1leviI0YB",
"BkgalbL3FH",
"H1gQtiyvtH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749327,
1573830376122,
1573826947593,
1573826690327,
1573826395338,
1573826033764,
1571871576494,
1571737845233,
1571384186653
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2445/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper2445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2445/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2445/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2445/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2445/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes to improve VAEs' modeling of out-of-distribution examples, by pushing the latent representations of negative examples away from the prior. The general idea seems interesting, at least to some of the reviewers and to me. However, the paper seems premature, even after revision, as it leaves unclear some of the justification and analysis of the approach, especially in the fully unsupervised case. I think that with some more work it could be a very compelling contribution to a future conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviewers, any comments on the author response?\", \"comment\": \"Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to know. The authors still have one more day to respond/revise further. Thanks!\"}",
"{\"title\": \"Paper update\", \"comment\": \"We thank the reviewers for their many helpful comments. Incorporating them improved the paper tremendously, and we apologize in advance for pushing the limits of how much a paper can change during the rebuttal period.\", \"we_have_uploaded_a_new_version_of_the_paper_with_significant_improvements\": [\"we further strengthen our experimental results: our new measurements show that our models improve on the baselines in a very consistent manner,\", \"we have restructured the text for a clearer exposition and presentation,\", \"we have removed the erroneous claim from Section 3. We thank AnonReviewer2 for pointing it out.\", \"Based on our detailed answers and the results of the new version, we kindly ask the reviewers to reassess their evaluation.\"]}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the important feedback. Based on it, we have made significant improvements on the paper. We have made clarifications and restructured the text for a better exposition and clearer message. Also, now we present even stronger experimental results and a more detailed investigation in several aspects.\\n\\n1) Regarding novelty: it was not our aspiration to design an intricate model. Rather, we would like to give a simple and general approach to alleviate the bad OOD likelihood phenomenon in VAEs. We believe that our work is a valuable contribution to an ongoing discussion in the research community about out-of-distribution detection in likelihood-based models (see e.g. the works in the related work section or e.g. concurrent work submitted to this conference: https://openreview.net/forum?id=Skg7VAEKDS ).\\n\\n- To the best of our knowledge, we are the first to construct a training method that alleviates the bad OOD likelihoods phenomenon for VAE models. We do not just investigate and conduct detailed experiments with the Outlier Exposure technique in the VAE setting, but also present a completely new fully unsupervised approach.\\n\\n- We have added a short section in the paper that discusses the choice of $\\\\bar{p}$.\\n\\n- Regarding more sophisticated models: please note that Nalisnick et al 2019 (https://openreview.net/pdf?id=H1xwNhCcYm ) report identical problems by other maximum likelihood models with very strong modeling capacity, such as flow-based models and PixelCNNs. Our expectation is that basically any generative maximum likelihood model is affected by these issues. It is a question of future research how best to adapt our approach to other likelihood-based models.\\n\\n2) We have added a section in the paper that discusses this question, titled \\\"why using generated data as negative samples could help?\\\". To summarize:\\n\\n- Regarding the argument regarding a fully trained model, in practice, true data samples and generated samples can be distinguished even for fully trained models. But even assuming a perfect generator at convergence, during the training process the generated samples might still help to guide the model toward an equilibrium that promotes a lower likelihood for OOD samples.\\n\\n- Regarding data augmentation as a source of negative samples: If the augmentation actually keeps the samples within the true data manifold, then distinguishing between true and augmented data is something that we might not want to promote. The encoder would probably learn specific minor visual clues (e.g. bilinear filtering artifacts for rotations) that do not usually help assigning lower likelihood to OOD samples.\\n\\n- Regarding the performance of our models on color images: in the updated version of the paper we have tuned our color image models by using spectral normalization layers in the encoder. This change significantly improved the AUC values (e.g., from 0.53 to 0.85) for the models using generated negative samples, but did not improve the AUC values of the baseline models.\\n\\n3-1) We employed the first option, sampling from the positive prior. We have made clarifications in the text.\\n\\n3-2) Unfortunately, increasing the latent dimension does not help alleviating the bad OOD likelihood phenomenon. We have included an experiment in Appendix C that demonstrates this. 
We thank the reviewer for suggesting this investigation.\\n\\n3-3) We have experimented with both a fixed value and a learned global value. Both cases resulted in similar behaviour.\\n\\n4) We tried to keep the experimental setup clean for the purposes of analysis, but indeed, in an engineering context we would definitely use a set of negative samples as diverse as possible.\\n\\nWe are grateful for the valuable feedback; it greatly helped us to improve our paper.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": [\"We thank the reviewer for the valuable feedback, it helped a lot to improve our paper. We have made clarifications in many places and restructured the text for a better exposition and a clearer message.\", \"In our humble opinion, our results are easier to appreciate in the fuller context of the growing amount of work related to out-of-distribution detection in likelihood-based models (the works in the related work section or e.g. concurrent work submitted to this conference: https://openreview.net/forum?id=Skg7VAEKDS ). Our contributions reflect on recent work, and provide several novelties:\", \"To best of our knowledge, we are the first to give a training method for VAEs that helps alleviating the bad OOD likelihood performance.\", \"We present an unsupervised approach that is completely novel, and report detailed experiments that confirm the robustness and usefulness of the method (see Table 1 and Table 2 in the updated paper).\", \"Our work highlights a potential problem with utilizing Outlier Exposure (the very general framework laid down by Hendrycks at al. 2018). The results with auxiliary datasets in Table 1 show that while auxiliary samples help greatly in most cases, OOD detection performance can be very sensitive to the choice of the auxiliary dataset, see for example the last block of Table 1, Letters-Fashion-Numbers, where Outlier Exposure fails to improve while our adversarial method still achieves good performance.\", \"We have expanded the description and the discussion of the experiments which now also continues in the appendix.\", \"We have updated the explanations and motivations in several places where the review identified issues.\", \"We have added an experiment in the appendix which explores the utilization of the reconstruction term for the negatives during training. Our experiments show that this does not improve the model in terms of OOD performance.\", \"Regarding Bernoulli: we agree with the reviewer that the Bernoulli is a theoretically less sound modeling choice than, for example, the Gaussian. (We added a footnote to the paper with this remark.) That's one of the reasons we publish numbers with a Gaussian noise model as well. However, much of the literature directly relevant to us made the exact same modeling choice: working with the Bernoulli, and interpreting the grayscale values as probabilities of binary events. Loaiza-Ganem and Cunningham 2019 https://arxiv.org/abs/1907.06845 lists several papers following this practice. We side with the reviewer in this disagreement. However, as we said above, the Bernoulli is an unavoidable option if we wish to compare our results to the rest of this sub-field.\", \"We thank the reviewer for the pointer for the Hu et al. paper. As we see, the main contribution of our paper is not in proposing a hybrid VAE-GAN model, but in tackling the issue of bad OOD likelihoods both in a supervised and unsupervised context. We clarified in the text how the adversarial model is trained.\", \"We have added sample images for all datasets to Appendix D.\", \"We are very thankful for the review, it helped us a lot to improve the paper.\"]}",
"{\"title\": \"Response to Review #2\", \"comment\": [\"We are grateful to the reviewer for the insightful comments. We have rewritten large parts of the text with the goal of making the descriptions of our models and our core claims more clear. Just as importantly, we have an updated experiments section with even stronger results. (See e.g. Table 1 or Table 2 in the updated paper, which highlight the effectiveness of our proposed method.)\", \"Regarding the KL term: indeed, the latter interpretation is the intended one. $\\\\bar{x}^{(i)}$ is specified to be a negative sample, and the extra term only references $\\\\bar{x}^{(i)}$, not $x^{(i)}$. We have made clarifications in the text.\", \"We are deeply thankful to the reviewer for pointing out that we made a mistake in writing up a variational model justifying our loss function. To our defense, this toy model was not central to our argument. We have removed it from the paper.\", \"Regarding the positive KL term as a measure: examining the VAE likelihood estimates raises the question of how the two components of the ELBO (the reconstruction part and the KL part) contribute to the likelihood estimate and the discriminative power of the model. As the magnitude and the behavior of the reconstruction term are highly determined by the choice of the noise model, it is natural to investigate to what extent the separation between inliers and outliers is carried out in latent space. Note that we publish likelihood-based evaluations everywhere, one can consider the KL-based evaluations as extra information.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe authors propose augmenting VAEs with an additional latent variable to allow them to detect out-of-distribution (OOD) data. They propose several measures based on this model to distinguish between inliers and outliers, and evaluate the model empirically, finding it successful.\\n\\nUnfortunately, the method in this paper is developed unclearly and incorrectly. Although their experiments are somewhat successful, the problems with the text and method are severe enough to justify rejection.\\n\\nSpecifically, the authors' method proposes adding a term to the loss of the VAE that encourages the variational posterior (q) to distribute latent codes (z) for inliers and outliers differently. The equation which defines their new objective is unclear -- specifically, it is not clear whether the added KL term is computed for inliers and outliers both, or whether it is only computed for outliers. If it is the former, then the method does not make sense. If it is the latter, then the equation is incorrect or at the very least not clear in the extreme.\\n\\nFurthermore, the term is added without consideration of whether or not the method is still optimizing a sensible variational lower bound. The authors attempt to justify the objective by writing out a variational lower bound for a VAE with a mixture prior where inliers and outliers are generated from different mixture components. However, their equations are incorrect -- the equation that is called the log likelihood is not the log likelihood, and the ELBO is similarly wrong.\\n\\nTheir empirical evaluation is reasonable, although the measures they propose to distinguish between inliers and outliers (i.e. the kl from the approximate posterior to the prior) is not thoroughly justified.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to counteract OOD problem in VAE by adding a regularization term to the ELBO. The regularizer is defined as the Kullback-Leibler divergence between a variational posterior for a negative sample and a marginal distribution over latents for negative data. The authors present experiments on MNIST and MNIST-like datasets, and CIFAR10 with SVHN. Unfortunately, I do not find the paper especially interesting. The motivation for adding the regularization term is not convincing. The experiments are insufficiently discussed.\", \"remarks\": [\"The paper proposes to ad a regularization to ELBO, namely, the Kullback-Leibler divergence between a variational distribution for a negative sample and a marginal distribution over latent variables for negative samples. I do not fully understand the motivation given on page 3. The authors show that including the negative data yields a new objective that is a sum of two log-lihelihood functions for \\\"real\\\" and negative data. However, later they propose to skip a (negative) reconstruction error term for the negative data. As a result, the authors obtain the objective they proposed. This explanation is very vague and I do not see what it adds to the story. Contrary, it causes new questions about their model and whether it is properly formulated.\"], \"i_suggest_to_look_into_the_following_paper_to_see_whether_the_model_could_be_re_formulated\": \"Hu, Z., Yang, Z., Salakhutdinov, R., & Xing, E. P. (2017). On unifying deep generative models. arXiv preprint arXiv:1706.00550.\\n\\n- I do not understand why the authors used Bernoulli distribution to model color and gray-scale images. The Bernoulli distribution could be used only for binary random variables. This is obviously flawed.\\n\\n- In general, the results seem to partially confirm claims of the paper, however, they are quite vague. First, utilizing a wrong distribution is demotivating (see my previous remark). Second, I miss a better description of models and, in general, experiments' setup. Third, all results are explained in a laconic manner (e.g., \\\"The other results in the table (...) confirm the assymetric behaviour of the phenomenon (...)\\\"). There is neither deeper understanding nor discussion provided.\\n\\n- Why there are no samples for CIFAR or SVHN provided?\\n\\n======== AFTER REBUTTAL ========\\nI would like to thank the authors for their rebuttal. I really appreciate that the paper is updated and some concerns are solved. After reading the updated paper again, I tend to agree that the proposed idea is interesting for the problem of OOD detection using generative models. Therefore, I decide to update my score.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper discusses the detection of out-of-distribution (OOD) samples for variational autoencoders (VAE).\\nThe idea is to train the encoder such that its output variational distribution q(z|\\\\bar{x}) is pushed away from the prior of latent z. \\nI think the paper needs more clarification and investigation for being published in the conference. \\nMy major concern is that more empirical investigation is necessary since the formulation provides a minor novelty. \\nSpecific points are given below. \\n\\n1) Weak novelty in terms of model design. \\nThe objective function consists of the standard (negative) ELBO term and additional KL term to modify the variational posterior of negative samples. \\nThis modification can be regarded as a form of outlier exposure (Hendrycks et al. 2018) specialized for VAE. \\nThe choice of \\\\bar{p} is not much investigated. \\nAny discussion if we use a more sophisticated model such as VampPrior* for stronger modeling capacity. \\n* J. Tomczak and M. Welling, VAE with a VampPrior, AISTATS 2018.\\n\\n2) The use generated samples as negative samples is interesting but mysterious. \\nThe authors conjecture that this works because the generated samples come from near the data manifold, but in-distribution samples and negative samples can be indistinguishable when the generative model is very well trained. \\nWhat happens if, for example, the negative samples are generated by data augmentation techniques (such as cropping, rotation, mirroring, though mirroring and much rotation may be unsuitable for text images)? \\nThis can also produce near-manifold points. \\nA deeper analysis why generated samples can improve the OOD detection performance is necessary. \\nFurthermore, why does not this approach impact much for color images in Table 4?\\n\\n3) More details of experimental procedures. \\n3-1) How was data points are generated from VAE as negative samples?\", \"possible_ways_are\": \"* sample z ~ p(z), then draw from the decoder x ~ p(x|z).\\n* use negative prior z ~ \\\\bar{p}(z), then draw from the decoder x ~ p(x|z).\\n* this seems weird: use variational posterior z ~ q(z|x), then x ~ p(x|z).\\n\\n3-2) Latent dimension of 10 for grayscale images seems small. \\nDoes the size affect the OOD detection performance when the size is 50 or 100 to make the model richer. \\n\\n3-3) How was the variance obtained when the decoder uses the Gaussian likelihood?\\n* fixed value?\\n* learned for each pixel?\\n* output from the decoder?\\n\\n4) If we have access to diverse negative datasets, can the ODD detection perform better? \\nMixing multiple datasets or using both available dataset and generated samples can improve the performance while the test OOD samples are kept unseen. \\nFor example, train VAE on MNIST while using KMNIST and EMNIST as the negative sets to detect Fashion-MNIST as ODD.\"}"
]
} |
HygqFlBtPS | Improved Training of Certifiably Robust Models | [
"Chen Zhu",
"Renkun Ni",
"Ping-yeh Chiang",
"Hengduo Li",
"Furong Huang",
"Tom Goldstein"
] | Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness. In principle, relaxation can provide tight bounds if the convex relaxation solution is feasible for the original non-relaxed problem. Therefore, we propose two regularizers that can be used to train neural networks that yield convex relaxations with tighter bounds. In all of our experiments, the proposed regularizations result in tighter certification bounds than non-regularized baselines. | [
"Convex Relaxation",
"Certified Robustness",
"Regularization"
] | Reject | https://openreview.net/pdf?id=HygqFlBtPS | https://openreview.net/forum?id=HygqFlBtPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"wVRpqamn_",
"r1l226VnsS",
"Hkge-pN2iH",
"HJlLJh4hir",
"r1eZ0iNniH",
"B1es6DN2iH",
"Hyl4nPE3sB",
"HyxG4FXV5B",
"HyeOL_lGcB",
"Hye3HcXoFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749300,
1573830068098,
1573829879518,
1573829598079,
1573829577478,
1573828546781,
1573828523772,
1572251946421,
1572108367733,
1571662403570
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2444/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2444/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2444/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2444/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The authors develop regularization schemes that aim to promote tightness of convex relaxations used to provides certificates of robustness to adversarial examples in neural networks.\\n\\nWhile the paper make some interesting contributions, the reviewers had several concerns on the paper:\\n1) The aim of the authors' work and the distinction with closely related prior work is not clear from the presentation. In particular, the relationship to the ReLU stability regularizer (Xiao et al ICLR 2019) and the FastLin/CROWN-IBP work (https://arxiv.org/abs/1906.06316) is not very well presented in the theoretical sections or the experiments.\\n\\n2) The theoretical results (proposition 1) requires very strong conditions to apply, which are unlikely to be satisfied for real networks. This calls into question the effectiveness of the framework developed by the authors.\\n\\nWhile the paper has some interesting ideas, it seems unfit for publication in its present form.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Sorry for the late response!\", \"comment\": \"Dear reviewers,\\n\\nWe are sorry for our late response! We were trying our best to come up with a better version of the paper. In this revision, we have changed the structure of writing, moved some discussions into the appendix. Also, to address your concerns about the mechanism of our regularizer, we have also added an illustrative example into the appendix to show that our regularizer does not necessarily kill the unstable neurons. We have also added new experimental results, and now achieve state-of-the-art certified accuracies on both MNIST and CIFAR10. We hope you love this version!\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"Thank you for you acknowledgement of our work! We have made a major revision to improve our presentation, and hope you still love this version! In fact, we already had experimental results on CIFAR10 in our previous version. In this revision, we have added more results on CIFAR10 comparing with stronger baselines such as IBP and CROWN-IBP. We are able to achieve the best certified accuracy under both $\\\\epsilon=2/255$ and $\\\\epsilon=8/255$. While we have not expanded the experiments to even more datasets, we will try our best to do so in the future.\"}",
"{\"title\": \"Thank you for your feedback! [2]\", \"comment\": \"> \\\"- * The crux of the contribution seems to rest on the premise that identifying the optimal perturbation in the input space with the relaxed model, \\u2026 In general, it seems very unclear why this should work based on the evidence presented in the paper. Specifically with the relaxation, it might not even be guaranteed \\u2026 that the value of \\\\delta_0^* that is found from problem C is even going to lie inside the L\\\\inf norm ball around the point x, for example. Thus it is not clear to me if this is an approach for verification or a regularizer based on verification.\\\"\\nFirst, it is guaranteed that the value of $\\\\delta_0^*$ found from problem $\\\\mathcal{C}$ satisfies the norm ball constraint, since this constraint is contained in Eq. ($\\\\mathcal{C}$) and we are finding the optimal solution for $\\\\mathcal{C}$ that satisfies all its constraints. \\nOur approach is for training a network that has better verified robustness. It is a regularization based on Fast-Lin, which also improves the tightness of CROWN-IBP as well as other convex relaxations for ReLU networks. We aim to train a neural network that can be better verified by these bounds. Meanwhile, but enforcing a tighter bound during training, it can also alleviate the problem of over regularization caused by loose bounds in theory. It enforces the convex relaxation to be tight for samples from the data distribution, and improves the verified accuracy on test set empirically.\\n\\n\\n> \\\"- 1. Is the method of Wong et.al. using the looser convex relaxation (used here) or the tight convex relaxation when reporting the numbers in Table. 1? \\\"\\nAll the results in Table 1 are verified with Fast-Lin or equivalently Wong et al.\\u2019s bound. The optimal layer-wise convex relaxation is hardly applicable to the networks in Table 1.\\n\\n> \\\"- 2. If the optimal convex relaxation can be used to construct the same regularizer as the one proposed here, it would be good to evaluate how well that does.\\\"\\nIn theory, for the optimal layer-wise convex relaxation, our regularizers can be applied, and only the second regularizer needs to be adapted to allow $x_{ij}\\u2019$ to lie on the line of ReLU constraint (left of Figure 1) when $\\\\delta_{ij}^*=0$. However, as we have mentioned before, this relaxation is too expensive to be integrated into the training process. See the experimental results in the CROWN paper (Efficient Neural Network Robustness Certification with General Activation Functions), where LP-Full as referred to in the paper is orders of magnitude more expensive than CROWN, and often fail to converge in a reasonable amount of time. Even CROWN is already expensive enough.\"}",
"{\"title\": \"Thank you for your feedback! [1]\", \"comment\": \"Thank you for your acknowledgement of our work and the valuable feedback! We have revised our paper and hope you love the current version. Below we try our best to address your specific concerns.\\n\\n> \\\"- *Sec. 4.1: Eqn. (O) does not have a convex relaxation, it is the exact problem which is intractable. Why are we comparing the optimal values of p*(O) and p*(C)? \\u2026 In general, in Sec. 4, it is often unclear whether when we talk about p(O) if we are referring to the unrelaxed original problem or the tightest convex relaxation\\u2026\\\"\\nWe are sorry for the typos and have corrected it in the latest version. Throughout the paper we are trying to minimize the gap between the optimal values of the original non-convex problem $\\\\mathcal{O}$ and its convex relaxation $\\\\mathcal{C}$ on each individual training sample. We have shown with the new illustrative example in Appendix A that the gap between $p^*_{\\\\mathcal{O}}$ and $p^*_{\\\\mathcal{C}}$ can be 0 for a large portion of samples, and with empirical results on practical networks and datasets, we have shown that minimizing such gap improves robustness. \\n\\n> \\\"- It is not clear how/ why the proposed method of relaxing (which by the way seems identical to Fast-Lin (Weng et.al.) is better than the optimal convex relaxation. Would this not lead to looser bounds? Is that the thing we are looking to investigate? .... Perhaps it would be good to argue the proposed regularizer in this work cannot be constructed with the optimal convex relaxation. Is that true? A discussion on this would be helpful.\\\"\\n\\nWe have revised Proposition 1 to prove that when the same condition holds, i.e., $r=0$, both CROWN and Fast-Lin are tight. In the same way, the optimal layer-wise convex relaxation will also be tight. Still, we would like to make some clarifications here:\\n1) We are not proposing a new convex relaxation method in the paper and try to beat the optimal layer-wise convex relaxation (referring it to LP-All). Instead, we propose two regularizers based on over observations from Fast-Lin that can be used on top of established convex relaxation bounds to train certifiably robust ReLU networks, and demonstrate an improved certified accuracy. The regularizers can be easily extended to different forms of convex relaxations, including CROWN and CROWN-IBP, and we have demonstrated the improvements in our experiments.\\n2) Theoretically, since the feasible set of LP-All is a subset of Fast-Lin, LP-All is tighter than Fast-Lin. However, the benefit of our regularizer lies in obtaining a model that can be better certified by the looser Fast-Lin, not from introducing a theoretically tighter bound. If we use LP-All to verify the models obtained with our regularizers, we could get even better results. \\n3) More specifically, in our experiments, we compare the improvements in certified robust accuracy by either (1) train the model (a small 2-hidden-layer MLP) using our regularizers and using Fast-Lin to verify, or (2) verifying the model trained with Wong et al. (equivalent to Fast-Lin) by using LP-All instead of Fast-Lin (numbers taken from Salman et al. (2019)). Our regularizers results in networks that can be better certified by the looser Fast-Lin, and the improvement is comparable to using the much more expensive LP-All to provide a better bound for models trained without our regularizers. 
\\n4) For LP-All, such regularizers can also be applied, but since LP-All is an impractical approach for training robust networks, it is hard for us to provide any experimental results. Without developing anything new, the first regularizer can be applied directly to LP-All by minimizing the difference between the lower bounds of the margin given by LP-All and the margins obtained by plugging in the \\u201coptimal\\u201d perturbation. \\n5) For the second regularizer, one can still compute it from the Fast-Lin solutions and minimize it. If the conditions in Proposition 1 are satisfied for the Fast-Lin bounds, the gap will also vanish for LP-All, since the three points (three red dots in the right of Figure 1 in the revised version) are also feasible for LP-All. Further, one can identify the optimal solutions for the unstable neurons $x_{ij}^*$ of LP-All, and try to minimize the distance between $x\\u2019_{ij}$ computed by the ReLU network when the input is $x+\\\\delta_0^*$, but it is unclear whether such a process is easily differentiable (w.r.t. the parameters of the network).\"}",
"{\"title\": \"Thank you for your feedback! [2]\", \"comment\": \"> \\\"- ...The three points they identify are the only feasible optimal solutions for solving a linear program over the feasible domain given by the relaxation of one ReLU but, when solving over the whole of C, the solution needs to be on the boundary of the feasible domain of C, which is larger than those three points.\\\"\\nThank you for the careful check. We were actually referring to the layer-wise convex relaxation, Fast-Lin, instead of any convex relaxation. We have made an edit and the current version emphasizes the unstable neurons are independent. For Fast-Lin, the solution to each unstable neuron can be considered independently, and the intersection of their feasible set with the original ReLU constraint are the three points. It is sufficient to consider the three points for each unstable neuron to check whether the relaxed solution of Fast-Lin is feasible for the non-convex problem. It is also a sufficient condition for CROWN to be tight. We have added this conclusion in Proposition 1.\\n\\n> \\\"- For Proposition 1, the condition x \\\\in S(\\\\delta) means that all the intermediate bound in the network must have been tight (so that the actual forwarding of an x can match the upper or lower bound used in the relaxation), and that the optimal solution of the relaxation requires all intermediate points to be at either at their maximum or their minimum. The only case I can visualise for this is essentially once again the case where there are no ambiguous ReLU and the full thing is linear..\\\"\\nThis assumption is not strong at all for certain samples and networks, as can be seen from Appendix A, where the conditions in Proposition 1 can be satisfied for a large portion of the data distribution even when unstable neurons exist and the network is nonlinear in the norm ball. Also, we are only matching the optimal solution of $\\\\mathcal{C}$ inside the norm ball with the corresponding solution in $\\\\mathcal{O}$, not enforcing equivalence in the whole norm ball. So and the ReLU network does not have to be linear inside the whole norm ball.\\n\\n> \\\"- Regarding the experiments section, it would be benefical to include in table 1 the results of Gowal et al\\u2026. for better context. \\\"\\nWe have applied our regularizer to CROWN-IBP, which now achieves better results than both the CROWN-IBP baseline and IBP (Gowal et al.). The results has been added into Table 1.\\nOur method is orthogonal to CROWN-IBP, and we believe further improvements upon CROWN-IBP is valuable. \\nIn fact, convex relaxation is better than CROWN-IBP in some cases where $\\\\epsilon$ is small, such as when $\\\\epsilon=2/255$ on CIFAR10. We have added the new comparisons into Table 1. \\n\\n> \\\"- I think that the analysis section is pretty confusing and needs to be re-thought. It provides a lot of complex discussion of when the relaxation will be exact, without really identifying that it will be when you have very few ambiguous ReLU. I think that there might be a few parallels to identify between the regularizer proposed and the ReLU stability one of Xiao et al. (ICLR2019) from that aspect. The experimental results are not entirely convicing due to the lack of certain baselines.\\\"\\nAs we have demonstrated in Appendix A, the existence of ambiguous ReLU does not eliminate the possibility that the bound is tight; the bound could still be tight even when ambiguous ReLU exists for certain input samples. 
Reducing the number of ambiguous ReLUs could improve tightness if we assume the range for other ambiguous ReLUs does not increase, but it is not the other way round, i.e., tightness at the sample distribution does not necessarily mean the elimination of ambiguous ReLU. It only requires the optimal solutions to match at a certain adversarial perturbation, not inside the whole norm ball. Therefore, in theory, our regularizer does not explicitly kill the ambiguous ReLUs, which is different from Xiao et al. (ICLR2019). \\nWe have also added comparisons against stronger baseline methods (IBP, CROWN-IBP) into Table 1.\"}",
"{\"title\": \"Thank you for your detailed feedback! [1]\", \"comment\": \"Thank you for your valuable time and the interesting discussions! We have made major revisions to our paper and hope you like this new version. We have added two sections into the appendix to address your concerns about killing the ambiguous neurons and the $\\\\ell_2$ IBP, and added more experimental results against stronger baselines into Table 1.\\n\\n> \\\"- You can definitely use IBP to verify properties against L2-adversaries. It is simply a matter of changing the way the bound is propagated through the first layer.\\\"\\nIndeed, you are correct: IBP can definitely certify other adversaries by modifying the first-layer propagation. We have corrected the sentence in our paper. However, given the established results, it seems that IBP cannot do it well in some important cases if we only modify the way the bound is propagated through the first layer, such as training robust convolutional networks against l2 adversaries. We have added some discussions in Appendix E to demonstrate that this approach cannot lead to better results (often much worse according to our estimation) than established results. We show that the certified accuracy of this first-layer-$\\\\ell_2$ IBP is at least 27.93 to 37.80 lower than established results of randomized smoothing at $\\\\epsilon_2$=0.25, and at least 18.03 lower (approximately) than the results of the approximated convex relaxation at $\\\\epsilon_2$=36/255 by Wong et al. (2018). We hope Appendix E addresses your concerns, but please let us know if we misunderstood your idea about adapting IBP to other adversaries.\\n\\n > \\\"- the convex relation of Ehlers (middle of figure 1) is not the optimal convex relaxation. It is optimal only if you assume that all ReLU non linearities are relaxed independently.\\\"\\nThank you for pointing this out. We have revised the captions of Figure 1 and other related contents to address this point. Yes, same as in (Salman et al. 2019), the optimal convex relaxation refers to the optimal convex relaxation of the nonlinear constraint $z_{i+1}=\\\\sigma(x_i)$ from each single layer, which is just like the middle of Figure 1 for unstable neurons in ReLU networks. In our previous version, we referenced it as \\u201cthe optimal relaxation for each $j\\\\in \\\\mathcal{I}_i$\\u201d, which is a bit vague but can still be correct if understood as the optimal relaxation for the single constraint given by $j\\\\in \\\\mathcal{I}_i$ if it is considered independently.\\n\\n> \\\"- Eq O is not the optimal convex relaxation, it's the hard non-convex problem.\\\"\\nSorry for the confusion. The typo has been corrected. What we really want to say is to investigate the gap between Eq. O and Eq. C.\\n\\n> \\\"- Equation C is the relaxed version of equation O, so they are only going to be equal if there is essentially no relaxation going on. \\u2026 The only case where this can happen is if all the terms in the sum over $I_i$ are zero, which is essentially going to mean that no ReLU is ambiguous. \\\"\\nNotice for both Eq. C and Eq. O, the input $x$ is a given constant. We are only enforcing the relaxed solution to be equal to the non-convex solution at the training samples, instead of letting these two solutions to be equivalent in the whole input space. There exist specific networks and (a significant portion of) samples where solutions to Eq. C and Eq. 
O are equivalent but ambiguous ReLUs still exist (see the illustrative example in Claim 1 of Appendix A), and by using such a regularizer we expect the bound to be tight for samples from the data distribution.\\n\\n> \\\"- Page 5, section 4.2: The authors suggest minimizing d \\u2026 this would amount to maximizing the lower bound (which all verified training already does), at the same time as minimizing the value of the margin (p_O) on a point of the neighborhood for which we want robustness (x + delta_0). Minimizing the value of the margin is the opposite of what we would want to do, so I'm not surprised by the observation of the author that this doesn't work well. \\\"\\nBy \\u201cdoesn\\u2019t work well\\u201d, we just want to say it does not work as well as the second regularizer when used separately; the first regularizer still improves the results upon the baseline. See the results from both Table 1 and Table 3 for the small model with $\\\\epsilon=2/255$ on CIFAR10. \\nMoreover, minimizing $d$, i.e., $\\\\min (p\\u2019_{\\\\mathcal{O}}-p^*_{\\\\mathcal{C}})$, is definitely different from both minimizing $p\\u2019_{\\\\mathcal{O}}$ and maximizing $p^*_{\\\\mathcal{C}}$ as you suggested, since the solution to the first optimization problem could be the case where both $p\\u2019_{\\\\mathcal{O}}$ and $p^*_{\\\\mathcal{C}}$ are large but the difference $(p\\u2019_{\\\\mathcal{O}}-p^*_{\\\\mathcal{C}})$ is small. \\nIn fact, since we are minimizing the robust cross entropy loss which pushes $p^*_{\\\\mathcal{C}}$ to larger values while minimizing this gap $d$, the model tends to converge to a state where both $p^*_{\\\\mathcal{C}}$ and $p\\u2019_{\\\\mathcal{O}}$ are large.\\nWe have also included a discussion based on the illustrative example at the end of Appendix A.\"}",
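For readers following the IBP portion of this exchange, here is a hedged numpy sketch of interval bound propagation through linear and ReLU layers, with the first layer handling an L2 ball as the reviewer suggests. It is our illustration only; the function names are hypothetical and the construction analyzed in the paper's Appendix E may differ in detail.

```python
import numpy as np

def first_layer_l2(W, b, x, eps):
    """Elementwise bounds of W @ x + b over the L2 ball ||delta||_2 <= eps around x."""
    center = W @ x + b
    radius = eps * np.linalg.norm(W, axis=1)   # per-row L2 norms of W
    return center - radius, center + radius

def linear_interval(W, b, lo, hi):
    """Standard IBP step: propagate an axis-aligned box through W @ x + b."""
    mu, r = (lo + hi) / 2.0, (hi - lo) / 2.0   # box center and per-coordinate radius
    center = W @ mu + b
    radius = np.abs(W) @ r
    return center - radius, center + radius

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

After the first layer, the bounds are an ordinary box, so the remaining layers use the unchanged L-infinity-style propagation — which is the sense in which only the first-layer step changes for an L2 adversary.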
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\nThe aim of the paper is to improve verified training. One of the problem with verified training is the looseness of the bounds employed so the authors suggest incorporating a measure of that looseness into the training loss. It is based on a reformulation of the relaxation of Weng et al.\", \"comments\": \"\", \"page_2\": \"\\\" In addition, a bound based on semi-definite programming (SDP) relaxation was developed and minimized as the objective Raghunathan et al. (2018). (Wong & Kolter, 2017) presents an upper bound\\\" -> citation format\", \"page_3\": \"It's a bit pedantic, but the convex relation of Ehlers (middle of figure 1) is not the optimal convex relaxation. It is optimal only if you assume that all ReLU non linearities are relaxed independently. See the work by Anderson et al. for some examples\\nPage 5, section 4: \\\"We investigate the gap between the optimal convex relaxationin Eq. O\\\" There is a bit of confusion in this section. Eq O is not the optimal convex relaxation, it's the hard non-convex problem.\\nSection 4.1 bothers me. Equation C is the relaxed version of equation O, so they are only going to be equal if there is essentially no relaxation going on. Saying that it's possible to check whether the equivalence exists is a bit weird. The only case where this can happen is if all the terms in the sum over I_i are zero, which is essentially going to mean that no ReLU is ambiguous. (or if the c W are all positives, but that would be problematic during the optimization of the intermediate bounds given that c would make them both signs then)\\nPage 5, section 4.2: The authors suggest minimizing d, the gap between the value of the bound obtained, and the value of forwarding the solution of the relaxation through the actual network. Essentially, this would amount to maximizing the lower bound (which all verified training already does), at the same time as minimizing the value of the margin (p_O) on a point of the neighborhood for which we want robustness (x + delta_0). Minimizing the value of the margin is the opposite of what we would want to do, so I'm not surprised by the observation of the author that this doesn't work well.\\nThe conclusion of the section that d can not be optimized to 0 also seems quite obvious if you think about what the problem is.\\n\\nSection 4.3:\\n\\\"the optimal solution of C can only be on the boundary of the feasible set.\\\" -> There is a subtlety here that I think the authors don't address. The three points they identify are the only feasible optimal solutions for solving a linear program over the feasible domain given by the relaxation of one ReLU but, when solving over the whole of C, the solution needs to be on the boundary of the feasible domain of C, which is larger than those three points.\\n\\nThe whole section is quite convoluted and makes very strong assumption. For Proposition 1, the condition x \\\\in S(\\\\delta) means that all the intermediate bound in the network must have been tight (so that the actual forwarding of an x can match the upper or lower bound used in the relaxation), and that the optimal solution of the relaxation requires all intermediate points to be at either at their maximum or their minimum. 
The only case I can visualise for this is essentially once again the case where there are no ambiguous ReLUs and the full thing is linear.\\n\\nRegarding the experiments section, it would be beneficial to include in table 1 the results of Gowal et al. (On the Effectiveness of Interval Bound propagation for Training Verifiably Robust Models) for better context. The paper is already cited so it should have been possible to include those numbers, which are often better than the ones reported here.\\nThe comparison is included in table 2, when the baseline is beaten, but this is using the training method of CROWN-IBP and it seems like most of the improvement is due to CROWN-IBP.\\n\\nTypos/minor details:\", \"page_8\": \"\\\"CORWN-IBP \\\"\", \"opinion\": \"I think that the analysis section is pretty confusing and needs to be re-thought. It provides a lot of complex discussion of when the relaxation will be exact, without really identifying that it will be when you have very few ambiguous ReLUs. I think that there might be a few parallels to identify between the regularizer proposed and the ReLU stability one of Xiao et al. (ICLR2019) from that aspect. The experimental results are not entirely convincing due to the lack of certain baselines.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\nThe paper proposes two new regularizers for adversarial robustness inspired by literature on verification of ReLU neural networks for resilience to epsilon perturbations using convex relaxations. The paper shows empirically that the proposed method leads to better robustness than previous works.\", \"strengths\": [\"The paper seems to have an interesting perspective (with the proposed looser relaxation) of the convex relaxation of an adversary adding noise at every layer in the network\"], \"weaknesses\": \"*Sec. 4.1: Eqn. (O) does not have a convex relaxation, it is the exact problem which is intractable. Why are we comparing the optimal values of p*(O) and p*(C)? The paper from Salman et.al. already shows that there is a convex relaxation barrier, which essentially corresponds to this difference. In general, in Sec. 4, it is often unclear whether when we talk about p(O) if we are referring to the unrelaxed original problem or the tightest convex relaxation. For example, at the start of Sec. 4.1, it seems like we are talking about the convex relaxation and then in Sec. 4.3 it seems like we are talking about the unrelaxed problem.\\n\\n*It is not clear how/ why the proposed method of relaxing (which by the way seems identical to Fast-Lin (Weng et.al.) is better than the optimal convex relaxation. Would this not lead to looser bounds? Is that the thing we are looking to investigate? Making that more clear would be useful. Perhaps it would be good to argue the proposed regularizer in this work cannot be constructed with the optimal convex relaxation. Is that true? A discussion on this would be helpful.\\n\\n* The crux of the contribution seems to rest on the premise that identifying the optimal perturbation in the input space with the relaxed model, and then computing the activations with respect to that and forcing the forward pass to saturate near the margins of the relu polytope (relaxation) is a good idea. In general, it seems very unclear why this should work based on the evidence presented in the paper. Specifically with the relaxation, it might not even be guaranteed (as far as I understand) that the value of \\\\delta_0^* that is found from problem C is even going to lie inside the L\\\\inf norm ball around the point x, for example. Thus it is not clear to me if this is an approach for verification or a regularizer based on verification.\\n\\n* Ultimately, the value of the approach in this context (as per my understanding) comes from the experiments and the results which show that there is increased robustness. It would be great to clarify a couple of details in the experiments:\\n1. Is the method of Wong et.al. using the looser convex relaxation (used here) or the tight convex relaxation when reporting the numbers in Table. 1? \\n2. If the optimal convex relaxation can be used to construct the same regularizer as the one proposed here, it would be good to evaluate how well that does.\\n\\nOverall, I am not an expert in the area but a lot of details from the writing (such as point 1 under weakness) and the theoretical justification of the regularizer are unclear to me. Thus given these (perceived) weaknesses I would lean towards weak rejection. 
Clarifications on these points would help me revise my score.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"Strengths:\\nThis work proposed two regularizers that can be used to train neural networks that yield convex relaxations with tighter bounds.\\nThe experiments display that the proposed regularizations result in tighter certification bounds than non-regularized baselines.\\nThe problem is interesting, and this work seems to be useful for many NLP pair-wise works.\", \"weaknesses\": \"Some presentation issues.\\nThe dataset, MNIST, is not good enough for a serious research. \\nMore datasets need to be added to the experiments in this paper.\", \"comments\": \"This paper proposes two regularizers to train neural networks that yield convex relaxations with tighter bounds. \\n\\nOverall, the paper solves an interesting problem. Though I did not check complete technical details, the extensive evaluation results seem promising. \\n\\n1. There are some presentation issues that can be addressed. For example, on page 8, the sentence of \\u201cthe family of 10small\\u201d misses a blank space.\\n\\n2. In the experiments, the dataset is not a good one for evaluating the performance of the proposed idea.\\n\\nIn conclusion, at this stage, my opinion on this paper is Weak Accept.\"}"
]
} |
HJgKYlSKvr | Unsupervised Generative 3D Shape Learning from Natural Images | [
"Attila Szabo",
"Givi Meishvili",
"Paolo Favaro"
] | In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, and ego-motion during the training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image gen- eration is split into 2 stages. In the first stage a generator network outputs 3D ob- jects. In the second, a differentiable renderer produces an image of the 3D object from a random viewpoint. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide analysis of our learning approach, expose its ambiguities and show how to over- come them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset. | [
"unsupervised",
"3D",
"differentiable",
"rendering",
"disentangling",
"interpretable"
] | Reject | https://openreview.net/pdf?id=HJgKYlSKvr | https://openreview.net/forum?id=HJgKYlSKvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"u1XPecWHf",
"rkg_nVQjoS",
"B1lAOE7ooB",
"r1g4847sjr",
"B1xF7NmojH",
"SylVFp23KH",
"rkl8cfFotS",
"SJlW0u2EFS",
"HkgFYmnmKS",
"Hygnc4E0dH",
"HJloQWkZur",
"rkeLjGjx_H",
"ryg5_65x_B",
"HkgODgJlOH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_review",
"comment",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1576798749272,
1573758127961,
1573758070158,
1573758027777,
1573757985185,
1571765627787,
1571685005833,
1571240136623,
1571173249400,
1570813076328,
1569939746701,
1569923741858,
1569922418470,
1569874016356
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2443/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2443/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"~Tushar_Jain1"
],
[
"ICLR.cc/2020/Conference/Paper2443/AnonReviewer1"
],
[
"~Bernhard_Egger1"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2443/Authors"
],
[
"~Bernhard_Egger1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a GAN approach for unsupervised learning of 3d object shapes from natural images. The key idea is a two-stage generative process where the 3d shape is first generated and then rendered to pixel-level images. While the experimental results are promising, the experimental results are mostly focused on faces (that are well aligned and share roughly similar 3d structures across the dataset). Results on other categories are preliminary and limited, so it's unclear how well the proposed method will work for more general domains. In addition, comparison to the existing baselines (e.g., HoloGAN; Pix2Scene; Rezende et al., 2016) is missing. Overall, further improvements are needed to be acceptable for ICLR.\", \"extra_note\": \"Missing citation to a relevant work\\nWang and Gupta, Generative Image Modeling using Style and Structure Adversarial Networks\", \"https\": \"//arxiv.org/abs/1603.05631\", \"title\": \"Paper Decision\"}",
"{\"title\": \"detailed answers to reviewers\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your feedback. Before we address the reviewers concerns, we summarize our contributions:\\n1) We address the problem of 3D shape learning from natural images in a fully unsupervised way. \\nThis has never been solved before in our general settings (below we clarify in detail the reviewers concerns on this claim).\\n2) Experiments show our method produces high quality results, which demonstrates the feasibility of \\nthis novel task (in the revised paper we provide additional experiments that reviewers requested).\\n3) We provide theoretical analysis to formalize the conditions under which 3D can be learned without supervision in our settings.\\n\\nTherefore, we believe that our paper would be of great interest to the ICLR community.\\n\\nHere, I address the main complaints:\\n\\n1) NOVELTY:\\nThere are many papers on \\\"unsupervised\\\" learning, generative 3D models and disentangling. But none of them works under our general settings.\", \"we_added_the_mentioned_relevant_papers_to_the_table\": \"[1] Rezende et. al., \\\"Unsupervised Learning of 3D Structure from Images\\\", NIPS 2016.\\n\\t- they learn 3D of synthetic images (we do it for natural images)\\n[2] Rajeswar et. al. \\\"Pix2Scene: Learning Implicit 3D Representations from Images\\\"\\n\\t- they learn 3D of synthetic images (we do it for natural images)\\n[3] Nguyen-Phuoc, et. al., \\\"HoloGAN: Unsupervised learning of 3D representations from natural images\\\", ICCV 2019.\\n\\t- they do disentangling of viewpoint and object, which allows rendering the object from different viewpoints\\n\\t- their model does not learn an explicit 3D shape representation, but a latent one\\n\\t- their model does not provide a depth map or normals, and cannot be readily used in traditional graphics pipeline\\n\\t- their model does not guarantee that the rendered views are consistent, i.e. there exists a 3D object that has those views\\n\\t- the efficacy of their method has not been proven theoretically\\n\\n2) EXPERIMENTS:\\nWe provide the requested (new) experiments:\\n- FID numbers of the several generators\\n- More detailed ablations (samples in the supplementary)\\n- a trained model on 2 LSUN categories\\n\\n3) THEORY:\\nWhile empirical evidence is necessary to validate an approach, theory is fundamental to point out\\nshortcomings of a method and therefore its further development.\\nIn practical terms, our analysis was paramount in showing us that the method could potentially work \\nand in showing us how to handle the viewpoint selection and the background layer.\"}",
"{\"title\": \"answers\", \"comment\": \"\\\"only on a single dataset\\\"\\nWe added results on LSUN categories.\\n\\n\\\"not compared to any baselines\\\"\\nWe are the fist one to solve this problem, thus a baseline is not available.\\n\\n\\\"only qualitative, consisting of a single example per ablation ... corresponding results is unclear\\\"\\nWe added detailed ablations and quantitative evaluation and explanations.\\n\\n\\\"the distinction between supervised, unsupervised and weakly supervised learning in section 2 in unnecessary\\\"\\nWe find it necessary as the term \\\"unsupervised\\\" is used in many different ways in the literature.\", \"by_clearly_defining_the_terms_we_highlight_that\": \"- the problem we solve is more difficult than the problems solved by prior work\\n- based on our stricter definition of supervision signal, we provide the first \\\"fully\\\" unsupervised solution\\n\\n\\\"unnecessary assumptions ... the renderer R doesn't need to be able to generate perfect images\\\"\\nThe theory investigates if the solution of the optimization task is the desired one.\\nIn our work this analysis requires an ideal renderer R.\\nOnce the theory has been proved, one may also achieve a suboptimal solution \\nby using approximations (eg, an imperfect renderer).\\n\\n\\\"incorrect claims ... Theorem 1 is also incorrect since it does not take e.g. mode collapse into account\\\"\\nMode collapse is taken into account at page 5 under the Assumption 4 of the original submission. \\nAssumption 4 states that the GAN training is perfect, \\\\ie, there is no mode collapse, where the training objective \\nis defined in terms of expectations and not in terms of the training data. \\n\\n\\\"differentiable renderer ... comparisons with baselines\\\"\\n\\nWe added comparisons with the baseline crisp renderer, but further evaluations are beyond the scope of this paper.\"}",
"{\"title\": \"answers\", \"comment\": [\"HoloGAN's representation is latent, and not explicit, thus they do not output any 3D surfaces, unlike our model.\", \"We added results on LSUN categories.\", \"We added more ablation studies.\", \"We will share our code upon publication\"]}",
"{\"title\": \"answers\", \"comment\": [\"We provide FID numbers in the revised paper.\", \"We provide results on LSUN categories. The results are not as good as on the faces, but:\", \"we did not have enough time to tune the algorithm to this new data,\", \"but we believe that eventually it would work well also here as we observe similar\", \"challenges to the faces dataset at the early stages of the algorithm development\", \"LSUN includes outliers, cropped objects, occlusions, high variation in shape,\", \"position and scale\", \"We added detailed ablations.\", \"We added HoloGAN to the related work (also see my answer to all reviewers and the updated paper)\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"I thank the authors for the rebuttal and the additional experiments. The additions do partially address my concerns, although not entirely. For instance, the experiments on non-face classes are very preliminary and it is unclear if they work at all (no other views shown). I hope the authors are right that the method will work on other classes after some tuning, but this is not demonstrated in the paper. Overall, I am quite in a borderline mode. I think the paper looks promising and after further improving the experimental evaluation it can become a great publication. But for now the experiments, especially the new ones, look somewhat incomplete and rushed, more suitable for a workshop paper. Therefore, I still lean towards rejection.\\n\\n---\\n\\nThe paper proposes an approach to learning the 3D structure of images without explicit supervision. The proposed model is a Generative Adversarial Network (GAN) with an appropriate task-specific structure: instead of generating an image directly with a deep network, three intermediate outputs are generated first and then processed by a differentiable renderer. The three outputs are the 3D geometry of the object (represented by a mesh in this work), the texture of the object, and the background image. The final output of the model is produced by rendering the geometry with the texture and overlaying on top of the background. The whole system can be trained end-to-end with a standard GAN objective. The method is applied to the FFHQ dataset of face images, where it produces qualitatively reasonable results.\\n\\nI am in the borderline mode about this paper. On one hand, I believe the task of unsupervised learning 3D from 2D is interesting and important, and the paper makes an interesting contribution in this direction. On the other hand, the experimental evaluation is quite limited: the results are purely qualitative, on a single dataset, and do not contain much analysis of the method. It would be great if the authors could add more experiments to the paper during the discussion phase.\", \"more_detailed_comments\": \"\", \"pros\": \"1) The paper is presented well, is easy to read. I like the detailed table with comparison to related works, and a good discussion of the limitations of the method and the tricks involved in making it work. I also like section 4 clearly discussing the assumptions of the work, although I think it could be shortened quite a bit.\\n2) The proposed method is reasonable and seems to work in practice, judging from the qualitative results.\", \"cons\": \"1) The experiments are limited. \\n1a) There are no quantitative results. I understand it is non-trivial to evaluate the method on 3D reconstruction, although one could either train a network inverting the generator, or, perhaps simpler, apply a pre-trained image-to-3D network to the generated images. But at least some image quality measures (FID, IS) could be reported. \\n1b) The method is only trained on one dataset of faces. It would be great to apply the method to several other datasets as well, for instance, cars, bedrooms, animal faces, ShapeNet objects. This would showcase the generality of the approach. 
Otherwise, I am worried the method is fragile and only applies to very clean and simple data. Also, if the method is only applied to faces, it makes sense to mention faces in the title. \\n1c) It would be very helpful to have more analysis of the different variants of the method, ideally with quantitative results (again, at least some image quality results). Figure 3 goes in this direction, but it is very small and does not give a clear understanding of the relative performance of diferent variants.\\n\\n2) A missing very relevant citation of HoloGAN by Nguyen-Phuoc et al. [1]. It is not yet published, but has been on arXiv for some time. I am a bit unsure about the ICLR policy in this case (this page https://iclr.cc/Conferences/2019/Reviewer_Guidelines suggests that arXiv paper may be formally considered prior work, in which case it should be discussed in full detail), but at least a brief mention would definitely be good.\\n\\n[1] HoloGAN: Unsupervised learning of 3D representations from natural images. Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang. arXiv 2019.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"SUMMARY: A new modification to the rendering mechanism in a differentiable renderer to generate 3D images, trained in an unsupervised manner on 2D images of faces\", \"claims\": [\"train a generative model in an unsupervised way on 2D face images,\", \"by generating 3D representation of [shape, texture, background] and feeding that to a differentiable renderer\", \"modify the rendering mechanism to be differentiable wrt vertices of the triangular mesh (in addition to the texture)\", \"curb the problems of training by using shape image pyramid, object size constraint\"], \"lit_review\": \"Well done, sufficient summarization of past work. But not \\\"first\\\":\\nit would be pertinent to mention the ICCV 2019 paper \\\"HoloGAN: UNSUPERVISED LEARNING OF 3D REPRESENTATIONS FROM NATURAL IMAGES\\\" (https://arxiv.org/abs/1904.01326), which tackles the exact same problem. It does not use a triangular mesh representation, or a differentiable renderer, instead it uses a 3D feature representation and a neural renderer. However, the problems tackled are very close to avoid mentioning this paper.\\n\\nHence, it might not be good to claim\\n- in the abstract: \\\"the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way\\\",\\n- in introduction: \\\"For the first time, to the best of our knowledge, we provide a procedure to build a generative model that learns explicit 3D representations in an unsupervised way from natural images\\\",\\n- and similar claims in other places.\\n\\nAlso, Pix2Scene (https://openreview.net/forum?id=BJeem3C9F7) also has similar ideas, although they tackled primitive shapes and not faces.\", \"decision\": \"This paper has very promising results.\\nAlthough it is limited to faces, which the community knows is something GANs are good at modeling because of the inherent structure, it is nevertheless a relevant piece of work in modelling 3D scenes in a graphics way and then training using adversarial learning. I am particularly impressed by the renderings of the depth and texture, and would be interested to explore that area further.\\n\\nHowever, it is more pertinent to check how the model performs objects more complicated than faces. A very simple experiment is to try this on ImageNet images, which are also centered and aligned. This would help investigate the possibility of extending this method to more complicated objects than faces.\\n\\nI would suggest to maybe put more focus on the fact that you have used the traditional graphics pipeline and integrated that into adversarial learning, as opposed to dealing with just weights and biases. That is indeed significant (in my opinion).\\n\\nKnowing that most GAN training time is spent in overcoming a lot of failures, it would be great if the authors can summarize the failure cases and elaborate on the experiments performed to overcome those failures. 
This was briefly touched upon in Section 7, but it would be great if they could elaborate more on them possibly in the appendix.\\n\\nIt would be great if the authors can share their code, there was no mention of any possibility of this.\", \"additional_feedback\": \"\", \"page_5\": \"...an added constrain*T* on m in the optimization...\\n...fooled by an inverted dept*H* image...\", \"page_6\": \"...rendered <remove>attribute and</remove> depth, attribute and alpha map...\"}",
"{\"comment\": \"Dear Tushar Jain,\\n\\nThank you for your question. Indeed, it is unusual to leave out quantitative comparisons from a machine learning paper. We will explain our reasons below.\", \"quantitative_evaluations\": [\"direct 3D evaluation: Our model can be evaluated quantitatively in 3D reconstruction tasks and semantic key-point matching once it is inverted. In our case this means we should find a mapping from images to the latent representation after which the generator reconstructs the 3D shape.\", \"Inverting our generative model is out of the scope of this paper as it is not a straightforward task and it needs extensive experimental evaluation. We aim to do that in the future using the insights of the latest research papers on inverting GANs. Then we can compare our method with many of the inverted 3DMM models and other supervised techniques.\", \"indirect 2D rendering evaluation: We did not include 2D metrics in the evaluation because our main focus was learning the 3D shape of objects in the training set. We can add inception score and FID in the final version and compare our work with 2D generative models.\"], \"comparisons_with_prior_work\": \"In Table 1 we compare our method with prior SOTA in terms of supervision signals and capabilities. We did not compare our method in terms of performance metrics for the reasons discussed above. Also note that our work is the first unsupervised 3D method for natural images, so it would be unfair to expect it to perform at the level of SOTA supervised methods.\", \"title\": \"answers\"}",
"{\"comment\": \"Hi there,\\n\\nIs there any Quantitative Evaluation (some evaluation metric) and comparison with prior work?\\n\\nThanks.\", \"title\": \"Quantitative Evaluation and comparison with prior work?\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper tries to solve the problem of recovering the 3D structure from 2D images. To this end, it describes a GAN-type model for generating realistic images, where the generator disentangles shape, texture, and background. Most notably, the shape is represented in three dimensions as a mesh made of triangles. The final image is rendered by a fixed differentiable renderer from a randomly-sampled viewpoint. This allows the model to learn to generate realistic 3D shapes, even though it is trained only using 2D images.\\n\\nThe authors introduce a novel renderer based on the Lambertian image model, which is differentiable not only with respect to the texture but also with respect to position on mesh vertices, which allows better shape learning compared to prior art. Authors also identify some learning ambiguities: objects can be represented by the background layers, and sometimes object surface contains high-frequency errors. These are addressed by generating a bigger background and randomly selecting its part as the actual background, and by averaging shapes generated at different scales to smoothen the surface of generated objects, respectively. Authors do mention pitfalls of the model in the conclusion: fixed-topology mesh, the background is not modelled as a mesh, the model works only with images containing a single centred object, the image model is Lambertian.\\n\\nI think that the approach is extremely interesting, addresses an important problem, and shows promising results. However, I vote to REJECT this paper, because the evaluation is insufficient, and the paper lacks clarity.\\n\\nThe approach is evaluated only on a single dataset and is not compared to any baselines. While results from ablations of the model are provided, they are only qualitative, consisting of a single example per ablation, and are hard to read and interpret---in particular, the provided description of the ablations and corresponding results is unclear. There are no quantitative results in the paper, and it is difficult for me to judge how good the method is given only qualitative examples from a single dataset.\\n\\nAs for clarity, I think that the distinction between supervised, unsupervised and weakly supervised learning in section 2 in unnecessary, does not add value to the paper, and can confuse the reader. Section 4 contains some unnecessary assumptions and incorrect claims. For example, the renderer R doesn't need to be able to generate perfect images for the approach to work; I think Theorem 1 is also incorrect since it does not take e.g. mode collapse into account, which prevents the learned distribution from being the same as the data distribution. Section 5 is very unclear, with practically no explanation for equations (6-11), which makes them very difficult to decipher.\\n\\nThe related works section is quite thorough, but the authors missed two extremely relevant papers: [1] and [2], which do a very similar thing and contain some of the ideas used in this paper.\\n\\nI think the paper would be very valuable if the differentiable renderer was clearly explained and more evaluation and comparisons with baselines were provided.\\n\\n[1] Rezende et. 
al., \\\"Unsupervised Learning of 3D Structure from Images\\\", NIPS 2016.\\n[2] Nugyen-Phuoc et. al., \\\"HoloGAN: Unsupervised learning of 3D representations from natural images\\\", ICCV 2019.\\n\\n\\n=== UPDATE ===\\nI appreciate adding more examples for ablations, FID scores and the LSUN dataset experiments. However, I still think that the exposition could be significantly improved, as eg. the description of the differentiable renderer is difficult to follow -- the equations should be better explained. Also, Figure 3. is difficult to understand; and the caption saying that \\\"one rectangle overlaps another\\\" is not helpful. \\n\\nI think this is really cool work, but due to the lack of clarity, I think it shouldn't be accepted at this conference. Having said that, I am increasing my score to \\\"weak reject\\\" because of the improvements.\"}",
"{\"comment\": \"Thanks for the clarification - great work!\", \"title\": \"Thank you\"}",
"{\"comment\": \"Dear Bernhard Egger,\\n\\nThank you for your interest in our work. You can find our answers below\\n\\n- Initialization: You can find the answer in my other comment.\\n\\n- Correspondences: One can establish semantic correspondences by using the vertex IDs, i.e the same vertex should represent for example the corner of the eye in multiple generated samples. However these correspondences are not guaranteed by the theory because of the ambiguity in the parametrization. When interpolating between different samples, the location of semantic features might move smoothly from one vertex to another. The random samples show that in practice the vertices are more or less aligned with the semantic keypoints, as the the same facial features are generated on the same places on the texture map across instances.\\nOne can test how well our method learns correspondences quantitatively by inverting the generator (and renderer) and backproject the ground truth keypoint locations to the mesh. Inverting the generator is a work in progress, and we aim to show results on correspondences in future work.\\n\\n- Related work: Thank you for pointing us to these relevant papers.\", \"title\": \"answers\"}",
"{\"comment\": \"Dear Readers and Reviewers,\\n\\nWe would like to correct some typos and add some details we left out from the paper.\\n\\n- Section 5: The fixed renderer equations are:\\n\\n\\\\begin{align}\\nz_c(\\\\vp, T) = &~\\\\textstyle \\\\1\\\\{d(\\\\vp, T)=0\\\\} \\\\sum_{i=1}^3 b_i(\\\\vp,T) z_i(T) + (1-\\\\1\\\\{d(\\\\vp, T)=0\\\\}) z_{far} \\\\\\\\\\na_c(\\\\vp) = &~ \\\\textstyle\\\\1\\\\{ z_c(\\\\vp, T_c^*(\\\\vp)) < z_{far})\\\\} \\\\sum_{i=1}^3 b_i(\\\\vp,T^*_c(\\\\vp)) a_i(T^*_c(\\\\vp)) \\\\\\\\\\n\\\\alpha_c(\\\\vp) = &~ \\\\textstyle\\\\1\\\\{ z_c(\\\\vp, T_c^*(\\\\vp)) < z_{far})\\\\}\\n\\\\end{align}\\n\\\\begin{align}\\nz_s(\\\\vp, T) =~& \\\\textstyle \\\\1\\\\{0<d(\\\\vp, T)<B\\\\} \\\\sum_{i=1}^3 b_i(\\\\vp^*(T),T) z_i(T) + \\\\\\\\\\n& \\\\quad\\\\quad\\\\quad(1-\\\\1\\\\{ 0 < d(\\\\vp, T)<B\\\\}) z_{far} + \\\\lambda_{slope} d(\\\\vp, T) \\\\nonumber\\\\\\\\\\na_s(\\\\vp) =~& \\\\frac{\\\\sum_{T} \\\\1 \\\\{ z_s(\\\\vp, T) < z_c(\\\\vp, T^*_c(\\\\vp)) \\\\} \\\\sum_{i=1}^3 b_i(\\\\vp^*(T),T) a_i(T) } \\n{ \\\\sum_{T} \\\\1 \\\\{ z_s(\\\\vp, T) < z_c(\\\\vp, T^*_c(\\\\vp)) \\\\} } \\\\\\\\\\n\\\\alpha_s(\\\\vp) =~& \\\\textstyle\\\\max_{T} \\\\1 \\\\{ z_s(\\\\vp, T) < z_c(\\\\vp, T^*_c(\\\\vp)) \\\\} (1-d(\\\\vp,T) )/B,\\n\\\\end{align}\\n\\n- Section 7: the shape image is downsampled, not the texture image\\n- Section 7: the discriminator architecture is the vanilla StyleGAN and trained with default settings.\\n- Section 7: Initialization: We multiplied the shape image (the raw output of the generator) by 0.002, which effectively sets a relative learning rate, then added s_0 to the output as an initial shape. We set s_0 to a sphere with a radius r = 0.5 and centered at the origin.\", \"title\": \"some details and fixed typos\"}",
"{\"comment\": \"Just some comments, not a proper review:\\nThis paper looks very interesting and novel!\\nI did not read it in full detail yet - my main question would be in the direction of how the learning is initialized?\\nAnd, can you reasonably interpolate between instances? Does the generator learn meaningful correspondences?\\n\\nI think some of the following works might be worth adding to the table?\\n- Thomas J Cashman and Andrew W Fitzgibbon. 2012. What shape are dolphins? building 3d morphable models from 2d images. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 1 (2012), 232\\u2013244.\\n\\n- Ayush Tewari, Florian Bernard, Pablo Garrido, Gaurav Bharaj, Mohamed Elgharib,\\nHans-Peter Seidel, Patrick P\\u00e9rez, Michael Zollhoefer, and Christian Theobalt. 2019.\", \"fml\": \"Face Model Learning from Videos. In Proc. IEEE Conference on Computer Vision\\nand Pattern Recognition (CVPR).\\n\\n- Luan Tran, Feng Liu, and Xiaoming Liu. 2019. Towards High-fidelity Nonlinear 3D\\nFace Morphable Model. In Proc. IEEE Conference on Computer Vision and Pattern\\nRecognition (CVPR).\", \"title\": \"initialization and literature\"}"
]
} |
S1eYKlrYvr | Diagnosing the Environment Bias in Vision-and-Language Navigation | [
"Yubo Zhang",
"Hao Tan",
"Mohit Bansal"
] | Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are extremely useful in navigating new environments that the agent has not seen previously. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that it is neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features, that directly affects the agent model and contributes to this environment bias in results. According to this observation, we explore several kinds of semantic representations which contain less low-level visual information, hence an agent learned with these features generalizes better to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gap between seen and unseen on multiple datasets (i.e., 8.6% to 0.2% on R2R, 23.9% to 0.1% on R4R, and 3.74 to 0.17 on CVDN) and achieve unseen results competitive with previous state-of-the-art models. | [
"vision-and-language navigation",
"generalization",
"environment bias diagnosis"
] | Reject | https://openreview.net/pdf?id=S1eYKlrYvr | https://openreview.net/forum?id=S1eYKlrYvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"C3nTMfm9pg0",
"EiEgGbUbFS",
"3aIN3Jc5OE",
"BylcucJ2sS",
"SyefQahjiS",
"ryx96Khsjr",
"BJgFU69ijB",
"Bkg8e9SqsS",
"rylViYHcsr",
"HJg32DScjS",
"SJxImLB9oH",
"HyeuvT_0Kr",
"S1gbNt0nYr",
"r1l12CqnYr"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1588965099193,
1579813670134,
1576798749242,
1573808753858,
1573797146362,
1573796290323,
1573789009343,
1573702125552,
1573702044358,
1573701555650,
1573701150075,
1571880287823,
1571772712724,
1571757735322
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2442/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2442/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2442/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"(FYI) Updated version of this paper has been accepted to IJCAI 2020\", \"comment\": \"IJCAI 2020 arxiv link: https://arxiv.org/abs/2005.03086\"}",
"{\"title\": \"Final comments and clarification\", \"comment\": \"We are the authors of the submission \\u201cDiagnosing the Environment Bias in Vision-and-Language Navigation\\u201d. First, we would like to thank PC/AC\\u2019s effort for managing the reviewing process of our paper and all the reviewers\\u2019 efforts and suggestions. However, just to clarify, we address below the concerns/misunderstanding raised by a reviewer regarding tuning the models on the \\u201cunseen-validation\\u201d data and not including \\u201ctest\\u201d results in the paper:\\n\\n(1) First, to address the concern about the performance on the test set, note that due to the #submissions limit of the testing server (https://evalai.cloudcv.org/web/challenges/challenge-page/97/overview), which limits every paper to test the model only a few times, hence all existing analysis papers in this area ([Thomason et al., NAACL 2019a] and [Hu et al., ACL 2019]) did not report the test results since analysis papers usually need multiple approaches to be tested and compared. For the same reason, we didn\\u2019t report test numbers in our original paper (and instead we use the val-unseen set, which could be viewed as a public test set that allows checking the generalization to new unseen environments that are separate from the training and val-seen environments). And note that all our learned and non-learned semantic-feature methods are fairly tuned on the same val-unseen.\\n\\nHaving said that, during the rebuttal period, we already showed the result of the selected methods (i.e., learned-semantic-feature method) on the test set (which in turn again has completely different environments from val-unseen) in the reply to AnonReviewer3 (https://openreview.net/forum?id=S1eYKlrYvr¬eId=BylcucJ2sS) and the generalization improvement on test is still substantial, showing the effectiveness of the initial proposed solution. \\n\\n(2) Second, tuning the models on the validation-unseen set is a fair and comparable approach in the tasks we are studying, which is adopted by the majority of the previous works. We have the confirmation from the author of the original Room-to-Room dataset in May 2018 to \\u201cchoose the best model using the val-unseen split\\u201d.\\n\\n(3) Third, to further prove the validity of our \\u2018learned-semantic\\u2019 method, we re-train our MLP (which learns the semantic features) and tune it on val-seen (as opposed to the original version of tuning on val-unseen). Specifically, we randomly split the training environments and use 10 scenes as the new validation set and the rest as the new training set. The results with the re-trained features are consistent with our original conclusion (shown as the table below), that the \\u2018learned semantic\\u2019 features decrease the performance gaps while providing good val-unseen results. 
It also indicates that the tuning method used in our paper is not the major factor that contributes to our model\\u2019s performance.\\n\\nDataset method Val-seen Val-unseen Gap\\n----------------------------------------------------------------------------------\\nR2R Our baseline 56.1 47.5 8.6\\n Old learned-semantic 53.1 53.3 0.2\\n New learned-semantic 52.6 53.3 0.7\\n----------------------------------------------------------------------------------\\nR4R Our baseline 54.6 30.7 23.9\\n Old learned-semantic 36.2 36.1 0.1 \\n New learned-semantic 38.0 34.3 3.7\\n---------------------------------------------------------------------------------\\nCVDN (**updated with panoramic-view)\\n Our baseline 6.60 3.05 3.55\\n Old learned-semantic 5.74 4.31 1.43 \\n New learned-semantic 5.82 4.42 1.41\\n\\n(4) Lastly, our paper is primarily focusing on the diagnosis of the \\u2018performance gap\\u2019 and the generalization issue in the vision-and-language navigation area and not trying to beat state-of-the-art or play the leaderboard game. In Sec. 3 of our paper, we discussed our observations of \\u2018performance gap\\u2019 in VLN area, and in Sec. 4 and 5, we designed elaborated diagnosis experiments to analyze where this \\u2018bias\\u2019 is located and the reason that causes this phenomenon. To finalize the above analysis, provide initial possible solutions hopefully leading to useful future works, in Sec. 6 we introduce three kinds of semantic features. All the result numbers reported in the paper support our observations and analysis, and the test result of one of the semantic features concerned by the reviewers is also consistent with our story of helping generalization, and the re-split results are also consistent with our original story.\"}",
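To make point (3) above concrete, here is a minimal sketch of the described environment re-split: holding out 10 training scenes as a new validation set. The scene IDs, the count of 61 training scans, and the seed are illustrative assumptions rather than the authors' actual code:

```python
import random

def resplit_environments(train_scene_ids, n_val_scenes=10, seed=0):
    """Hold out n_val_scenes training environments as a new validation
    set; the remaining scenes form the new training set."""
    rng = random.Random(seed)
    scenes = list(train_scene_ids)
    rng.shuffle(scenes)
    return set(scenes[n_val_scenes:]), set(scenes[:n_val_scenes])

# Illustrative: an R2R-style setup with 61 training house scans (assumed).
train_scenes = [f"scene_{i:02d}" for i in range(61)]
new_train, new_val = resplit_environments(train_scenes)
assert len(new_val) == 10 and not (new_train & new_val)
```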
"{\"decision\": \"Reject\", \"comment\": \"The submission is a detailed and extensive examination of overfitting in vision-and-language navigation domains. The authors evaluate several methods across multiple environments, using different splits of the environment data into training, validation-seen, and validation-unseen. The authors also present an approach using semantic features which is shown to have little or no gap between training and validation performance.\\n\\nThe reviewers had mixed reviews and there was substantial discussion about the merits of the paper. However, a significant issue was observed and confirmed with the authors, relating to tuning the semantic features and agent model on the unseen validation data. This is an important flaw, since the other methods were not tuned in this way, and there was no 'test' performance given in the paper. For this reason, the recommendation is to reject the paper. The authors are encouraged to fairly compare all models and resubmit their paper at another venue.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"About test-unseen results\", \"comment\": \"Thanks for your quick reply as well! The test-unseen split (named as 'test' in some materials) is hidden and the test-unseen environments are different from the val-unseen environments. To avoid \\u2018fine-tuning\\u2019 on the hidden test set, the testing server only allows very submission. Thus previous analysis papers [Thomason et al., NAACL 2019a] (Table 1&2 in https://arxiv.org/abs/1811.00613) and [Hu et al., ACL 2019] (Table 1, 2, &3 in https://arxiv.org/pdf/1906.00347.pdf) only show val-seen and val-unseen results without testing the model performance on test-unseen data because analysis papers usually have multiple results. We do not test and show the test-unseen results in our paper for the same reason.\\n\\nIn order to address the \\u2018potential methodological concern\\u2019 as you pointed out, since the best way to verify the validity of our \\u2018learned semantic features\\u2019 is comparing its results with the \\u2018baseline ResNet\\u2019 results on test unseen, we submit the predictions of our neural agent model with ResNet features (\\u2018Baseline-ResNet\\u2019) and \\u2018learned semantic features\\u2019 (\\u2018Semantic-Learned\\u2019) to the test server. The success rates are 46.0% and 51.2%. The 5.2% improvement on this \\u2018true-unseen\\u2019 test set shows that our learned semantic features significantly outperforms the original ResNet features.\"}",
"{\"title\": \"Further clarification about training and hyperparameter tuning\", \"comment\": \"Thanks for your quick reply! Both of the papers you refer to in the previous comment, [Fried et al., NeurIPS 2018] and [Tan et al., NAACL 2019], include performance on an independent Test set (also from unseen environments). This is so that it is clear that the performance of the network is not being fine-tuned (via the hyperparameters) to the validation datasets, which are being used during construction of the network. Do you have results on such a Test set available? If not, this is a potential methodological concern.\"}",
"{\"title\": \"Hyper-parameters of the MLP are tuned based on val unseen\", \"comment\": \"Thanks! We tune the hyper-parameters of the MLP (which learns the semantic features) w.r.t the loss on val-unseen environments for two reasons:\\n1. The MLP generates learned semantic features as the input of the neural agent model. The agent models are tuned on the val-unseen environments to help build generalizable agents, we thus also tuned the MLP for this purpose since it could be considered as a part of the neural agent model. \\n2. Val-seen and training data of the MLP (semantic features) are from the same set of training environments. \\n\\nBy the way, tuning an additional module (such as an MLP in our paper) based on val unseen is also adopted by multiple previous works, e.g., [Fried et al., NeurIPS 2018] and [Tan et al., NAACL 2019] tune the performance of an additional speaker module on the val-unseen set. This strategy is considered as fairly comparable with other methods\"}",
"{\"title\": \"Clarification question about hyperparameter tuning\", \"comment\": \"Thank you for taking the time to prepare such thorough responses to my comments. I am still considering some of the earlier comments, though I have a clarification question about your comment: \\\"The drop of val-seen results is a combination...\\\" Could you please clarify (in the comments) how the hyper-parameter tuning was performed? My assumption was that you trained on the training data, and tuned the parameters to maximize performance on Val Seen (and not Val Unseen); is this correct?\"}",
"{\"title\": \"Response to Blind Review #1:\", \"comment\": \"We thank the reviewer for appreciating our detailed analysis and contributions to the community. To make sure the reproducibility of this paper and lead to future research in VLN, we will publicly release the features and our code when the anonymous period ends. Furthermore, we will also provide a unified script that could convert most existing Github projects with Matterport3D environments to use the semantic features. Some example projects are here (sorted by time):\", \"https\": \"//github.com/mmurray/cvdn\\nAfter running the script, all the code above will run with its own model but with our provided features. \\n\\n- Details of learned features:\\nThanks for the suggestions. We have clarified the \\u2018learned semantic features\\u2019 in the Appendix of the revised pdf version. The multi-layer perceptron is a separate module from the neural agent model, which has three FC layers with ReLU activation, projecting 2048-ResNet features to 42-dimension semantic features. We trained it to directly predict the area of each semantic class (as the \\u201cground truth\\u201d semantic features) instead of building the semantic segmentation first. Then the model is frozen and used to generate the semantic features of seen and unseen environments, which will be the input of the navigational agent in replace of ResNet features. We do not consider it as an auxiliary task but it would be a good future direction to work on.\"}",
"{\"title\": \"Response to Blind Review #3 (part 2 of 2):\", \"comment\": \"- About the word 'bias':\\nThanks for the suggestions. As you mentioned, \\u201cenvironment bias usually implies that the training and test sets (or some subset of the data) are distinct in some way, that they are drawn from different distributions\\u201d; and in fact this is the case for Room2Room dataset, where the val unseen is created from completely different/unused house environments that have not been included in training/val-seen at all (see Sec. 4.4 and Figure 9 in Anderson et al. 2018b). Each individual environment is more like a small \\u2018domain\\u2019 with its own distribution to generate the views, the object layouts, and connectivity graphs (please see Fig. 9 in Anderson et al. 2018b). Also, note that we are not blaming the data collection at all. In fact, this 'bias' is naturally (deliberately) conveyed by the way that the validation sets are created in the R2R/VLN papers (with one set using training environments and the other using unique unseen environments), and created as an important factor to test the models\\u2019 generalization ability in follow-up works. Even with more training data (distinct from the unseen environments), these two kinds of environments will still be distinctive to each other, thus the seen vs. unseen bias will still exist (e.g., see Sec. 7.2 and Fig. 5 of Tan et al., 2019).\\n\\nBesides these above reasons, we use the word \\u2018bias\\u2019 instead of \\u2018overfitting\\u2019 to avoid misleading the reader:\\n1. As we mentioned in the above response \\u201cPerformance-Gap 2\\u201d, \\u2018overfitting\\u2019 is mostly an unavoidable phenomenon in training deep neural networks but it is not the same case to the environment \\u2018bias\\u2019. \\n2. \\u2018Overfitting\\u2019 is between the training data and the validation data while environment \\u2018bias\\u2019 is between two validation sets: val seen and val unseen. The overfitting also exists in our neural agent models where it could achieve 80~90% on training data (not val seen). \\n\\n\\n- Details for learned semantic features:\\nThanks for the suggestions. We have added implementation details in the Appendix of the updated version. The multi-layer perceptron is three FC layers with ReLU activation, projecting 2048-ResNet features to 42-dimension semantic features. \\n\\nThe drop of val-seen results is a combination of two reasons. Firstly, we tuned the hyperparameter of the multi-layer perceptron to prevent overfitting the ground truth semantics features in training environments, thus the learned semantic features on the training environment (i.e., the val-seen data) are different from ground truth. Secondly, we use predicted features as the input for both training environments and unseen testing environments, to keep the feature distributions in both environments consistent. Overall, the val-seen results with the learned semantic features are decreased compared to ground truth semantic features. However, this feature-learning setup gives the highest val-unseen results in our experiments compared to advanced models which could overfit the training environments. \\n\\n- Showing performance gap on Touchdown: \\nAs we mentioned in Sec. 3.3, all the other three datasets (i.e., R2R, R4R, and CVDN) are collected from the same in-door environments, Matterport3D (https://github.com/niessner/Matterport). 
Thus diagnosis experiments on these three datasets possibly reveal a characteristic of the specific environments (Matterport3D) instead of showing a characteristic of the general VLN task. To be best served as a comprehensive analysis of VLN tasks, we thus show the performance gap on one more navigation dataset Touchdown, which is an outdoor navigation task collected from Google Maps and provides very different environments from Matterport3D. Hence, by showing that current neural agent models on Touchdown dataset are still biased towards their training environments (regions in the city), it makes it more convincing that the performance gap is a universal phenomenon in VLN tasks.\\n\\nThanks for the suggestion to apply semantic features to Touchdown. However, the raw RGB images (as mentioned in the footnote of Sec. 1 and please see https://github.com/lil-lab/touchdown for details) have not been released yet.\\n\\n- Figure captions:\\nThanks for the advice about the figure caption. We have resolved this in the updated version. \\n\\n- Semantic segmentation training:\\nThanks for the advice of semantic segmentation training. Our ground-truth semantic features are designed to be the areas of each semantic class (see Appendix for details). The purpose of our paper is to demonstrate the possibility of utilizing semantic features, we thus use simple MLP to regress these features instead of training a segmenter and then calculating the areas. Moreover, to fully take advantage of the semantic segmentation, the neural agent model needs modifications to adapt this new input and the noisy semantic segmentation (as shown in Fig. 5) need to be incorporated. That is beyond the scope of this analysis paper. We will explore these methods in future work with advanced methods.\"}",
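As a companion to the response above, here is one plausible way to compute the ground-truth "area of each semantic class" from a per-pixel segmentation map; the 42-class label set and the normalization are assumptions based on the description, not the authors' released code:

```python
import numpy as np

def class_area_features(seg_map, n_classes=42):
    """seg_map: (H, W) integer array of per-pixel semantic labels.
    Returns a length-n_classes vector of normalized class areas."""
    counts = np.bincount(seg_map.ravel(), minlength=n_classes)[:n_classes]
    return counts / seg_map.size

seg = np.random.randint(0, 42, size=(480, 640))  # stand-in segmentation map
feat = class_area_features(seg)
assert np.isclose(feat.sum(), 1.0)
```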
"{\"title\": \"Response to Blind Review #3 (part 1 of 2):\", \"comment\": \"We thank the reviewer for appreciating our informative investigation of the system behavior and comprehensive diagnosis experiments.\\n\\n- Performance Gap\\nThanks for your thoughtful questions regarding the \\u2018performance gap\\u2019. We would like to answer these broad questions point-by-point.\\n\\n1. Val unseen is the major evaluation set. \\nAs we clarified in the abstract and Sec. 1, performance on unseen environment has mainly been evaluated in instruction-guided vision-and-language navigation because these step-by-step instructions are too detailed to navigate seen environments. For example, suppose that I am in my home (which is an example of seen environment that I am already familiar with) and someone is asking for my help in the kitchen. They might say \\u201cPlease come to the kitchen\\u201d instead of \\u201cPlease go outside the bedroom, turn left and face towards the table. Go across the table and enter the door of the kitchen to your right\\u201d. Overall, we would like to control an embodied agent with short, informative instructions in seen environments, and thus the instruction-guided VLN task has limited applications, so most of the existing VLN tasks are only compared on val unseen. \\n\\n2. \\u2018Performance gap\\u2019 measures the generalizability to unseen environments.\\nSince we mainly care about the agents\\u2019 performance in unseen environments, metrics regarding val seen are all considered as diagnosis metrics. The val-seen success rate resembles the training accuracy in other tasks (e.g., image classification). While the training accuracy shows some characteristics of the learning method, it is uncorrelated to model's actual performance: 100% training accuracy and 0% validation accuracy is still considered as a bad result. Thus, the difference between the training accuracy and validation accuracy is more informative, which measures the \\u2018generalizability\\u2019 of the model. Similarly, the \\u2018performance gap\\u2019 in VLN tasks between two validation sets measures the \\u2018generalizability\\u2019 from training environments to unseen testing environment. \\nHowever, different from the \\u2018overfitting\\u2019 in deep learning which seems to be a universal \\u2018benign\\u2019 (Zhang et.al., ICLR 2017) issue, the environment \\u2018bias\\u2019 seems not to be the same case: we show that it could be effectively reduced by semantic features while improving the val-unseen results (shown in Table 3) without changes in model architecture or training procedure. This is one of the main reasons that we take the word \\u2018bias\\u2019 instead of \\u2018overfitting\\u2019 in our paper. \\n\\n[Zhang et.al., ICLR 2017] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. Understanding deep learning requires rethinking generalization. ICLR 2017.\\n\\n3. Diagnosis experiments are designed to locate the reason for this performance gap. \\nSince the meaning of the performance gap arises naturally, the metric is not specifically designed for our diagnosis experiments. We actually conduct experiments to recover the reasons behind the problem of the large performance gap. 
Therefore, we believe that this is different from the arguments: \\u2018but much of the latter portion of the paper continues to focus on the 'improvement' in the metric they use to diagnose the 'bias\\u2019\\u2019 and \\u2018This metric is instructive for diagnosing which component of the model the overfitting is coming from\\u2019.\\n\\n4. Our methods still optimize the val-unseen results while taking the generalization into consideration. \\nWe agree that \\u2018the raw performance on val unseen data matters the most\\u2019, and this is what we pursued in the paper. As shown in Table 3, a consistent increase on val unseen could be observed. It means that semantic features could improve test results (i.e., val unseen results) while improving the neural model\\u2019s generalization. The paper also clarified that our model \\u2018achieves strong results on testing unseen environments\\u2019 in the Abstract, Sec. 1 Introduction, Sec. 6 Methodology, and Sec. 7 Conclusion. Since the main purpose of this paper is to show the reasons and potential solutions to the performance gap, we thus also emphasize the effectiveness of our method in improving the generalizability. \\n\\nMeanwhile, most regularization methods on preventing overfitting (e.g., dropout and weight regularization) would increase the testing results by hurting the training accuracy/loss. The same thing happens here where the val-unseen success rate increases and val-seen success rate decreases.\\n\\n5. The criteria should be used to motivate newer approaches.\\nWe believe that these experiments and results regarding the performance gap will lead to new methods in VLN tasks. As mentioned by Reviewer #1, they \\u2018would significantly change the focus in this field toward a focus on robust high-level visual representations (as opposed to e.g. better spatial awareness or better language understanding)\\u2019. Our semantic-feature approaches in Sec. 6 are initial attempts in this direction which improve the val-unseen results following the findings in the paper.\"}",
"{\"title\": \"Response to Blind Review #2:\", \"comment\": \"We thank the reviewer for appreciating our thorough analysis and contribution to the community.\\n\\n- BLEU score: \\nThanks for pointing it out. We use the term \\u2018corpus-level BLEU score\\u2019 to indicate that the references come from the whole training corpus instead of instructions in related environments. Therefore, it is indeed equivalent to the \\u2018sentence-level BLEU score\\u2019. We are sorry for the misleading and have modified it to \\u201cBLEU-4 score\\u201d in the updated pdf.\\n\\n- Nits: \\nThanks for the writing suggestions. We have changed the heading and rephrased the sentence in our updated version.\\n\\n- Removing visual features: \\nOur work focuses on giving a comprehensive study of the factors that cause the environment bias. We thus wrote the paper in a way so as to not to take credits for the experiments in the papers \\u201cAre you looking\\u201d (Hu et.al, ACL 2019) and \\u201cShifting the Baseline\\u201d (Thomason et.al, NAACL 2019a), who show that removing visual features does not drastically hurt model\\u2019s unseen performance. We instead demonstrate results from a different perspective of generalization: the seen-unseen performance gap is significantly dropped, which supports our hypothesis to eliminate the navigational graph as the dominant reason.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: This paper provides a thorough analysis of why vision-language navigation (VLN) models fail when transferred to unseen environments. The authors enumerate potential sources of the failure--namely, the language, the semantic map, and the visual features--and show that the visual features are most clearly to blame for the failures. Specifically, they show that by removing the low-level visual features (e.g. the fc17 or similar) and replacing with various higher-level representations (e.g. the softmax layer of the pretrained CNN, or the output of a semantic segmentation system) dramatically improves generalization without a meaningful drop in absolute performance.\", \"evaluation\": [\"The paper is easy to follow and interesting. Some results presented have been show previously (e.g. that removing visual features doesn't drastically hurt performance of VLN models) but overall, the paper presents the results in a clear and thorough manner that will be beneficial to the community. A few small questions/comments below.\", \"I am confused by how you compute BLEU in Section 4.1. You say you compute corpus BLEU but Eq. 2 suggests you compute the BLEU for a single instruction against a set of training instructions. I think corpus BLEU is usually corpus vs. corpus (e.g. all generated sentences vs. all reference sentences) not one generated sentence against all reference sentences. Is this right? It also seems odd that your BLEU scores are distributed the way they are (Fig. 2). Can you explain why you did this the way you did?\", \"nit: Sec. 5 heading. Your grammar is backwards. The question you are trying to express is \\\"bias is attributed to what\\\" not \\\"what is attributed to bias\\\". So heading should be \\\"to what inside the environments is bias attributed\\\" (which is admittedly a clunky title)\", \"another nit: \\\"suggest a surprising conclusion: the environment bias is attributed to low-level visual information carried by the ResNet features.\\\" --> idk that this is that surprising, it was kind of natural given the result that removing visual features entirely doesn't hurt performance and helps generalization. So maybe rephrase this sentence.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper has two main contributions. First, the authors perform an extensive study to understand the source of what they refer to as 'environment bias', which manifests itself as a gap in performance between environments used for training and unseen environments used for validation. The authors conclude that of the three sources of information provided to the agent (the natural language instruction, the graph structure of the environment, and the RGB image), the RGB image is the primary source of the overfitting. The second contribution is to use semantic information, compact statistics derived from (1) detected objects and (2) semantic segmentation, to replace the RGB image and provide input to the system in a way that maintains state-of-the-art performance but shrinks the performance gap between the seen and unseen data.\\n\\nThis paper has some pretty exhaustive treatment diagnosing the source of the agent's 'environment bias' (which, as I discuss below, I believe is more accurately referred to as 'overfitting') in Sec. 4. To me, this is this highlight of the paper, and some interesting work; the investigation of the behavior of the system is interesting and informative. It provides a framework for thinking about how to diagnose this behavior and identify its source. The authors use this rather extensive study to motivate the need for new features (semantic features) to replace the RGB image that their investigation finds is where much of this 'environment bias' is located. Unfortunately, it is here that the paper falls flat. The authors proposal methods perform nominally better on the tasks being investigated, but much of the latter portion of the paper continues to focus on the 'improvement' in the metric they use to diagnose the 'bias'. As I mention below, the metric for success on these tasks is performance on the unseen data, and, though an improvement on their 'bias' metric is good anecdotal evidence their proposed methods are doing what they think, the improvements in this metric are largely due to a nontrivial decrease in performance on the training data. Ultimately, this is not a compelling reason to prefer their method. I go into more details below about where I think some of the other portions of the paper could be improved and include suggestions for improvement.\", \"high_level_comments\": [\"I am uncertain that 'bias' is the right word to describe the effect under study. In my experience, environment bias (or, more generally, dataset bias) usually implies that the training and test sets (or some subset of the data) are distinct in some way, that they are drawn from different distributions. The learning system cannot identify these differences without access to the test set, resulting in poor performance on the 'unseen' data. In the scenario presented here, the environments are selected to be in the train/test/validation sets at random. As such, the behavior described here is probably more appropriately described as 'overfitting'. 
The shift in terminology is not an insignificant change, because using 'bias' to describe the problem incorrectly suggests that the data collection procedure is to blame, rather than a lack of data or an overparamatrized learning strategy; I imagine that more data in the training set (if it existed) could help to reduce the gap in performance the paper is concerned with. That being said, I imagine some language changes could be done to remedy this.\", \"Perhaps the biggest problem with the paper as written is that I am not convinced that the 'performance gap' between the seen and unseen data is a metric I should want to optimize. This metric is instructive for diagnosing which component of the model the overfitting is coming from, and Sec. 4 (devoted to a study of this effect) is an interesting study as a result. However, beyond this investigation, reducing the gap between these two is not a compelling objective; ultimately, it is the raw performance on the unseen data that matters most. The paper is written in a way that very heavily emphasizes the 'performance gap' metric, which gets in the way of its otherwise interesting discussion diagnosing the source of overfitting and some 'strong' results on the tasks of interest. The criteria should be used to motivate newer approaches, rather than the metric we should value for its adoption. This narrative challenge is the most important reason I cannot recommend this paper in its current state.\", \"Using semantic segmentation, rather than the RBG image, as input seems like a good idea, and the authors do a good job of motivating the use of semantics (which should show better generalization performance) than a raw image. However, the implementation in Sec. 6.3 raises a few questions. First (and perhaps least important) is that 6.3 is missing some implementation details. In this section, the authors mention that 'a multilayer perceptron is used' but do not provide any training or structure details; these details should be included in an appendix. More important is the rather significant decrease in performance on the seen data (11% absolute) when switching to the learned method. Though the performance on the unseen data does not change much, it raises some concerns about the generalizability of the learning approach they have used: in an ideal world with infinite training data, the network would perfectly accurately reproduce the ground truth results, and there should be no difference between the two. Consequently, the authors should comment on the discrepancy between the two and the limits of the learned approach, which I worry may limit its efficacy if more training data were added.\"], \"smaller_comments\": [\"I do not fully understand why the 'Touchdown' environment was included in Table 1, since the learned-semantic agent proposed in the paper was not evaluated. The remainder of the experiments are sufficient to convince the reader that this gap exists, and I would recommend either evaluating against the proposed technique or removing this task from the paper.\", \"Figure captions should be more 'self-contained'. Right now, they describe only what is shown in the figure. They should also describe what I, as a reader, should take away or learn from the figure. 
This is not always necessary, but in my experience improves readability, so that the reader does not need to return to the body of the text to understand.\", \"The use of a multilayer perceptron for the Semantic Segmentation learned features, trained from scratch, stands out as a strange choice, when there are many open source implementations for semantic segmentation exist and could be fine-tuned for this task; a complete investigation (which may be out of scope for the rebuttal period) may require evaluating performance of one of these systems.\"]}",
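On the reviewer's last suggestion, a hedged sketch of starting from an open-source segmenter instead of a from-scratch MLP, using torchvision's DeepLabV3 as one concrete (assumed) choice; the 42-class head swap mirrors the label set discussed in this thread and is not from the paper:

```python
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Pretrained open-source segmenter with its head replaced for 42 classes;
# fine-tuning details (learning rate, frozen layers) are left unspecified.
model = deeplabv3_resnet50(pretrained=True)
model.classifier[4] = nn.Conv2d(256, 42, kernel_size=1)
# ... fine-tune on (image, segmentation) pairs from training environments,
# then derive per-class area features from the predicted maps.
```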
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper aims to identify the primary source of transfer error in vision&language navigation tasks in unseen environments. The authors tease apart the contributions of the out-of-distribution severity of language instructions, navigation graph (environmental structure), and visual features, and conclude that visual differences are the primary form in which unseen environments are out of distribution. They show that using ImageNet class scores as visual features results in significantly less transfer gap than using low-level visual features themselves. Experiments then show that semantic-level features dramatically reduce the transfer gap, although at a cost of absolute performance.\\n\\nI recommend this paper for acceptance; my decision is based on the thorough analysis of the ultimate cause of a recurring problem in this field.\\n\\nThese results, if shown to hold across a significant number of datasets and tasks, would significantly change the focus of research in this field toward a focus on robust high-level visual representations (as opposed to e.g. better spatial awareness or better language understanding). This work represents an important step in this direction.\\n\\nThe description of the 'learned' features in 6.3 could use more elaboration. Since it is the best performing approach by a large margin (as measured by transfer gap), it should probably get more than one sentence. In particular, what do the authors mean by \\\"train a separate multi-layer perceptron to predict the areas of these semantic labels\\\"? Does that mean the predicted pixel-level semantic segmentation map is used as input to the navigating agent? Or is it an auxiliary task for representation learning? etc. This should be clarified.\\n\\nI anticipate this paper to significantly influence future work in this area.\\n\\n--------\\n\\nAfter discussing with the reviewers about the methodological issue of the validation set, I have lowered my score to a weak accept, but I think this paper should still be published.\"}"
]
} |
SkluFgrFwH | Learning Mahalanobis Metric Spaces via Geometric Approximation Algorithms | [
"Diego Ihara",
"Neshat Mohammadi",
"Anastasios Sidiropoulos"
] | Learning Mahalanobis metric spaces is an important problem that has found numerous applications. Several algorithms have been designed for this problem, including Information Theoretic Metric Learning (ITML) [Davis et al. 2007] and Large Margin Nearest Neighbor (LMNN) classification [Weinberger and Saul 2009]. We consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial time approximation scheme (FPTAS) with nearly-linear running time. This result is obtained using tools from the theory of linear programming in low dimensions. We also discuss improvements of the algorithm in practice, and present experimental results on synthetic and real-world data sets. Our algorithm is fully parallelizable and performs favorably in the presence of adversarial noise. | [
"Metric Learning",
"Geometric Algorithms",
"Approximation Algorithms"
] | Reject | https://openreview.net/pdf?id=SkluFgrFwH | https://openreview.net/forum?id=SkluFgrFwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"36OPBUhY3i",
"rJl8AV3osS",
"SJlsP4hior",
"BkxQW4hjsS",
"H1lULCyZ5r",
"ryxFcwpnKr",
"H1xwygViFB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749212,
1573795021551,
1573794914993,
1573794811509,
1572040270153,
1571768208907,
1571663839430
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2440/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2440/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2440/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2440/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a method to handle Mahalanobis metric learning thorough linear programming.\\n\\nAll reviewers were unclear on what novelty of the approach is compared to existing work.\\n\\nI recommend rejection at this time, but encourage the authors to incorporate reviewers' feedback (in particular placing the work in better context and clarifying the motivations) and resubmitting elsewhere.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response\", \"comment\": [\"We thank the reviewer for the insightful comments. Below are our responses to the specific points raised:\", \"Our algorithm is the first with a provable guarantee on the number of violated constraints on arbitrary (that is, adversarial) inputs. The machinery for solving LP-type problems is well-known within the computational geometry community, but it had never been applied in the context of metric learning prior to our work.\", \"Prior works are based on minimizing an error function that penalizes violations, which is different than minimizing the number of violations. The benefit of minimizing the number of violations directly is demonstrated in Figure 3. There, it is shown that a simple adversarial input can fool the previous state-of-the-art on the problem. In contrast, our algorithm correctly learns the ground truth.\", \"To the best of our knowledge, the exact complexity of the problem is not known. We suspect that the problem is NP-hard when the dimension d is unbounded.\", \"The phrasing in the proof is confusing. What we mean is that adding a constraint to a feasible instance can make it either feasible or infeasible; adding a constrant to an infeasible instance cannot change its feasibility. We will rephrase this in the final version of our paper.\", \"We thank the reviewer for the insightful comments. Below are our responses to the specific points raised:\", \"Our algorithm is the first with a provable guarantee on the number of violated constraints on arbitrary (that is, adversarial) inputs. The machinery for solving LP-type problems is well-known within the computational geometry comminity, but it had never been applied in the context of metric learning prior to our work.\", \"Pior works are based on minimizing an error function that penalizes violations, which is different than minimizing the number ofs violations. The benefit of minimizing the number of violations directly is demonstrated in Figure 3. There, it is shown that a simple adversarial input can fool the previous state-of-the-art on the problem. In contrast, our algorithm correctly learns the ground truth.\", \"To the best of our knowledge, the exact complexity of the problem is not known. We suspect that the problem is NP-hard when the dimension d is unbounded.\", \"The phrasing in the proof is confusing. What we mean is that adding a constraint to a feasible instance can make it either feasible or infeasible; adding a constraint to an infeasible instance cannot change its feasibility. We will rephrase this in the final version of our paper.\"]}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the insightful comments. Below are our responses to the specific points raised:\\n\\n1. To the best of our knowledge, the exact complexity of the problem is not known. We suspect that the problem is NP-hard when the dimension d is unbounded. For constant d, our methods can be used to obtain an exact (that is, optimal) algorithm with running time n^O(d^2). However, since the running time is prohibitively large even for small d, such a result is mostly of theoretical interest.\\n\\n2. This is a good suggestion. We will change the final version of the paper appropriately.\\n\\n3. This is an omission. The combinatorial dimension is defined to be the maximum cardinality of any basis. We will include this definition to the final version of the paper.\\n\\n4. This is a minor typographical error. The second input should be the empty set. The first line of procedure Exact-LPTML also contains a typographical error. It should be \\\"if B = \\\\emptyset\\\", instead of \\\"if F = \\\\emptyset\\\". We will update the final version of the paper accordingly.\\n\\n5. We will add this clarification to the final version of the paper.\\n\\n6. Our algorithm is the first with a provable guarantee on the number of violated constraints.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the insightful comments. Below are our responses to the specific points raised:\", \"regarding_the_state_of_the_art\": \"ITML and LMNN are the most widely used and cited methods for learning Mahalanobis metrics, and they represent the state-of-the-art for this problem.\\n\\nRegarding the novelty of our contribution. Our paper gives the first polynomial-time algorithm with a provable near-optimal guarantee on the number of violated constraints, and for arbitrary (that is, adversarial) inputs. We remark that minimizing the number of violated constraints is a highly non-convex constraint. Therefore, methods that are based on convex optimization are not directly applicable in our setting.\\n\\nThe paper of Verma and Branson obtains bounds on the sample complexity of learning a Mahalanobis metric, when the input consists of a set of randomly (i.i.d.) chosen labeled pairs of points from an unknown distribution of bounded support. This is different from our setting where the input is adversarial, and the goal is to obtain an algorithm with provably near-optimal error, and provably fast running time. It is also important to note that the notion of error considered by Verma and Branson is different.\\n\\nThe paper of Ye et al. considers the problem of learning a Mahalanobis metric under perturbations of the input. This setting is completely different than the one considered in our paper.\\n\\nRegarding the proof of Lemma 2.1. The phrasing in the proof is confusing. What we mean is that adding a constraint to a feasible instance can make it either feasible or infeasible; adding a constraint to an infeasible instance cannot change its feasibility. We will rephrase this in the final version of our paper.\", \"regarding_figure_4\": \"The running time is not a monotonically increasing function of the dimension because PCA can change the combinatorial structure of an instance in ways that are hard to predict. For example, the minimum number of violations does not need to be a monotonic function of the dimension.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper study the problem of Mahalanobis Metric Spaces learning, which is formulated as an optimization problem with the objective of minimizing the number of violated similarity/dissimilarity constraints.\\n\\nI am not an expert in this subarea. From what I have read, the method is based on sound theory and outperforms some classical methods, including ITML and LMNN on several standard data sets. However it is unclear to me what is the state-of-the-art of this field from this paper and its novelty. \\n\\nSome recent papers might worth discussing and comparing with, e.g.,\\nVerma and Branson, Sample Complexity of Learning Mahalanobis Distance Metrics, NIPS2015\\nYe et al. Learning Mahalanobis Distance Metric: Considering Instance Disturbance Helps. IJCAI\\n\\nIn the proof for Lemma2.1., why \\u201cadding constraints to a feasible instance can only make it infeasible\\u201d? \\n\\nIn Figure 4, why the running time is not a monotonic curve as the dimension increases?\\n\\nThe conclusion of the paper is missing.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper discusses the following problem. Given a set X \\\\subset R^d of points, sets S, D of pairs (S and D denoting similar and dissimilar pairs), numbers u, l, find a matrix G such that\\n- for all pairs (p, q) \\\\in S: ||Gp - Gq|| \\\\leq u, and\\n- for all pairs (p, q) \\\\in D: ||Gp - Gq|| \\\\geq l.\\n\\nNote that such a matrix G may not exist for a given input instance (X, S, D, u, l). So, the relevant problem is maximising the number of constraints. The paper gives an (1+\\\\eps)-approximation algorithm for the maximisation problem. The main idea is defining an LP-type problem and then using a previous result of Har-peled. Some experimental results for a heuristic version are given and compared against other Mahalanobis distance learning algorithms (that may or may not be defined as a maximisation problem). I think the paper ports an interesting result from LP-type problems into the context of distance learning that people may find interesting and may encourage further work.\", \"other_comments\": \"1. I did not find any comment about the computational hardness of the problem. It is always good to the hardness of a problem before evaluating an approximation algorithm for the problem.\\n2. It will be good to define \\u201caccuracy\\u201d before using it in Theorem 1.1.\\n3. Did you define combinatorial dimension before using this in Lemma 2.1?\\n4. What is Exact-LPTML(.) in line 6 of Algorithm 1. The function call should take two inputs.\\n5. When you say that your algorithm is an FPTAS I think you are assuming that the dimension d is a constant. It will be good to make this clear.\\n6. It will good to know what results for minimising the number of constraints are known from past work. The paper mentions some references. It will be much easier if the results that are known about this problem is clearly stated.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a method to handle Mahalanobis metric learning thorough linear programming. The authors consider the specific setup where examples are labelled as similar or dissimilar and the task is to find a mapping such that the feature-space distance between examples is i) smaller than a certain value if the examples are labelled as 'similar' and ii) greater than a possibly different value if the examples are labelled as 'dissimilar'. Arguments from the theory of linear programming are leveraged to define exact and approximated algorithms.\\n\\nI would tend to reject the paper because I do not fully understand where is where is the main novelty. Transforming the problem into a linear programming does not look a very complicated step given the specific setup considered in the paper. Moreover, it is not clear enough if there are computational or theoretical gains in following the proposed approach instead of applying other existing methods. Especially because the provided experiments seem to show that there is no improvement in the accuracy, the authors should have spent some more words to motivate their strategy.\", \"questions\": [\"Are there any theoretical guarantees for the proposed approximation? Is the proposed approximation strategy completely new or similar approaches have been already applied to slightly different setups?\", \"What are the key differences between the proposed method and with other convex approximations for learning Mahalanobis metrics? As the experimental performance of the proposed approach and other existing methods, what are the net advantages to be associated with the geometric approximation?\", \"Why is the approximation needed?\", \"In the proof, why is it true that, for a given solution, adding a constraint implies this constraint is not satisfied?\"]}"
]
} |
rJgPFgHFwr | Laconic Image Classification: Human vs. Machine Performance | [
"Javier Carrasco",
"Aidan Hogan",
"Jorge Pérez"
] | We propose laconic classification as a novel way to understand and compare the performance of diverse image classifiers. The goal in this setting is to minimise the amount of information (aka. entropy) required in individual test images to maintain correct classification. Given a classifier and a test image, we compute an approximate minimal-entropy positive image for which the classifier provides a correct classification, becoming incorrect upon any further reduction. The notion of entropy offers a unifying metric that allows us to combine and compare the effects of various types of reductions (e.g., crop, colour reduction, resolution reduction) on classification performance, in turn generalising similar methods explored in previous works. Proposing two complementary frameworks for computing the minimal-entropy positive images of both human and machine classifiers, in experiments over the ILSVRC test-set, we find that machine classifiers are more sensitive entropy-wise to reduced resolution (versus cropping or reduced colour for machines, as well as reduced resolution for humans), supporting recent results suggesting a texture bias in the ILSVRC-trained models used. We also find, in the evaluated setting, that humans classify the minimal-entropy positive images of machine models with higher precision than machines classify those of humans. | [
"minimal images",
"entropy",
"human vs. machine performance"
] | Reject | https://openreview.net/pdf?id=rJgPFgHFwr | https://openreview.net/forum?id=rJgPFgHFwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"GdXSkumVmv",
"SkxuGaE2jH",
"SJeA7cE3sH",
"H1xGd8N3iS",
"S1eqmSN2jH",
"rJxzyUwf9H",
"r1xx7qp5YB",
"rJxR8XGzFH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749178,
1573829903687,
1573829158204,
1573828202387,
1573827874151,
1572136410187,
1571637784290,
1571066709893
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2439/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2439/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2439/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2439/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes and studies a task where the goal is to classify an image that has been intentionally degraded to reduce information content.\\nAll the reviewers found the comparison of human and machine performance interesting and valuable. However the reviewers expressed concerns and noted the following weaknesses: the presented results are not convincing to support our understanding of the differences between human and machine perception (R1), using entropy to quantify the distortion is not well motivated and has been addressed before (R1), lack of empirical evidence (R2). \\nAC suggests, in its current state the manuscript is not ready for a publication. We hope the detailed reviews are useful for improving and revising the paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author response to reviews\", \"comment\": \"We thank the reviewers and the chairs for their consideration and feedback. We have responded to each individual reviewer in turn and have submitted a revised version of the paper.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"## Geodesics Of Learned Representations\\n## Olivier J. Henaff & Eero P. Simoncelli\\n\\nWe thank the reviewer for the reference, which we have added to the paper along with others looking at invariance to geometric transformations as mentioned by Review #2. (The other references mentioned were already included and indeed are relevant to our work.)\\n\\n## 1. Please justify the use of entropy to quantify the distortion. What does entropy provide above and beyond just parameterizing the distortion (e.g. image resolution, color saturation)?\\n\\nAs mentioned, there are various papers that apply reductions, perturbations, distortions, transformations, etc., to images, be it for training or evaluation purposes (as mentioned by the paper and the reviews). These works are of clear importance and were an inspiration for us. However we argue that given all of these works, and the others that will continue to emerge, there is a clear need to generalise these approaches: to be able to incorporate, combine, guide and compare different reductions, perturbations, etc., as part of a common, general, well-founded framework. Our work is then guided by a simple question that we believe underlies all such works, but has not been explicitly identified thus far: what information does a given classifier (machine, human, etc.) require for correct classification? And how can we characterise this information? \\n\\nAs a start, in this paper we propose to adopt an information-theoretic framework based on measuring the entropy of the information required and a method to compute minimal-entropy positive images. We show that this can generalise various image reductions, reproduce existing results from the aforementioned works and can generate novel results for various image reductions. We further use it to compare diverse classifiers. \\n\\nWhat we fundamentally wanted to convey with this paper (the \\\"take-home message\\\") is that thinking about the classification problem from an information-theoretic perspective opens up a range of possibilities (some of which are currently being explored from specific angles but without a theory/framework to unite them).\\n \\n## 2. Are there other results above-and-beyond sensitivity to image resolution that distinguishes human and machine in these experiments? These results seem to be largely known by just considering corruptions such as low pass filters, etc. presented in the above papers.\\n\\nThough there are several minor results we could point to, reflecting further on the reviewer\\u2019s question, the result we consider of most importance is that there is a clear correlation between accurate classification (low classification error) and laconic classification (requiring less information/entropy in the input) for the classifiers considered. While perhaps not surprising, this result could potentially have interesting consequences. There are thousands upon thousands of works that explicitly optimise for accurate classification, and tens or hundreds of works that explicitly optimise for robustness, but we are not aware of any works that explicitly optimise for laconic classification. Rephrasing the classification task in this way \\u2013 thinking about how to characterise and reduce the information required by classifiers \\u2013 is, in our opinion, an interesting new take on an old problem. 
We understand from this review that the paper fails in some respect to convincingly convey our idea, and perhaps that's something for us to improve upon, but it is, at least for us, an intriguing idea we would be excited to have the opportunity to share and discuss with the community at ICLR.\\n\\n## The authors should provide a figure with example images in the main text showcasing how each method corrupts an image.\\n\\nThis is now provided in Appendix A.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"## For the motivation in this paper, the author tries to propose a new perspective to evaluate the robustness of image classifiers. Especially compared with humans\\u2019 understanding, this paper would like to rethink the influence of reduction for this task. However, it is not clear that why three reduction methods the author used can help with understanding the difference between humans and DNNs because when training a DNN for image classification, we usually use these methods to augment our training set, but for humans, it is a totally different story that how to recognize an image.\\n\\nDNNs indeed often consider perturbations, transformations, reductions, etc., to augment the training set and increase robustness. Our method is then a general, information-theoretic measure of robustness based on the idea of computing how much information a classifier requires for accurate classifications. As such, it can be used (for example) to evaluate how such augmentations improve classification robustness. Our measure is independent of the form of classifier or training used; although humans and machines use different learning and classification paradigms, they can be compared in such a general framework.\\n\\n## For the theoretical demonstration, in this paper, the author uses approximating minimal-entropy to quantify the minimal content of an image DNNs or humans need to give correct category. The intuition of this method is suitable. But in section 3, the author didn\\u2019t give a clear demonstration of how to compute the entropy reduction in 3.1. I think if it is better to introduce how to measure 3.2 in detail, then 3.1 may be more clear. And it also makes me confused about the atomic reduction step in the last paragraph in 3.1. For the 3.5, I think the authors should focus on how to demonstrate MEPIs for humans more mathematically so that it will be more reliable.\\n\\nWe agree and have implemented all suggested changes in Section 3. In particular, we moved Section 3.2 before 3.1. We further rewrote the description of the atomic reduction step, and provided mathematical definitions for how MEPIs are computed for humans in Section 3.5 (this turned out not to be trivial and also required additional definitions in earlier sub-sections).\\n\\n## For the experiments, the author tries to answer two questions: 1. How does the entropy required by DNN and human classifiers compare? 2. How do the classifiers perform in terms of precision for each others\\u2019 MEPIs? However, the experiments do not provide convincing evidence to existing approaches. First of all, for a single DNN, how different entropy reduction methods influence the classification? Secondly, how different reduction scales in the same model influence the results? At last, the comparison between different models should give a more visualized figure to illuminate the difference. It will be better to provide more ablation study experiments for this paper.\\n\\nWe have explicitly added the questions raised by the reviewer as additional questions at the start of the experimental section. Regarding the visual comparison, we were unsure if the reviewer was referring to further plots of results (e.g., for ablation) or rather visual examples of images (MEPIs) for different models. If the reviewer could clarify, we would be happy to look into this for the next version of the paper.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"## Do the results in the paper change if a different entropy measure is used (e.g., JPEG compression)?\\n\\nThe individual MEPIs (minimal entropy positive images) are not sensitive to the measure of entropy used unless the image reductions (on resolution, colour or crop) increase entropy under the given measure (this does not happen in our setting). The Powell search method is concerned with incrementally reducing the entropy while keeping a correct classification towards a local minimum. A similar process applies for human MEPIs but in reverse. Thus the MEPIs will be the same for any measure that \\u201cagrees\\u201d that our reductions reduce entropy at each step. The entropy measure becomes important if there are distortions that may or may not increase entropy (which we explicitly do not consider at the moment), or for evaluating laconic classification across different types of reductions/classifiers/images (per Figure 1).\\n\\nWe ruled out lossy compression formats such as JPEG (they are also parameterised). Rather than using lossy compression as an entropy measure, such formats would be an interesting to explore as another form of entropy reduction to explore in our framework (varying the compression ratio).\\n\\nWe also considered \\u201cdirect\\u201d measures of entropy, but did not find any suitable for the scenario; for example, the \\u201cdelentropy\\u201d measure proposed by Larkin (2016), only considers grayscale images, and is outperformed compression-wise by PNG. \\n\\nOther lossless-compression options to explore might be GIF (largely superseded by PNG), Lossless-JPEG (not widely supported) or WebP-lossless (not widely supported).\\n\\n## As suggested by the authors, training networks to be robust to the \\\"laconic\\\" image perturbations could be an interesting direction. \\u2026\\n\\nIndeed this is our priority to continue this work.\\n\\n## \\u2026 it would be interesting to conduct the experiment also on a crowdsourcing platform such as Mechanical Turk \\u2026\\n\\nWe agree.\\n\\n## \\u2026 it would be good to measure how well the annotators perform on the unperturbed images and on a simple noise transformation (e.g., Gaussian noise).\\n\\nWe had initially included a Gaussian noise perturbation, but reducing image quality sometimes increased the entropy. We explicitly exclude such perturbations for the moment as they complicate the human MEPI search (they could, however, be considered by the framework, particularly for machine models).\\n\\n## It would be good to know how approximate the entropy measures in the paper are, e.g., to understand why humans perform worse in the \\\"combined\\\" perturbation setting.\\n\\nWithout a \\u201cstandard\\u201d established measure of entropy for multi-channel images, we have no baseline against which to measure the level of approximation. On the other hand, the issue of human performance in the combined setting is rather more practical. Starting from a single pixel, the human has several options, such as adding a row of pixels above or below, adding a column of pixels to the left or to the right, increasing the colour, or increasing the resolution. Given that this search space is very large (compared to, e.g., colour where the user advances with one button to improve colours), we also added an option to increase pixels in all directions, as well as to advance in all directions at once, in order to simplify the search in this case. 
Still the users evidently tend to \\u201covershoot\\u201d the MEPI by not knowing which option to select; e.g., increasing resolution slightly might greatly help to recognise the image, but the user, not realising this, may rather choose to continuously expand the crop, increasing the entropy of the MEPI in the \\u201cwrong direction\\u201d. We acknowledge this limitation in the paper and are unsure how this might be overcome for computing human MEPIs in this specific case (we also experimented with an automated improvement of the image, but ultimately found that this leads to larger MEPIs by taking away too much control from the user, ruling out this option). Still however, the human MEPIs in other cases (particularly colour and resolution) do not have this limitation, nor do the cross-model MEPI classification error results. \\n\\n## How did the results from the control group and the open / online evaluation differ?\\n\\nMost of the online users that were excluded from the results for being too far from the control were essentially users that tried one or two images, gave incorrect answers and gave up on the experiment. We provide mean results for the different user groups in Appendix D.\\n\\n## For the related work section ...\\n\\nWe thank the reviewer for these pointers. We agree about their relevance and have referenced them accordingly.\\n\\n## It could be helpful \\u2026 to see some example images of the different transformations in the main text.\\n\\nWe add an example in Appendix A (we could not find space to add them to the main text).\\n\\n\\nWe fixed the minor comments (thanks).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": [\"The paper proposes and studies a task where the goal is to classify an image that has been intentionally degraded to reduce information content - hence the name \\\"laconic\\\" image classification. The motivation for this task is to compare human and machine performance in a task that deviates from the standard ImageNet setup. To make different content reductions comparable, the authors measure the (approximate) entropy of an image via its PNG-compressed file size. As image transformations, the authors utilize quantization, downsampling, cropping, and a combination of the above. The authors find that convnets with higher accuracy are also more robust to these perturbations, and that humans perform well on the minimum-entropy examples of the networks (but not vice-versa).\", \"Overall I find the comparison of human and machine performance interesting and hence recommend accepting the paper. However, there are multiple directions that could possibly strengthen the core experiment. Hence I only give a weak accept at this point. Concretely, these directions are:\", \"Do the results in the paper change if a different entropy measure is used (e.g., JPEG compression)?\", \"As suggested by the authors, training networks to be robust to the \\\"laconic\\\" image perturbations could be an interesting direction. For instance, standard data augmentation with the proposed perturbations would be a relevant baseline.\", \"To aid replicability and to compare the performance of different human test subjects, it would be interesting to conduct the experiment also on a crowdsourcing platform such as Mechanical Turk (as a complement to, not a replacement for, the university population in the paper).\", \"Also to add replicability and to make it easier to compare different human accuracy evaluations, it would be good to measure how well the annotators perform on the unperturbed images and on a simple noise transformation (e.g., Gaussian noise).\", \"It would be good to know how approximate the entropy measures in the paper are, e.g., to understand why humans perform worse in the \\\"combined\\\" perturbation setting.\", \"How did the results from the control group and the open / online evaluation differ?\", \"In addition, I have the following suggestions for improving the paper:\", \"For the related work section, the authors may find the following papers on robustness of convnets to distortions interesting:\", \"Manitest: Are classifiers really invariant?\"], \"https\": [\"//arxiv.org/abs/1804.00499\", \"It could be helpful for the reader to see some example images of the different transformations in the main text.\", \"Section 6: \\\"[...] a bias in texture for images trained on the ILSVRC dataset [...]\\\" - should this be \\\"classifiers\\\" instead of \\\"images\\\"?\", \"Section 6: \\\"[...] would be interesting to explore in future\\\" - insert \\\"the\\\" before \\\"future\\\"?\"]}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to understand and compare the performance of DNNs classifier, which is different from the precise prediction in the notion of correct/wrong. With approximate minimal-entropy of input images, the classifiers can recognize this image so that different classifiers including human and DNNs will need different reduction methods (cropping, downsampling, color reduction )for the same image to give correct prediction and also will give us different performance in a same test dataset. By comparing the results with human\\u2019s and DNNs\\u2019, the author claims that it will have more challenges for DNNs in this laconic image classification task than human will have.\\nFor the motivation in this paper, the author tries to propose a new perspective to evaluate the robustness of image classifiers. Especially compared with humans\\u2019 understanding, this paper would like to rethink the influence of reduction for this task. However, it is not clear that why three reduction methods the author used can help with understanding the difference between humans and DNNs because when training a DNN for image classification, we usually use these methods to augment our training set, but for humans, it is a totally different story that how to recognize an image.\\nFor the theoretical demonstration, in this paper, the author uses approximating minimal-entropy to quantify the minimal content of an image DNNs or humans need to give correct category. The intuition of this method is suitable. But in section 3, the author didn\\u2019t give a clear demonstration of how to compute the entropy reduction in 3.1. I think if it is better to introduce how to measure 3.2 in detail, then 3.1 may be more clear. And it also makes me confused about the atomic reduction step in the last paragraph in 3.1. For the 3.5, I think the authors should focus on how to demonstrate MEPIs for humans more mathematically so that it will be more reliable.\\nFor the experiments, the author tries to answer two questions: 1. How does the entropy required by DNN and human classifiers compare?\\n2. How do the classifiers perform in terms of precision for each others\\u2019 MEPIs? However, the experiments do not provide convincing evidence to existing approaches. First of all, for a single DNN, how different entropy reduction methods influence the classification? Secondly, how different reduction scales in the same model influence the results? At last, the comparison between different models should give a more visualized figure to illuminate the difference. It will be better to provide more ablation study experiments for this paper.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nIn this empirical study, the authors attempt to identify a minimal entropy version of an image such that the image may be correctly classified by a human or computer. The authors then compare the efficacy of a human and computer to maintain accuracy in the presence of a reduced entropy representation of an image. The authors find that machines are more sensitive to reductions in entropy due to image resolution than humans (as opposed to color or cropping). In addition, the authors find that humans are generally better at identifying minimal entropy images than machines.\\n\\n1. Corruption results not surprising. \\n\\nAlthough the authors offer some intriguing methods, I found the results to not be compelling nor improve our understanding of the relative differences between human and machine perception. While identifying that humans are less sensitive to a reduction in resolution, this result is not terribly surprising given that networks are known to suffer from aliasing artifacts, e.g.\\n\\n Geodesics Of Learned Representations\\n Olivier J. Henaff & Eero P. Simoncelli\", \"https\": \"//arxiv.org/abs/1903.12261\\n\\nTo summarize, my feedback is the following:\\n\\n1. Please justify the use of entropy to quantify the distortion. What does entropy provide above and beyond just parameterizing the distortion (e.g. image resolution, color saturation)?\\n\\n2. Are there other results above-and-beyond sensitivity to image resolution that distinguishes human and machine in these experiments? These results seem to be largely known by just considering corruptions such as low pass filters, etc. presented in the above papers.\", \"minor_comments\": [\"The authors should provide a figure with example images in the main text showcasing how each method corrupts an image.\"]}"
]
} |
SyevYxHtDB | Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks | [
"Tribhuvanesh Orekondy",
"Bernt Schiele",
"Mario Fritz"
] | High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker's error rate up to a factor of 85$\times$ with minimal impact on the utility for benign users. | [
"model functionality stealing",
"adversarial machine learning"
] | Accept (Poster) | https://openreview.net/pdf?id=SyevYxHtDB | https://openreview.net/forum?id=SyevYxHtDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"SfHHKpONk-X",
"1wEHYgtPNG",
"B1gudgK3jB",
"B1eAb_gqiH",
"S1gADrHuir",
"H1lmmrBdsS",
"S1luN1BuoH",
"Byg7Ei4OiH",
"r1ef7Qn-jr",
"BJx6LZr0tB",
"SygLUkwTFr"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1587914348499,
1576798749148,
1573847152019,
1573681158257,
1573569893928,
1573569818777,
1573568303987,
1573567274806,
1573139225962,
1571864917219,
1571807054511
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2438/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2438/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2438/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2438/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2438/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2438/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Code and Data\", \"comment\": \"The code and data is available here: https://resources.mpi-inf.mpg.de/d2/orekondy/predpoison/\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposed an optimization-based defense against model stealing attacks. A criticism of the paper is that the method is computationally expensive, and was not demonstrated on more complex problems like ImageNet. While this criticism is valid, other reviewers seem less concerned by this because the SOTA in this area is currently focused on smaller problems. After considering the rebuttal, there is enough reviewer support for this paper to be accepted.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Follow-up Response\", \"comment\": \"We are happy our initial response helped address some concerns of AR4.\\n\\nNow, we hope to address the follow-up concerns.\\n\\n\\n<< 1.b.i (a) attackers are fairly weak .. get mediocre results >>\\n- Stronger attackers would further motivate the model stealing threat. Model stealing is an active research topic (Table A2), and we find recent approaches demonstrate results to warrant a concern -- models require an immense amount of money and labour to develop, but can be substantially replicated (0.72-1.0$\\\\times$ in our experiments) via prediction APIs for 10s-100s of dollars.\\n\\n\\n<< 1.b.i (b) attackers require a lot of synthetic data >>\\n- The data is easy to obtain (e.g., random natural images) and comparable to victim\\u2019s training set size e.g., 1$\\\\times$ (for CIFAR10/CIFAR100), 2.1$\\\\times$ (Caltech256). Hence, we argue amount of data is not a limitation for attacks.\\n- [Revisions] We remark on sizes of attacker\\u2019s transfer set relative to victim\\u2019s training set in Sec. 5 \\u201cAttack Strategies\\u201d.\\n\\n\\n<< 1c. Suspect Imagenet results would be underwhelming \\u2026 attack methods seem very easy to adapt to new datasets \\u2026 would be nice to include this >>\\n- Extending methods to Imagenet would help clarify attack results. But, the experiments would indeed be intractable to perform in the given duration. We plan to tackle it in the future.\\n\\n\\n<< Caltech256 inference times a little concerning >>\\n- We agree and we have already started working on this problem. Our current version reduces computation on Caltech256 to 0.3s by estimating $G$ using a more efficient surrogate architecture (ResNet-34 vs. VGG16 from before).\\n- [Revisions] We indicate this in Appendix E.2. We will further discuss and update timing results.\"}",
"{\"title\": \"Reply to Reponse\", \"comment\": \"Thanks for the responses. Here are points that still concern me:\\n\\n1b.i) \\nFair enough. I wasn't so concerned about the attacker not being able to perform this many queries (although you mention some previous defense methods do try to curb this), more that this illustrates that attackers are fairly weak as they require a lot of synthetic training data to get mediocre results. \\n\\n1c) \\nI suspect attackers don't include ImageNet results because the results would be underwhelming. The attack methods seem very easy to adapt to new datasets, so it would have been nice to include this. I understand though that running ImageNet experiments in this short rebuttal period could be intractable. \\n\\n2) \\nThanks for including the time results. The Caltech256 are a little concerning though. The trend is that on more difficult datasets, the inference time scales very sharply with your defense (close to x200 for Caltech256). I imagine for practical datasets of interest, and a large number of queries, this could be a non-trivial bottleneck for anyone deploying this.\\n\\n3a,b,c) \\nI appreciate the clarification on this. \\n\\n4)\\nThanks for including this. It provides credence to the method you employ.\"}",
"{\"title\": \"(2/2) Response to Review #4\", \"comment\": \"<< (4) Angular histogram plot crafted without knowledge of model\\u2019s parameters .. would motivate the defense more >>\\n- We appreciate the suggestion. We find our defense similarly introduces angular deviations, even without knowledge of the attacker\\u2019s model parameters. \\n- [Revisions] We now indicate this in our discussion in the main manuscript. We provide further details in Appendix F.4. \\n\\n[1] Tram\\u00e8r, Florian, et al. \\\"Stealing machine learning models via prediction apis.\\\" USENIX 2016.\\n[2] Jagielski, Matthew, et al. \\\"High-Fidelity Extraction of Neural Network Models.\\\" arXiv 2019.\\n[3] Anonymous authors. \\u201cThieves on Sesame Street! Model Extraction of BERT-based APIs.\\u201d Under review at ICLR 2020.\"}",
"{\"title\": \"(1/2) Response to Review #4\", \"comment\": \"We thank all reviewers for their valuable feedback. We are glad that reviewers find our \\u201cwell-motivated\\u201d defense approach \\u201ceffective\\u201d against model stealing, \\u201csignificantly supplement existing defenses\\u201d, and further validated by \\u201cextensive experiments\\u201d. We are also pleased reviewers find the paper \\u201cwell-written\\u201d and \\u201cvery readable\\u201d.\\n\\nWe now address individual concerns of AnonReviewer4.\\n\\n<< (1a) MAD is comparable to random-noise on CIFAR100 \\u2026 Defense performance gap reduces as the dataset becomes more difficult >>\\n- In terms of the two utility metrics we use to evaluate defense performance:\\n (i) *utility = defender\\u2019s accuracy* (y-axis of Fig. 4 - bottom; blue and green lines): the performance gap is nonetheless significant e.g., CIFAR100 MAD defender accuracy is 5% higher than random-noise at attacker accuracy (x-axis) = 44%; and\\n (ii) *utility = perturbation amount* (y-axis of Fig. 4 - top): we consistently find MAD significantly outperforms random-noise defense, even for difficult datasets e.g., CIFAR100 MAD defender introduces 1/3$\\\\times$ lesser perturbation than random-noise at attacker accuracy = 44%.\\n- [Revisions] We introduced magnification insets of dense regions in Fig. 4 of the revised manuscript to better convey the gaps.\\n\\n\\n<< (1b) Overall skeptical of the threat model: (i) requires large number of queries \\u2026 (ii) attacker does not achieve great results >>\\n- We believe the skepticism is unjustified, because:\\n (i) *Querying is cheap*: The bottleneck for stealing models is the *cost* of executing queries (e.g., money, latency) rather than the *number* of queries. The cost for querying in practise is cheap: 0.0015 USD per prediction on Google cloud prediction API for instance. \\n (ii) *Attack performance/results extracted per dollar is substantial*: Yes, the accuracy of attacker\\u2019s stolen models are imperfect (0.72-1.0$\\\\times$ victim\\u2019s accuracy, Table 1). However, the dollars spent per accuracy point (\\u201cAP\\u201d) [1, 2, 3] for the attacker is a fraction of the victim\\u2019s. Consider Caltech256 for instance: (a) Victim = 11 USD per AP (22K images x 0.35 USD / 80 accuracy; 0.35 USD using Google\\u2019s data labeling platform) ignoring costs to collect and curate data, engineer the model, etc.; and (b) Attacker = 1 USD per AP (50K images x 0.0015 USD / 74.6 accuracy) by stealing. \\n- Consequently, we argue that model stealing attacks pose a severe threat when viewing the problem as information extracted by the attacker per dollar.\\n\\n\\n<< (1c) Attack and defense results on ImageNet would be nice >>\\n- While Imagenet evaluation would be interesting, the focus of our paper is defending existing attack models and on datasets that the attacks have proven to be effective. Unfortunately we are not aware of any existing model stealing attack on ImageNet, apart from a very recent arXiv paper [2] (Jagielski et al., Sep. 2019).\\n\\n\\n<< (2) How long does this optimization procedure take? \\u2026 unreasonable if it significantly lengthens the time to return outputs of queries >>\\n- We find all our optimization procedures take under a second. Specifically: 6ms (MNIST), 7ms (FashionMNIST), 9ms (CIFAR10), 69ms (CIFAR100), 0.4s (CUB200), 0.8s (Caltech256). \\n- [Revisions] We added a discussion on run-times in Appendix E.2.\\n\\n\\n<< (3a) Would be nice if attacks were explained a bit more. 
Specifically, how are attacks tested? >>\\n- Thanks for the suggestion. The attacks are evaluated on a common held-out victim test set. For a fair head-to-head comparison, the test set during evaluation is common to both the victim\\u2019s model and attacker\\u2019s stolen model.\\n- [Revisions] Attacks are further clarified in the revised manuscript by (i) a visualization of attacker, defender and evaluation metrics in Appendix Figure A1; and (ii) extending our existing discussion on attack model details in Appendix D.\\n\\n\\n<< (3b) Does the attacker have knowledge about the class-label space of the victim? >>\\n- All attackers are aware of the *number* of output classes of the victim model. As for the *semantics* of output class labels: (i) {knockoff}: does not require this knowledge; and (ii) {jbda, jb-self, jb-top3}: has the knowledge.\\n- [Revisions] We clarified this in our existing discussion on attack models in Appendix E.\\n\\n\\n<< (3c) If the attacker is trained with some synthetic data/other dataset, do you then freeze the feature extractor and train a linear layer to validate on the victim\\u2019s test set? >>\\n- No. We evaluate the attacker\\u2019s stolen model as-is on the victim\\u2019s test set. To further elaborate, we: (a) construct a transfer dataset (image-posterior pairs, where image = synthetic/other) using the attack strategies; (b) train the attack model $F_A$ using the transfer set; and (c) evaluate $F_A$ on the victim\\u2019s test set. Note that (a) and (b) are intertwined for some attacks.\\n- [Revisions] We updated our manuscript to clarify this in Sec. 5 \\u201cEffectiveness of Attacks\\u201d and provided additional details in Appendix D \\u201cEvaluating attacks.''\"}",
"{\"title\": \"Response to Review #1\", \"comment\": [\"We thank all reviewers for their valuable feedback. We are glad that reviewers find our \\u201cwell-motivated\\u201d defense approach \\u201ceffective\\u201d against model stealing, \\u201csignificantly supplement existing defenses\\u201d, and further validated by \\u201cextensive experiments\\u201d. We are also pleased reviewers find the paper \\u201cwell-written\\u201d and \\u201cvery readable\\u201d.\", \"We now address individual concerns of AnonReviewer1.\", \"<< (1a) Problem (4) relies on the transfer set, where $x \\\\sim P_A(x)$, right? >>\", \"Yes. More precisely, problem (4) relies on a single input $x$ (queried by the adversary, sampled from an unknown $P_A(x)$).\", \"<< (1b) Do utility and non-replicability have the same $D^{test}$? \\u2026 How to determine $D^{test}$ for F_A? >>\", \"Yes. For evaluation purposes we use the same test set accuracies (of the corresponding dataset e.g., MNIST) to evaluate both the defended victim model ($F^{\\\\delta}_V$) and attacker model ($F_A$). The setting allows for a fair head-to-head comparison of both models on a common test set.\", \"[Revisions] We added an illustration (Fig. A1 of the appendix) and additional discussion (Appendix D \\u201cEvaluating attacks\\u201d) to better clarify this.\", \"<< (1c) Utility constraint of MAD-argmax missing in (4) \\u2026 suggest adding it to (4) >>\", \"Thanks for pointing this typo out. (4-7) previously presented our optimization problem for approach \\u201cMAD\\u201d.\", \"[Revisions] Our revised version also presents the additional constraint for variant \\u201cMAD-argmax\\u201d in Eq. (8) .\", \"<< (1d) Writing clarifications: (i) Better clarify that both attacker and defender are knowledge limited \\u2026 (ii) highlight problem (4) is a black-box optimization problem for defense \\u2026 (iii) Table to summarize notation >>\", \"Thanks for the suggestions.\", \"[Revisions] The revised manuscript addresses all the above: (i) is clarified in Sec. 3 paragraphs \\u201cKnowledge-limited Attacker\\u201d and \\u201cDefender\\u2019s Assumptions\\u201d accordingly; (ii) is highlighted immediately after presenting problem (4); and (iii) We summarized the notation in Table A1 of the appendix.\", \"<< (2) Details of the heuristic solver are unclear ... Although the authors pointed out the pseudocode in the appendix, it lacks detailed analysis. >>\", \"We apologize for not making this clear.\", \"[Revisions] We revised our paragraph \\u201cHeuristic Solver\\u201d in Sec. 4 of the manuscript and further elaborated on it in Appendix C.\", \"<< (3) In estimating $G$, how to select the surrogate model? Results on different choices of architectures >>\", \"We select the surrogate model based on empirical observations for choices of:\", \"(a) surrogate architectures: which has a negligible effect and is robust to choices of attacker architectures (Fig. A2); and\", \"(b) initialization of surrogate model: which plays a crucial role. Initializing the weights of the surrogate far from convergence (Fig. A3) provides a better gradient signal to poison the posteriors. Consequently, we choose a randomly-initialized model to estimate $G$.\", \"[Revisions] We further clarify this in Section 4 (under \\u201cEstimating $G$\\u201d) of the revised manuscript and provide additional details (including results for choices of architectures and initializations) in Section E.1.\"]}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We thank all reviewers for their valuable feedback. We are glad that reviewers find our \\u201cwell-motivated\\u201d defense approach \\u201ceffective\\u201d against model stealing, which \\u201csignificantly supplement existing defenses\\u201d, and is further validated by \\u201cextensive experiments\\u201d. We are also pleased reviewers find the paper \\u201cwell-written\\u201d and \\u201cvery readable\\u201d.\\n\\nWe appreciate AR3 for recognizing that our defenses \\u201cadvances the research field\\u201d, especially due to a lack of an effective defense. While stronger theoretical results would certainly aid this line of study, our primary focus in this paper was establishing the first effective defense against a range of existing attack models and settings. We would be happy to answer any further questions.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes a new method for defending against stealing attacks.\", \"positives\": \"1) The paper was very readable and clear.\\n2) The proposed method is straightforward and well motivated.\\n3) The authors included a good amount of experimental results.\", \"concerns\": \"1) You note that the random perturbation to the outputs performs poorly compared to your method, but this performance gap seems to decrease as the dataset becomes more difficult (i.e. CIFAR100). I\\u2019m concerned that this may indicate that the attackers are generally weak and this threat model may not be very serious. Overall, I\\u2019m skeptical of this threat model - the attackers require a very large number of queries, and don\\u2019t achieve great results on difficult datasets. Including results on a dataset like ImageNet would be nice. \\n2) How long does this optimization procedure take? It seems possibly unreasonable for the victim to implement this defense if it significantly lengthens the time to return outputs of queries. \\n3) Although this is a defense paper, it would be nice if the attacks were explained a bit more. Specifically, how are these attacks tested? You use the validation set, but does the attacker have knowledge about the class-label space of the victim? If the attacker trained with some synthetic data/other dataset, do you then freeze the feature extractor and train a linear layer to validate on the victim\\u2019s test set? It seems like this is discussed in the context of the victim in the \\u201cAttack Models\\u201d subsection, but it\\u2019s unclear what\\u2019s happening with the attacker. \\n4) It would be nice to see an angular histogram plot for a model where the perturbed labels were not crafted with knowledge of this model\\u2019s parameters - i.e. transfer the proposed defense to a blackbox attacker and produce this same plot. This would motivate the defense more.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed an effective defense against model stealing attacks.\", \"merits\": \"1) In general, this paper is well written and easy to follow.\\n2) The approach is a significant supplement to existing defense against model stealing attacks. \\n3) Extensive experiments. \\n\\nHowever, I still have concerns about the current version. \\nI will possibly adjust my score based on the authors' response. \\n\\n1) In the model stealing setting, attacker and defender are seemingly knowledge limited. This should be clarified better in Sec. 3. It is important to highlight that the defender has no access to F_A, thus problem (4) is a black-box optimization problem for defense. Also, it is better to have a table to summarize the notations.\", \"additional_questions_on_problem_formulation\": \"a) Problem (4) only relies on the transfer set, where $x \\\\sim P_A(x)$, right? \\nb) For evaluation metrics, utility and non-replicability, do they have the same D^{test}? How to determine them, in particularly for F_A? \\nc) One utility constraint is missing in problem (4). I noticed that it was mentioned in MAD-argmax, however, I suggest to add it to the formulation (4).\\n\\n2) The details of heuristic solver are unclear. Although the authors pointed out the pseudocode in the appendix, it lacks detailed analysis. \\n\\n3) In Estimating G, how to select the surrogate model? Moreover, in the experiment, the authors mentioned that defense performances are unaffected by choice of architectures, and hence use the victim architecture for the stolen model. If possible, could the author provide results on different architecture choices for the stolen model as well as the surrogate model?\\n\\n############## Post-feedback ################\\nI am satisfied with the authors' response. Thus, I would like to keep my positive comments on this paper. Although the paper is between 6 and 8, I finally decide to increase my score to 8 due to its novelty in formulation and extensive experiments.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper aims at defending against model stealing attacks by perturbing the posterior prediction of a protected DNN with a balanced goal of maintaining accuracy and maximizing misleading gradient deviation. The maximizing angular deviation formulation makes sense and seemingly correct. The heuristic solver toward this objective is shown to be relatively effective in the experiments. While the theoretical novelty of the method is limited, the application in adversarial settings may be useful to advance of this research field, especially when it is relatively easy to apply by practitioners.I recommend toward acceptance of this paper even though can be convinced otherwise by better field experts.\"}"
]
} |
SJxDKerKDS | Reinforcement Learning with Structured Hierarchical Grammar Representations of Actions | [
"Petros Christodoulou",
"Robert Lange",
"Ali Shafti",
"A. Aldo Faisal"
] | From a young age humans learn to use grammatical principles to hierarchically combine words into sentences. Action grammars are the parallel idea: that there is an underlying set of rules (a "grammar") that governs how we hierarchically combine actions to form new, more complex actions. We introduce the Action Grammar Reinforcement Learning (AG-RL) framework which leverages the concept of action grammars to consistently improve the sample efficiency of Reinforcement Learning agents. AG-RL works by using a grammar inference algorithm to infer the “action grammar” of an agent midway through training, leading to a higher-level action representation. The agent's action space is then augmented with macro-actions identified by the grammar. We apply this framework to Double Deep Q-Learning (AG-DDQN) and a discrete action version of Soft Actor-Critic (AG-SAC) and find that it improves performance in 8 out of 8 tested Atari games (median +31%, max +668%) and 19 out of 20 tested Atari games (median +96%, maximum +3,756%) respectively, without substantive hyperparameter tuning. We also show that AG-SAC beats the model-free state-of-the-art for sample efficiency in 17 out of the 20 tested Atari games (median +62%, maximum +13,140%), again without substantive hyperparameter tuning. | [
"Hierarchical Reinforcement Learning",
"Action Representations",
"Macro-Actions",
"Action Grammars"
] | Reject | https://openreview.net/pdf?id=SJxDKerKDS | https://openreview.net/forum?id=SJxDKerKDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YEVqGDsLM0",
"r1lP_es5iH",
"HyecePWciH",
"Bylk4IZ5sH",
"r1lHABZ5jS",
"rJeFXXNJcB",
"r1x8yFBCtS",
"BygCV9NAYS",
"ryxS79KatH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798749119,
1573724271386,
1573684977569,
1573684775494,
1573684684697,
1571926817510,
1571866845877,
1571863093981,
1571818013422
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2437/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2437/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2437/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2437/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2437/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2437/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2437/AnonReviewer1"
],
[
"~Christopher_Leonard1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The topic of macro-actions/hierarchical RL is an important one and the perspective this paper takes on this topic by drawing parallels with action grammars is intriguing. However, some more work is needed to properly evaluate the significance. In particular, a better evaluation of the strengths and weaknesses of the method would improve this paper a lot.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \">The k-Sequitur algorithm runs in linear time in the length of the presented action sequence. Hence, in computational terms it is easily feasible. Furthermore, the entropy regularisation deployed in the technique makes it more than a greedy compression technique.\\n\\nI think it's fairly clear that k-Sequitur does more than greedy compression, however my point was that I don't see a discussion about what this additional complexity buys to the policy learning process, and what the tradeoffs are of using Sequitur rather than - say - greedy search.\\n\\n\\n>Instead, the main point that we want to raise is that the grammatical inference procedure obtains a hierarchical representation of actions. A key advantage of this symbolic procedure is the interpretability of such representations. For now, we leave this for future work.\\n\\nRight, but if this \\\"key advantage\\\" is not exploited (as far as I can see), then it is not an advantage at all, at least wrt this particular publication.\", \"think_about_this_issue_from_the_perspective_of_someone_that_needs_to_build_on_your_work\": \"what is the \\\"simplest\\\" combination - of the ideas you have introduced - that shows the properties you have demonstrated through this method? What is the _scientific knowledge_ that one gains from reading your paper?\\n\\n\\n>the simple moving average based heuristic has sufficed and reduces the complexity of the proposed algorithm. \\n\\nI think even just the fact that learning termination functions is a common HRL problem tells me that it is fundamentally important to deal with multi-stage policies, and it's unwise to present \\\"abandon ship\\\" without comparing it to previous work in the area.\\n\\nHowever, ultimately my main concern is that the heuristic is just that, a heuristic: it's bound to have corner cases and fail to generalise to interesting settings, and a proper evaluation of the system would include a discussion on failure cases and unexpected behaviour, which I don't really see in the manuscript?\\n\\n\\n>We agree and have updated the manuscript to include a more detailed literature review, see section 2 of the revised paper.\\n\\nThank you for that, it looks better.\\n\\n\\n>Yes, we agree. It is easier to infer effective macro-actions based on already successful on-policy rollouts.\\n\\nWould it be possible to add any experiment / analysis showing the degree of how much this matters?\\n\\n\\n>And again, the agents do experience a significant speed up in learning after the first grammar is inferred (see figure 4, performance after 100,000 transitions). \\n\\nRight, but *why* is that the case? Does it mean that the policies are just facilitated in exploration? Do the initial few macros still retain usefulness towards the end of the training stage? What is the evolution of the distribution in terms of action usage across these tasks?\\n\\nSample complexity is a poor way of analysing this sort of methods, since it's difficult to disentangle behaviour caused by task settings rather than properties of the methods, so the analysis would be better if it were to be augmented with some qualitative, method-specific, data.\"}",
"{\"title\": \"Rebuttal with brief description of revised submission\", \"comment\": \"Dear reviewer 1,\\n\\nWe are very thankful for your comments and believe that multiple issues of importance are being raised.\", \"regarding_point_1\": \"The k-Sequitur algorithm runs in linear time in the length of the presented action sequence. Hence, in computational terms it is easily feasible. Furthermore, the entropy regularisation deployed in the technique makes it more than a greedy compression technique. Instead, the main point that we want to raise is that the grammatical inference procedure obtains a hierarchical representation of actions. A key advantage of this symbolic procedure is the interpretability of such representations. For now, we leave this for future work.\", \"regarding_point_2\": \"The relationship between abandon ship and termination policies is a very interesting observation. We have not attempted to learn the termination in an end-to-end fashion. Our current understanding is that this poses significant challenges to options (see concurrent work by Harutyunyan et al., 2019 https://arxiv.org/pdf/1902.09996.pdf) and it is not entirely trivial how to combat this additional non-stationary component. For now, the simple moving average based heuristic has sufficed and reduces the complexity of the proposed algorithm.\", \"regarding_point_3\": \"We agree and have updated the manuscript to include a more detailed literature review, see section 2 of the revised paper.\", \"regarding_point_4\": \"Yes, we agree. It is easier to infer effective macro-actions based on already successful on-policy rollouts. We want to highlight that this provides a potential future research direction, i.e. skill distillation/imitation learning via action grammar inference. Furthermore and to address your point, the results of the Action Grammar SAC agents are obtained without pre-training. And again, the agents do experience a significant speed up in learning after the first grammar is inferred (see figure 4, performance after 100,000 transitions). Finally, as already stated we have experimented with a tabular example in Towers of Hanoi where grammar macro-actions are also without pre-training - see new appendix item F.\\n\\nBest wishes,\\nThe authors.\"}",
"{\"title\": \"Rebuttal with brief description of revised submission\", \"comment\": \"Dear reviewer 3,\\n\\nWe are very delighted and thankful for your assessment. \\n\\nWe do agree that a detailed comparison with traditional HRL algorithms may be useful. During the development of this work we found it very challenging to do so under fair circumstances. Both Feudal Networks as well as h-DQNs require significant amounts of user-defined specifications/hyperparameters (such as sub-goals and hierarchy definition) and often may not be trained in a fully end-to-end fashion. Therefore, we decided to focus on an \\u201cablation\\u201d comparison with DDQN and SAC with frame-skipping (i.e. the \\u201cnaive\\u201d grammar of primitive actions that correspond to length 4 macro-actions).\", \"regarding_the_use_of_macro_actions_to_improve_sample_efficiency\": \"The baseline comparison as well as ablation studies try to address these issues and provide more insights. Could you be so kind as to clarify which aspects exactly remain unclear?\\n\\nFinally, yes, we do have preliminary results for a sparse rewards environment, namely for 5-disk Towers of Hanoi (see newly added appendix item F). The agent only receives a positive reward when achieving the final state. The results so far are only for the tabular case and without HAR or \\u201cAbandon Ship\\u201d. In our experience, the grammar macros not only propagate value information further back into the past, but also allow the agent to explore parts of the state space more efficiently. We also believe that refining value estimates & efficient exploration are by no means orthogonal to each other. From the figure it also becomes apparent that the agent is able to amplify initial successful trajectories by encoding the action sequences in a grammar. Thereby, an action grammar provides an action representation & an effective form of memory.\\n\\nBest wishes & thank you for your time,\\nThe authors.\"}",
"{\"title\": \"Rebuttal with brief description of revised submission\", \"comment\": \"Dear reviewer 2,\\n\\nThank you very much for your time, consideration and detailed review. \\nWe apologize for any writing errors and have corrected the mentioned mistakes (see updated submission document). We fully agree that the HRL sub-field of maco-actions dates back a lot longer than the literature cited in this submission. We have now revised the paper to address this; see section 2 with literature comparison. Here is a small excerpt from the new addition:\\n\\n\\u201c[...]Identification of suitable low level sub-policies poses a key challenge to HRL.\", \"current_approaches_can_be_grouped_into_three_main_pillars\": \"Graph theoretic (Hengst et al., 2002; Mannor et al., 2004; Simsek et al., 2004) and visitation-based (Stolle et al. 2002) approaches aim to identify bottlenecks within the state space. Bottlenecks are regions in the state space which characterize successful trajectories. This work, on the other hand, identifies patterns solely in the action space and does not rely on reward-less exploration of the state space. Furthermore, the proposed action grammar framework defines a set of macro-actions as opposed to full option-specific sub-policies. Thereby, it is less expressive but more sample-efficient to infer.\\nGradient-based approaches, on the other hand, discover parametrized temporally-extended actions by iteratively optimizing an objective function such as the estimated expected value of the log likelihood with respect to the latent variables in a probabilistic setting (Daniel et al., 2016) or the expected cumulative reward in a policy gradient context (Bacon et al., 2017; Smith et al., 2018). Grammar induction, on the other hand, infers patterns without supervision solely based on a compression objective. The resulting parse tree provides an interpretable structure for the distilled skill set.\\nFurthermore, recent approaches (Vezhnevets et al., 2017; Florensa et al., 2017) attempt to split the goal declaration and goal achievement across different stages and layers of the learned architecture. Usually, the top level of the hierarchy specifies goals in the environment while the lower levels have to achieve such. Again, such architectures lack sample efficiency and easy interpretation. The context-free grammar-based approach, on the other hand, is a symbolic method that requires few rollout traces and generalizes to more difficult task-settings. . [...]\\u201d\\n\\nThe reviewer brings up the concern that the inferred grammar is crudely flattened into a straight hierarchy. Thereby, the notion of production rules & sub-policies are lost. We have a different view on this: Firstly, all of the production rules may easily be recovered and identified during execution time. Thereby, the interpretation of a grammar-inferred rule of temporally-extended actions does not get lost. Furthermore, as reviewer 3 has highlighted, a deep hierarchy of policies is not required in order to obtain an effective action space of temporally-extended skills. We also want to highlight the additional novel introduction of \\u201cHindsight Action Replay\\u201d which we believe to be of general interest to the HRL community of its own merit. \\n\\nAll in all we hope to have addressed some of the productive comments and will attempt to address any further concerns and questions in future work. 
We thank the reviewer for all their input and advice, and hope that the body of follow-up work is going to come closer to our aspirations. \\n\\nBest wishes and again thank you for your time,\\nThe authors.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes the use of macro (i.e. aggregated) actions to address reinforcement learning tasks. The inspiration presented is from hierarchical grammar representations, and the method is tested on a subset of Atari games. The paper is overall well written, although many paragraph demonstrate a level of polish inadequate for a top level submission (repetitions, typos, etc.).\\nThe main idea pursued in the work is extremely interesting and with likely important implications to recent DRL. The concept though is far from new: a quick search for \\\"macro action reinforcement learning\\\" points to a NIPS '99 paper from J. Randlov, though on top of my mind there should be even older work on the topic.\\nThe perspective proposed of considering macro actions as atoms in a grammar is certainly intriguing, but the work proposed does not develop the concept. The macro actions are identified as patterns in action sequences, then built in straight hierarchies, without any distinction in type of atoms nor any rule to effectively make up a grammar.\\nThe related work section is extremely lacking, with no work older than 2016. The introduction presents more background, marginally older than that (up to 2012), when grammars make for an entire field of study with decades of history.\\nThe process is interesting and incorporates plenty of useful experience, which I would personally be glad to see published, although in the current context is insufficient as stand-alone contribution.\\nOn a more personal note, I suggest the authors not to get discouraged, as I strongly believe such an avenue of research is worthy investigating. A few research questions which I think should be asked are:\\n- Are the agents actually learning to play the game? Just render the game with one of your best players. For example, achieving a score of 360 on Qbert barely takes constant down input, and the fact that comparable scores have been published before is of no support.\\n- Are long action sequences always useful? For Qbert for example an average move length of 8 learned from an initial, untrained policy, is sufficient to get off the screen consistently. While the Abandon Ship protocol can mitigate this, the RL exploration phase is done by random action selection (consider explicit exploration instead), and the action space grows fast from the small initial 6 actions with the addition of all the macro actions, possibly limiting the exploration capability and biasing towards the use of longer macro actions even when sub-optimal.\\n- Mitigate the claims. I would love to \\\"eventually help make RL a universally practical and useful tool in modern society\\\", but unfortunately I think no single contribution can today make such a claim.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper introduced a way to combine actions into meta-actions through action grammar. The authors trained agents that executes both primitive actions and meta-actions, resulting in better performance on Atari games. Specifically, meta-actions are generated after a period of training from collected greedy action sequences by finding repeated sub-sequences of actions. Several tricks are used to speed up learning and to make the framework more flexible. The most effective one is HAR (hindsight action replay), without which the agent's performance reduces to that of the baseline.\\n\\nOverall, this paper could be a great contribution for the following reasons: \\n1. The paper is well written, with clear performance advantages over the baseline. \\n2. The paper provides a different perspective for HRL research, namely that we might not need to have a hierarchical policy to benefit from hierarchical actions that spans over many timesteps. \\n3. From this paper's ablation study for HAR, it seems to suggest that even with similar experiences, one can get better performance by substituting actions with temporally abstracted actions, propagating value function errors further back in time. If so, this work can serve as a novel counterexample to the claim made in Nachum et al., 2019.\", \"the_authors_may_want_to_address_the_following\": \"1. They may want to compare and contrast to other works in HRL that also does temporally abstracted actions. e.g. h-DQN, Feudal networks. Or even to repeating the same action N times-- a simple trick commonly used in Atari -- which can be seen as a very naive form of action grammar.\\n2. The main claim that having Action Grammar improves sample efficiency is not proved clearly. Apart from the ablation study, it's not immediately clear whether sticking to sub-sequence of actions are inherently beneficial for exploration, or that the agent somehow learned faster with the same set of samples collected.\\n3. It seems that the algorithm may be the most effective in areas where a baseline algorithm can learn to perform at least some meaningful action sequences already. Otherwise the Action Grammar may not extract meaningful subsequences. Has the algorithm been tried on sparse-reward games?\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a method for learning macro-actions in a multi-step manner, where Sequitur, a grammar calculator, is leveraged together with an entropy-minimisation based strategy to find relevant macro-actions. The authors propose a system to bootstrap the weights of these macro-actions when increasing the policy's action space, and a system to increase the amount of data (and bias it towards macro-actions) used to learn a policy for when conditioned on this increased action-space. The authors test against a subset of the Arcade Learning Environment suite.\\n\\nOverall, I'm conflicted by this paper. On one hand, the framework is interesting, and their method involves the usage and exploration of quite a few nice ideas; on the other hand, (a) the quality of the scientific contribution is hard to judge considering the significant differences between the proposed baselines and and their methods, and (b) the experimental section doesn't provide a lot of qualitative analysis and signal wrt. each component.\\n\\nFurthermore, I have the following issues / questions:\\n\\n1. I'm not convinced that the usage of Sequitur to build the macro-actions is sufficient to declare this work novel wrt. other macro-action papers. Sequitur usage in this case seems to be particularly overkill, since ultimately all that the method seems to be doing is finding frequent sequences of actions, which can be done quite fast (at least given the amount of training steps) simply using search and pattern matching. From my point of view, there doesn't seem to be a lot in that work that exploits the fact that the macro-actions are constructed as a \\\"grammar\\\" (beyond, maybe, HAR)\\n\\n2. The Abandon Ship heuristics is effectively a fixed termination policy, which makes the entire setup somewhat similar to options. In this case, what is traded is learning complexity for a hyperparameter and a significant restriction in how the macro-actions terminate. Did you attempt to learn this function at all? Do you have any insights / experiments that might show how the heuristics behaves with changing values of $z$? Would it be possible to plot the distribution of attempted vs executed move lengths rather than then averages (since I doubt they would be normally distributed)?\\n\\n3. Given points 1 and 2, the literature review is lacking - there's a lot of prior work done on macro-actions in both RL and robotics (planning, HRI, ...) that goes well beyond the few recent papers mentioned by the authors, and I think it might be necessary to mention work on options where the termination function is structured / biased in some way.\\n\\n4. I have some doubt the experimental setup for DDQN fairly gives a fair assessment of the method. When using a pretrained features, the problem becomes significantly easier, and thus AG-DDQN potentially doesn't need to deal with the problem of learning extremely bad / noisy macro-actions. I would love to see the method trained for a more reasonable amount of frames without pre-training. Also, did the 8 / 20 atari games get chosen randomly, or were they picked based on some environment features?\\n\\n5. How do the Q-values for the policy evolve with training time? 
The proposed methods seem to somewhat imply that the action space grows unboundedly, which might seriously destroy the policy for tasks that require much longer training. Would it be possible to add a paragraph about how the policies evolve in at least some of these environments? Are macro-actions used most of the times after some full iterations? How many <learning -> action distillation> iterations are actually done in the current experiments?\\n\\nAt this point, I cannot recommend the acceptance of this work, however I'd be willing to reconsider my rating if the authors address the above points.\"}",
"{\"comment\": \"Dear Authors,\\n\\n\\nI have thoroughly read through the paper. It is quite interesting. I have a few questions regarding the feasibility of the proposed method. \\n\\nFirst, according to the ablation study presented in Fig. 5, it seems that only HAR brings impact on the curves. The other methods presented in Fig. 5, in contrast, do not seem to provide significant improvements (e.g., action balanced replay buffer, abandon ship, transfer learning, etc.). This ablation study seems to reveal that the action balanced replay buffer, abandon ship, and transfer learning approaches discussed in the paper do not actually affect the performance. I am wondering if the authors could provide stronger experimental results and more detailed explanation to justify the necessity of these approaches?\\n\\nAccording to the paper, the proposed method only presents results for 0.3M. For most contemporary DRL papers in the literature, the training procedure is typically performed for 10M or above, while 0.3M seems to be relatively short. Please note that 0.3M time steps of training can not sufficiently represent the capability of a training method. For many cases, learning curves rise after 1M or even 5M time steps (e.g., http://htmlpreview.github.io/?https://github.com/openai/baselines/blob/master/benchmarks_atari10M.htm). For a fair comparison with the existing contemporary DRL approaches, I suggest the authors to extend the experimental results to 10M, which is more appropriate.\\n\\nThe third questions is regarding the action space. Based on the statements presented in the paper, it seems that the action space of the agent grows with time (i.e., more and more macro actions are added to the action space of the agent.). With a huge action space containing only a constant number of primitive actions, it seems that the agent has a higher chance to select macro actions instead of its primitive actions. I am wondering if the authors could provide the frequency of the macro actions used by the policy (after training)? In addition, as the action space grows over time, the training difficulty also increases accordingly, indicating that the learning curves may become harder and harder to rise. This is the other reason why I would like to request the authors to provide the training curves for up to 10M time steps to justify the effectiveness of the proposed methodology. Moreover, it would be more appropriate to show the growing trend of the action space as well as the final size of it. \\n\\nIt would be nice if the authors could address my concerns regarding the proposed approaches and experimental results presented in this paper. \\n\\nThank you very much.\\n\\n\\nBest regards,\\nChristopher\", \"title\": \"Concerns regarding the feasibility of the proposed method\"}"
]
} |
r1lIKlSYvH | The Usual Suspects? Reassessing Blame for VAE Posterior Collapse | [
"Bin Dai",
"Ziyu Wang",
"David Wipf"
] | In narrow asymptotic settings Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we will argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather, to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated. | [
"variational autoencoder",
"posterior collapse"
] | Reject | https://openreview.net/pdf?id=r1lIKlSYvH | https://openreview.net/forum?id=r1lIKlSYvH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"VqyyNWdNL",
"ByeTIu0jsH",
"Byg76w0soB",
"Hke8eqHFsr",
"SklOYFSKsr",
"rye_eErYir",
"rJx4TQStoH",
"r1l0LxStjr",
"Hye7WxStjH",
"S1gXy3rXcB",
"rJg6KrUpKr",
"S1xJXhf6tr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749089,
1573804116813,
1573803963514,
1573636590227,
1573636480326,
1573635056088,
1573635003717,
1573634134091,
1573634043106,
1572195291121,
1571804549225,
1571789846978
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2436/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2436/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2436/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This manuscript investigates the posterior collapse in variational autoencoders and seeks to provide some explanations from the phenomenon. The primary contribution is to propose some previously understudied explanations for the posterior collapse that results from the optimization landscape of the log-likelihood portion of the ELBO.\\n\\nThe reviewers and AC agree that the problem studied is timely and interesting, and closely related to a variety of recent work investigating the landscape properties of variational autoencoders and other generative models. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the technical difficulty and importance of the results. In reviews and discussion, the reviewers noted issues with clarity of the presentation and sufficient justification of the results. There were also concerns about novelty. In the opinion of the AC, the manuscript in its current state is borderline, and should ideally be improved in terms of clarity of the discussion, and some more investigation of the insights that result from the analysis.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Upload of revised version\", \"comment\": \"We have recently become aware that no paper revisions can be uploaded after November 15. Therefore we are now uploading a new version of our paper that addresses many of the reviewer comments. Among other things, we have clarified Section 5 and significantly expanded Section 7. Beyond this, we have also included new empirical results involving KL warm-start in Section 6, Figure 1. These results exactly conform with expectations per the analysis in our paper. Of course any additional comments/suggestions by reviewers are greatly appreciated, and can be incorporated into a later update once available.\"}",
"{\"title\": \"Upload of revised version\", \"comment\": \"We have recently become aware that no paper revisions can be uploaded after November 15. Therefore we are now uploading a new version of our paper that addresses many of the reviewer comments. Among other things, we have clarified Section 5 and significantly expanded Section 7. Beyond this, we have also included new empirical results involving KL warm-start in Section 6, Figure 1. These results exactly conform with expectations per the analysis in our paper. Of course any additional comments/suggestions by reviewers are greatly appreciated, and can be incorporated into a later update once available.\"}",
"{\"title\": \"Response to Reviewer 3 comments (Part II)\", \"comment\": \"Question 4g): \\\"It is possible that the VAE has a significantly worse loss landscape than the autoencoder initially and so warm-start may enable the VAE to escape this difficult initial region.\\\"\", \"response\": \"As we have argued and demonstrated via experiments, deep AEs can have a bad loss landscape with local minima exhibiting high reconstruction error. Using KL warm-start makes the VAE optimization trajectory actually more similar to that of an AE near the initialization. And as the KL annealing parameter changes, the trajectories are merely pushed towards collapsed solutions which will also have high reconstruction error. Regardless, we have never empirically found a single instance where a VAE with KL warm-start leads to better reconstruction error than the corresponding AE.\\n\\n---------------------------\", \"other_minor_points\": \"Regarding the list of typos and other minor issues, we are extremely appreciative. While it is obviously tedious for a reviewer to include such details, it is of course invaluable in clarifying and polishing a revision.\\n\\n---------------------------\"}",
"{\"title\": \"Response to Reviewer 3 comments (Part I)\", \"comment\": \"Thanks for the constructive comments pertaining to our submission. Please let us know if there are any unresolved concerns.\\n\\n*** Response to Reviewer 3 comments ***\\n---------------------------\\n\\nQuestion 1): \\\"One source of confusion for me was the difference between sections (ii) and (v) --- in particular I believe that (ii) and (v) are not mutually exclusive.\\\"\", \"response\": \"This is correct.\\n\\n---------------------------\"}",
"{\"title\": \"Response to Reviewer 2 comments (Part II)\", \"comment\": \"Question e.: \\\"... So that is the message that the paper is trying to convey here?\\\"\", \"response\": \"The reviewer's summary is partially correct; however, there is a bit more nuance involved in the full story. In particular, a key thread of the paper can be enumerated as follows:\\n\\n(1) Deeper AE architectures are essential for modeling high-fidelity images or similar.\\n(2) But counter-intuitively, increasing the depth of AE models can actually produce worse reconstruction errors even on the training data because of bad local minima.\\n(3) This then implies that the analogous VAE model may also have worse reconstruction error because the added KL regularization is likely to compound the problem, i.e., regularization usually just makes the reconstructions even worse as we have empirically verified.\\n(4) At any such bad minima, the value of gamma will necessarily be large, i.e., if it is not large, we cannot be at a local minimum.\\n(5) Whenever gamma is sufficiently large, the VAE will provably exhibit full/exact posterior collapse.\\n(6) Forcing gamma to be small does not fix this problem, since in some sense the \\\"implicit\\\" gamma is still large.\\n(7) Avoiding this overall scenario requires the development of better AE architectures, initializations, or training procedures such that bad reconstruction errors do not occur. This prescription is quite different from existing remedies in the literature such as KL warm-start, which can help with relatively shallow models, but which is often incapable of significantly improving the deeper models we consider.\\n\\nWe can expand the discussion section (Section 7 of the submission) to more comprehensively present these points. We can also reinforce other details such as the independent value of Propositions 4.1 and 5.1 in building upon existing results in the literature.\\n\\n---------------------------\"}",
"{\"title\": \"Response to Reviewer 2 comments (Part I)\", \"comment\": \"Thanks for the constructive comments pertaining to our submission. Please let us know if there are any unresolved concerns.\\n\\n*** Response to Reviewer 2 comments ***\\n---------------------------\\n\\nQuestion a. (first part): \\\"I would like to understand the use of \\u201clocal optima\\u201d here. I think the paper specifically investigate local optima of the likelihood noise variance, and there are potentially other local optima.\\\"\", \"response\": \"The class of minimum defined by category (ii) posterior collapse is where we explicitly set gamma to some fixed value that happens to be too large (i.e., gamma is not learned) and then train all other parameters. This class of minimum can be avoided by simply treating gamma as a free parameter that can be learned along with all the others as mentioned in Section 3. In contrast, the more insidious type of local minima we highlight in our submission occurs when gamma is simultaneously learned from the data along with all other parameters, but gets stuck at a large value because of poor reconstructions from deep autoencoder architectures. This is the category (v) situation. And for reasons described in Section 5, simply forcing gamma to be smaller does not solve this issue.\\n\\n---------------------------\"}",
"{\"title\": \"Response to Reviewer 1 comments (Part II)\", \"comment\": \"\", \"question\": \"\\\"I think the authors should think about the cases where the reconstruction error is low, and see if there is an issue of posterior collapse in those setups.\\\"\", \"response\": \"Our focus has been on isolating and examining category (v) posterior collapse, as presently this form of collapse stands in the way of handling high-dimensional data and there is currently limited analysis or understanding of this phenomena. However, if the reconstruction error is low, the only type of posterior collapse that is possible is category (i), and this form of collapse is actually beneficial for downstream tasks like generating good samples as mentioned in Section 3.\\n\\n---------------------------\"}",
"{\"title\": \"Response to Reviewer 1 comments (Part I)\", \"comment\": \"Thanks for the constructive comments pertaining to our submission. Please let us know if there are any unresolved concerns.\\n\\n*** Response to Reviewer 1 comments ***\\n---------------------------\", \"question\": \"\\\"I don't think I am on board with continuing to use the standard Gaussian prior. Several papers (such as the cited Vampprior paper) showed that one can very successfully use GMM like priors, which reduces the burden on the autoencoder.\\\"\", \"response\": \"It is critical to point out that more sophisticated latent space priors (e.g., Vampprior) do not actually mitigate the type of posterior collapse we highlight in this paper. In fact, VAE models will be similarly vulnerable to category (v) posterior collapse whether a Gaussian prior is used or not. This is because to improve the reconstruction error for sophisticated high-dimensional data, a deeper or more complex decoder will be needed, and this can introduce a new constellation of bad local minima as we have shown. A trainable non-Gaussian prior merely grants greater flexibility to modeling the aggregated posterior in the latent space, but bad local minima from deep decoders remain problematic. Of course models like Vampprior can still be very helpful in generating better samples, but only when paired with a deep architecture capable of good reconstructions.\\n\\nNote also that in (Tomczak and Welling, 2018), the Vampprior model is only tested on small black-and-white images, and so a deep decoder is not even needed. Another more recent representative example is Bauer and Mnih, \\\"Resampled Priors for Variational Autoencoders,\\\" AISTATS 2019, which again, involves shallow models and simple images. While both of these papers (and others like them) involve elegant procedures for learning richer priors, this is a separate issue from our submission. In particular, the category (v) posterior collapse we address will not generally materialize until larger, more complex decoders are adopted as required for dealing with high-resolution color images and the like.\\n\\n---------------------------\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper tries to establish an explanation for the posterior collapse by linking the phenomenon to local minima. If I am understanding correctly, the final conclusion reads along the lines of 'if the reconstruction error is high, then the posterior distribution will follow the prior distribution'. They also provide some experimental data to suggest that when the reconstruction error is high, the distribution of the latents tend to follow the prior distribution more closely.\\n\\nAlthough I really liked section 3 where authors establish the different ways in which `posterior collapse' can be defined, overall I am not sure if I can extract a useful insight or solution out of this paper. When the reconstruction error is large, the VAE is practically not useful. \\n\\nAlso, I don't think I am on board with continuing to use the standard Gaussian prior. Several papers (such as the cited Vampprior paper) showed that one can very successfully use GMM like priors, which reduces the burden on the autoencoder. Even though I liked the exposition in the first half of the paper, I don't think I find the contributions of this paper very useful, as one can actually learn the prior and get good autoencoder reconstructions while obtaining a good match between the prior and the posterior, without having degenerate posterior distribution which is independent from the data distribution. All in all, I think using a standard gaussian prior is not a good idea, and that fact renders the explanations provided in this paper obsolete in my opinion. Is there any reason why we would want to utilize a simplistic prior such as the standard Gaussian prior? Do you have any insights with regards to whether the explanations in this paper would still hold with more expressive prior distributions? \\n\\nIn general, I have found section 5 hard follow. And to reiterate, the main arguments seem to be centered around autoencoders which cannot reconstruct well, as the authors also consider the deterministic autoencoder. If the autoencoder can not reconstruct well, it is not reasonable to expect a regularized autoencoder such as VAE to reconstruct, better, and therefore the VAE is already is a regime where it is not useful anyhow. I think the authors should think about the cases where the reconstruction error is low, and see if there is an issue of posterior collapse in those setups.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThis paper is clearly written and well structured. After categorizing difference causes of posterior collapse, the authors present a theoretical analysis of one such cause extending beyond the linear case covered in existing work. The authors then extended further to the deep VAE setting and showed that issues with the VAE may be accounted for by issues in the network architecture itself which would present when training an autoencoder.\", \"overall\": \"1) I felt that Section 3 which introduces categorizations of posterior collapse is a valuable contribution and I expect that these difference forms of posterior collapse are currently under appreciated by the ML community. I am not certain that the categorization is entirely complete but is nonetheless an excellent step in the right direction. One source of confusion for me was the difference between sections (ii) and (v) --- in particular I believe that (ii) and (v) are not mutually exclusive.\\n\\nAdditionally, the authors wrote \\\"while category (ii) is undesirable, it can be avoided by learning $\\\\gamma$\\\". While this is certainly true in the affine decoder case it is not obvious that this is true in the non-linear case.\\n\\n2) Section 4 provides a brief overview of existing results in the affine case and introduces a non-linear counter-example showing that local minima may exist which encourage complete posterior collapse.\\n\\n3) On the proof of Proposition 4.1. In A.2.1 you prove that there exists a VAE whose ELBO grows infinitely (exceeding the local maxima of (7)). While I have been unable to spot errors in the proof, something feels odd here. In particular, the negative ELBO should not be able to exceed the entropy of the data which in this case should be finite. I've been unable to resolve this discrepancy myself and would appreciate comments from the authors (or others). The rest of the proof looks correct to me.\\n\\n4) I felt that section 5 was significantly weaker than the rest of the paper. This stemmed mostly from the fact that many of the arguments were far less precise and less rigorous than those preceding. I think the presentation of this section could be significantly improved by focusing around Proposition 5.1.\\n\\na) Section 5 depends on the decoder architecture being weak, though this is not clearly defined formally. I believe this is a sensible restriction which enables analysis beyond the setting of primary concern in Alemi et al. (and other related work).\\n\\nb) In the third paragraph, you write \\\"deep AE models can have bad local solutions with high reconstruction [...]\\\". I feel that this doesn't align well with the discussion in this section. In particular, I believe it would be more accurate to say that IF the autoencoder has bad local minima then the VAE is also likely to have category (v) posterior collapse.\\n\\nc) Equation (8) feels a little too imprecise. Perhaps this could be formalized through a bias-variance decomposition of the right hand side similar to Rolinek et al.?\\n\\nd) The discussion of optimization trajectories was particularly difficult to follow. 
It is inherently difficult to reason about the optimization trajectories of deep auto-encoding models and is potentially dangerous to do so. For example, perhaps the KL divergence term encourages a smoother loss landscape and encourages the VAE to avoid the local stationary points that the auto-encoder falls victim to.\\n\\ne) It is written, \\\"it becomes clear that the potential for category (v) posterior collapse arises when $\\\\epsilon$ is large\\\". This is not clear to me and in fact the analysis seems more indicative of collapse presented in category (ii) (though as mentioned above, I am not convinced these are entirely separate). Similarly, later in this section it is written, \\\"this is more-or-less tantamount to category (v) posterior collapse\\\". I was also unable to follow this reasoning.\\n\\nf) \\\"it is actually the AE base architecture that is effectively the guilty party when it comes to posterior collapse\\\". If the conclusions are to be believed, this only applies to category (v) collapse.\\n\\ng) Unfortunately, I did not buy the arguments surrounding KL annealing at the end of section 5. In particular, KL warm start will change the optimization trajectory of the VAE. It is possible that the VAE has a significantly worse loss landscape than the autoencoder initially and so warm-start may enable the VAE to escape this difficult initial region.\", \"minor\": [\"The term \\\"VAE energy\\\" used throughout is not typical within the literature and seems less explicit than the ELBO (e.g. it overlaps with energy based models).\", \"Equation (4) is missing a factor of (1/2).\", \"Section 3, in (ii), typo: \\\"assumI adding $\\\\gamma$ is fixed\\\", and \\\"like-likelihood\\\". In (v), typo: \\\"The previous fifth categories\\\"\", \"Section 4, end of para 3, citep used instead of citet for Lucas et al.\", \"Section 4, eqn 6 is missing a factor of 1/2 and a log(2pi) term.\", \"Section 5, \\\"AE model formed by concatenating\\\" I believe this should be \\\"by composing\\\".\", \"Section 5, eqn 10, the without $\\\\gamma$ notation is confusing and looks as though the argmin does not depend on gamma. Presumably, it would make more sense to consider $\\\\gamma^*$ as a function of $\\\\theta$ and $\\\\phi$.\", \"Section 5 \\\"this is exactly analogous\\\". I do not think this is _exactly_ analogous and would recommend removing this word.\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"1. Summary\\nThe paper theoretically investigates the role of \\u201clocal optima\\u201d of the variational objective in ignoring latent variables (leading to posterior collapse) in variational autoencoders. The paper first discusses various potential causes for posterior collapse before diving deeper into a particular cause: local optima. The paper considers a class of near-affine decoders and characterise the relationship between the variance (gamma) in the likelihood and local optima. The paper then extends this discussion for deeper architecture and vanilla autoencoders and illustrate how this can arise when the reconstruction cost is high. The paper considers several experiments to illustrate this issue.\\n\\n2. Opinion and rationales\\nI thank the authors for a good discussion paper on this important topic. However, at this stage, I\\u2019m leaning toward \\u201cweak reject\\u201d, due to the reasons below. That said, I\\u2019m willing to read the authors\\u2019 clarification and read the paper again during the rebuttal to correct my misunderstandings if there is any. The points below are all related.\\n\\na. I would like to understand the use of \\u201clocal optima\\u201d here. I think the paper specifically investigate local optima of the likelihood noise variance, and there are potentially other local optima. Wouldn\\u2019t this be an issue with hyperparameter optimisation in general? For example, for any regression tasks, high observation noise can be used to explain the data and all other modelling components can thus be ignored, so people have to initialise this to small values or constrain it during optimisation.\\n\\nb. I think there is one paper that the paper should discuss: Two problems with variational expectation maximisation for time-series models by Turner and Sahani. In this paper, the paper considers optimising the variational objective wrt noise likelihood hyperparameters and illustrates the \\u201cbias\\u201d issue of the bound towards high observation noise.\\n \\nc. I think it would be good to think about the intuition of this as well: \\u201cunavoidably high reconstruction errors, this implicitly constrains the corresponding VAE model to have a large optimal gamma value\\u201d: isn\\u2019t this intuitive to improve the likelihood of the hyperparameter gamma given the data?\\n\\nd. If all above are sensible and correct, I would like to understand the difference between this class of local minima and that of (ii). Aren\\u2019t they the same?\\n\\ne. The experiments consider training AEs/VAEs with increasingly complex decoders/encoders and suggest there is a strong relationship between the reconstruction errors in AEs and VAEs, and this and posterior collapse. But are these related to the minima in the decoder\\u2019s/encoder\\u2019s parameter spaces and not the hyperparameter space? So that is the message that the paper is trying to convey here?\\n\\n3. Minor:\\n\\nSec 3\\n(ii) assumI -> assuming\\n(v) fifth -> four, forth -> fifth\"}"
]
} |
SyxIterYwS | Dynamical System Embedding for Efficient Intrinsically Motivated Artificial Agents | [
"Ruihan Zhao",
"Stas Tiomkin",
"Pieter Abbeel"
] | Mutual Information between agent Actions and environment States (MIAS) quantifies the influence of an agent on its environment. Recently, it was found that intrinsic motivation in artificial agents emerges from the maximization of MIAS. For example, empowerment is an information-theoretic approach to intrinsic motivation, which has been shown to solve a broad range of standard RL benchmark problems. The estimation of empowerment for arbitrary dynamics is a challenging problem because it relies on the estimation of MIAS. Existing approaches rely on sampling and have formal limitations, requiring exponentially many samples. In this work, we develop a novel approach for the estimation of empowerment in unknown arbitrary dynamics from visual stimulus only, without sampling for the estimation of MIAS. The core idea is to represent the relation between action sequences and future states by a stochastic dynamical system in latent space, which admits an efficient estimation of MIAS by the “Water-Filling” algorithm from information theory. We construct this embedding with deep neural networks trained on a novel objective function and demonstrate our approach by numerical simulations of non-linear continuous-time dynamical systems. We show that the designed embedding preserves information-theoretic properties of the original dynamics and enables us to solve standard AI benchmark problems. | [
"intrinsic motivation",
"empowerment",
"latent representation",
"encoder"
] | Reject | https://openreview.net/pdf?id=SyxIterYwS | https://openreview.net/forum?id=SyxIterYwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3me0QzD4aS",
"BylFEjQ3ir",
"Bkx5rSQhsr",
"ryx_eZX3sS",
"HJeX20UYqB",
"S1xfo2e9tH",
"SJlRd9oGYr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749058,
1573825329135,
1573823810446,
1573822703772,
1572593322644,
1571585177548,
1571105397961
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2435/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2435/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2435/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2435/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2435/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2435/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a novel method for embedding sequences of states and actions into a latent representation that enables efficient estimation of empowerment for an RL system. They use empowerment as intrinsic reward for safe exploration. While the reviewers agree that this paper has promise, they also agree that it is not quite ready for publication in its current state. In particular, the paper is lacking a theoretical justification for the proposed approach, the definition of empowerment used by the authors raised questions, and the manuscript would benefit from more clear and detailed description of the method. For these reasons I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We appreciate your feedbacks and they helped in our revision of the paper. We focused on making the terms clearer and more intuitive. Here are some additional clarifications:\\n\\n#1 Empowerment is defined as the maximal of mutual information between action sequences and resulting final states. In previous studies on empowerment in dynamical systems, cited in the paper, it was shown that empowerment can be used as a utility function for stabilization of non-linear dynamical systems. Stabilization with empowerment assumes two policies: $\\\\omega$, and $\\\\pi$. The former is a probing policy, serving for the estimation empowerment, Eq, (1). The latter is a control policy which can be computed by any reinforcement learning approach, given empowerment values. In this work we show an efficient method for the estimation of empowerment in unknown dynamics. The main contribution of this paper is a novel scheme for embedding raw images to the latent space, where the challenging optimization problem given by Eq. (1), is transformed to a convex optimization problem, given by Eq. (4). This optimization problem allows us to find a solution to Eq. (1) by a super-efficient, line search. This problem is known as Water-Filling, which is given by Eq. (5). In the paper, we provided a reference for the detailed derivation of the solution to this problem, (Cover & Tomas, 2012).\\n\\n#2 It was shown previously, as cited in the paper, that empowerment is decreased when an agent has less control over its environment. This might happen e.g., in narrow tunnels, and in proximity to obstacles. We utilized this property to address safety, assuming that when an agent lacks control it cannot prevent dangerous situations, and as a result, is more vulnerable. As mentioned in the introduction, empowerment is directly related to the diversity of achievable states, which also is seen by the formal definition of empowerment, Eq (1).\\n\\n#3 d_z and d_b refer to the dimension of latent spaces. They are first introduced in Section 4.2\\n\\nThanks again for pointing out the issues with our paper. As we have revamped our writing significantly, we will appreciate a second thought on the rating.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"We appreciate your feedbacks regarding which we have significantly revamped the original submission.\\n\\n#1. We added more intuitive explanations in the abstraction and introduction. Hope that they will help make the paper easier to read.\\n\\n#2. In the introduction and proposed approach section, we added more explanations on how different parts of the design come together. Essentially, our method fills in the hole of existing algorithms for empowerment and tries to combine the strengths of each of them.\\n\\n#3. We carefully reviewed our overall writing. We\\u2019ve made the texts much more precise and direct.\\n\\n#4. We elaborated on existing experiments to better address their implications.\\n\\nWe will appreciate your second thought on your rating.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We appreciate the insightful feedback you provide which can help us communicate the work more effectively. We have posted a revision of our work and additional explanations below.\\n\\n#1. You expressed a concern about the validity of our overall scheme and optimization objectives.\\nTo make things clear, we list the key stages in our workflow:\\n(1) Estimation of empowerment from trajectories\\n (a) Fitting the representations z, b and latent dynamics such that is linear in b\\n (b) Compute empowerment by 'Water-Filling'\\n(2) Use the empowerment values as intrinsic reward for control tasks\", \"key_definition\": \"Empowerment is defined as the maximal mutual information (channel capacity) between action sequences and future states. Given a specific horizon T, this quantity is a function of state z. It does not depend on trajectory or actions. Using it as intrinsic reward, we encourage the agent to operate in a more controllable region. Note that (1-a), (1-b) and (2) each defines a different optimization problem. As shown in Figure 1, these three optimizations are done independently, and they are, by design, separate steps.\\n\\n#2. You pointed out the lack of verification for the linear model. The revision emphasizes such verifications. First, we verify the model by reconstruction (in Figure 3) The catch here is that, as shown in Figure 2, encoders and decoders are never trained to optimize for reconstruction. This means the prediction of our linear model is accurate, and properly represents the original nonlinear dynamics. Second, we compare our empowerment estimation with previous works on the nonlinear inverted pendulum, as shown in Figure 4. Solving for channel capacity is not a trivial task, and in our case, it relies on a high-quality linear dynamics model. The fact that our method was able to create an empowerment value plot similar to that of previous analytical method justifies the validity of our linear model. Finally, previous work shows that pendulum learns to balance at the top using only empowerment signal. Since we achieve the same behavior using our empowerment estimation (shown by the black trajectory in Figure 4), we know that our entire method is valid. \\n\\n#3. Thanks for pointing out the typo. \\n\\n#4. You pointed out that the explanation for Figure 4 is not sufficient. Additional explanation has been added. \\n\\n#5. We use this experiment as another testbed to verify our empowerment estimation. The relation between empowerment and the safety of the agent is established in previous works: https://doi.org/10.3389/frobt.2017.00025, https://doi.org/10.7551/978-0-262-31709-2-ch018 Our intention was not to reiterate the old findings, but for this particular experiment, safety can be interpreted in this way: When the agent collides with the wall, its control is less effective (low empowerment). Since collision is dangerous, we prefer states with high empowerment and avoids the tunnel, where its empowerment reduces. Our empowerment estimation successfully identifies areas with more options and avoids narrow tunnels. This shows that the underlying representation of the dynamics system is valid.\\n\\n#6. Our model, A(z), given by Eq. (3), is an interpretable representation of an original nonlinear dynamics. The advantage of our representation is that it allows us to estimate the maximal mutual information between input and output efficiently. 
In this sense, this submission introduces, for the first time, an optimal representation learning of an arbitrary dynamic from observations only. In contrast to the previous works, the current work estimates mutual information of nonlinear dynamics by convex optimization, rather than by Monte Carlo sampling. We believe this submission perfectly match the core theme of ICLR conference. \\n\\n#7 We referred to the fact that any high-confidence distribution-free lower bound of mutual information requires an exponential number of samples. In this work we propose a framework how to approximate the maximal mutual information between action sequences and future states overcoming the requirement on an exponential number of samples.\\n\\n#8 Thanks for pointing out some very relevant sources!\\n\\nWe appreciate your second thought on your rating of our work!\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an approach based on empowerment for reinforcement learning applicable to the cases that the dynamical system is unknown. The model is estimated by a water filling algorithm and is evaluated on two RL tasks.\\n\\nTraining of RL agents via on empowerment and intrinsic\\u00a0rewards is an important alternative to conventional\\u00a0training algorithms. The paper is tacking an important paper.\\u00a0\\nThe paper is weak in terms of writing and motivation. Empowerment on section 3.2 could have been explained more intuitive and more thoroughly the make the paper self-contained.\\nMoreover, the paper lacks motivation of the design\\u00a0choices. It seems to be a combination of a few recent techniques in machine learning or statistics\\u00a0that are mechanically attached to each other without sufficient justification or intuition.\\u00a0\\nThe paper keeps claiming to solve AI however what it actually experimented on RL safety and/or one synthetic environment. They are not really \\\"AI benchmark problems\\\". I'd rather the paper focuses on its contribution: direct and concise.\\u00a0\\nMoreover, the experiments are not sufficient to support the paper or investigate how much each part is contributing\\u00a0to the success.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes to take advantage of a known result on the channel capacity of a linear-Gaussian channel in order to estimate the empowerment and maximize mutual information between policies (action sequences) and final states (given initial states). The idea is to map the raw action sequences and states to a latent space where learning would force that linear property to be appropriate.\\n\\nI like the general idea of the paper (as stated above) along with its objectives but I have several concerns.\\n\\nFirst, I need to be reassured that we are computing the right quantity. Channel capacity is the maximum mutual information (between inputs and outputs) over the input distribution, whereas I had the impression that empowerment would be this mutual information, and that we want to increase it, but not necessarily reach its maximum over all possible policies: it would usually be one of the terms in an objective function (e.g. here we have reconstruction error, and in practice there would be some task to solve in addition to the exploration reward). One way to see this problem in the given formulation is that the C* objective only depends on the matrix A (which encapsulates the conditional density of z_{t+1} given the trajectory) and it does not depend at all on the distribution of the trajectory itself! This is weird since if we are going to use this as reward the objective is to improve the trajectory. What's the catch? So either I misunderstand something (which is quite possible) or there is something seriously wrong here.\\n\\nI am assuming that the training objective for the encoder is a sum of the reconstruction error and of C*. But note how this does not give a reward for policies, as such. This is a bit strange if the goal is to construct an exploratory reward!\", \"a_less_radical_comment_is\": \"have you verified that the linear relationship between z_{t+k} and the b actually holds well? In other words, is the encoder able to map the raw state and actions to a space where the linearity assumption is correct, and thus where equation (3) is satisfied.\\n\\nFigure 3 has something weird, probably one of the two sequences of -1's should be a sequence of +1's.\\n\\nFigure 4 is difficult to interpret, the caption should do a better job.\\n\\nThe experiment on the safety of the RL agent is weak. I don't see the longer path as safer, here. And the results are not very impressive, since the agent is only doing what it's told, i.e., go in areas with more options (more open areas) but there is no reason to believe that this is advantageous, here.\\n\\nFinally, what would make this paper much more appealing is if the whole setup led to learning better high-level representations (how to measure that is another question, but it is a standard kind of question in representation learning papers).\", \"related_work\": \"I don't understand why in the abstract the authors refer to sampling-based methods as requiring exponentially many samples. This is not generally the case for sampling based methods (e.g. think of VAEs). 
I suppose the authors refer to something in particular but it was not clear to me what.\", \"references\": \"in the intro, you might want to refer to the contrastive methods and variational methods to maximize mutual information between representations of the state and representations of the actions, e.g.,\\n Thomas et al, 2018, 1802.09484\\n Kim et al, 2018, arXiv:1810.01176\\n Warde-Farley et al, 2018, arXiv:1811.11359\", \"evaluation\": \"for now I suggest a weak reject but I am ready to modify my score if I am convinced that my main concerns were unfounded.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a model to embed states and action sequences into latent spaces in order to enable efficient estimation of empowerment in a reinforcement learning system.\\n\\nThe paper shows some interesting experimental results. But overall, it is not ready to publish yet for the following reasons: \\n\\n1. The technical section is lack of important description/theorems/derivation/etc that are necessary to support the claims. \\n\\nWhat is the detailed definition of empowerment, i.e., how to spell out the formula of mutual information I? What is the distribution of action sequences and states? What is the policy function \\\\pi? How is \\\\pi related to or different from \\\\omega? How is the learned empowerment used in (training) policy? \\n\\nThe authors proposed a parametrization in equation-3 for state transition and claimed that this parametrization yields ``an efficient estimation of MIAS as explained in the next section\\u2019\\u2019, but did not give any explanation throughout the paper. Does it mean the equation-4 is easy to solve? Does equation-4 end up that way because of the parametrization in equation-3? Why so? \\n\\nWhat is the water-filling algorithm? How does it associate the capacity with the empowerment? Elaboration is needed for this bit. \\n\\nThe authors claim that putting more weight on empowerment in reward ends up in a more conservative policy, but didn\\u2019t give any technical justification. They indeed mention that ``a state is intrinsically safe for an agent when the agent has a high diversity of future states\\u2019\\u2019 and that ``the higher its empowerment value, the safer the agent is\\u2019\\u2019. The former is a hypothesis and the latter needs technical/derivation support---e.g., why empowerment is correlated with the diversity? The interesting experimental results in figure-6 seem to support the authors\\u2019 claim, but precise technical justification is needed. \\n\\n2. Some technical description/argument should be more precise and accurate. \\n\\nThe authors claim that they ``observes the current state through its visual sensors\\u2019\\u2019---but the actual state (i.e. the exact angle and height) can\\u2019t be observed and the visual sensor data is only an approximation, so the correct claim should be something like: we observe the visual representation of the actual state. \\n\\nThe authors claim that the ``existence of such representation ... is one of the contributions of our work\\u2019\\u2019---the existence of something shouldn\\u2019t be a contribution of a technical paper, what can be a contribution is the proof of its existence. \\n\\nThe authors claim that they ``inject Gaussian noise \\u2026 into the latent space.\\u2019\\u2019 This is very confusing: it sounds like (1) there wasn\\u2019t any randomness in this formulation; (2) the estimation of empowerment is difficult because of that and (3) the authors added the Gaussian noise to enable the efficient estimation. However, based on my understanding after reading multiple times, I guess what actually happened is: (1) there should be randomness and the noise can be anyway distributed and (2) the authors assumed it is Gaussian so it is simple enough to yield an efficient estimation. 
The authors should really clarify this. \\n\\n3. There are also some typos that may confuse readers. \\n\\nThe authors mentioned that ``a linear Gaussian channel given by Eq. 4.3\\u2019\\u2019---is it section-4.3 or equation-3?\\n\\nIn appendix, what is d_z and d_b?\"}"
]
} |
SJxrKgStDH | SCALOR: Generative World Models with Scalable Object Representations | [
"Jindong Jiang*",
"Sepehr Janghorbani*",
"Gerard De Melo",
"Sungjin Ahn"
] | Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALable Object-oriented Representation of a video. With the proposed spatially parallel attention and proposal-rejection mechanisms, SCALOR can deal with orders of magnitude larger numbers of objects compared to the previous state-of-the-art models. Additionally, we introduce a background module that allows SCALOR to model complex dynamic backgrounds as well as many foreground objects in the scene. We demonstrate that SCALOR can deal with crowded scenes containing up to a hundred objects while jointly modeling complex dynamic backgrounds. Importantly, SCALOR is the first unsupervised object representation model shown to work for natural scenes containing several tens of moving objects. | [
"scalor",
"objects",
"generative world models",
"scene",
"complex dynamic backgrounds",
"terms",
"object density",
"primary challenge"
] | Accept (Poster) | https://openreview.net/pdf?id=SJxrKgStDH | https://openreview.net/forum?id=SJxrKgStDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"7Cc6dSGla",
"ByxZ6fOhsr",
"BJxxoMOnjB",
"r1l07GO2iS",
"SyxDkz_2iB",
"ryluTWunir",
"r1lOKCwhjS",
"H1x79aD2jB",
"Byl5Um4m5B",
"rJx4wc3TKB",
"ryeAO_hsYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798749027,
1573843640670,
1573843607739,
1573843493918,
1573843422598,
1573843391943,
1573842560186,
1573842315241,
1572189009651,
1571830364441,
1571698806510
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2433/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2433/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2433/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"After the author response and paper revision, the reviewers all came to appreciate this paper and unanimously recommended it be accepted. The paper makes a nice contribution to generative modelling of object-oriented representations with large numbers of objects. The authors adequately addressed the main reviewer concerns with their detailed rebuttal and revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Blind Review #2 (1/2)\", \"comment\": \"** Baseline: SQAIR\\n\\n> Regarding the issue of baselines, please first refer to the relevant answer to reviewer 1. \\n\\n** Baseline: Social LSTM, RNEM, and HART\\n\\nThank you for pointing out these approaches. We notice that the reviewer mentions Kosiorek et al. (2018) in the main text but refers to Hierarchical Attentive Recurrent Tracking (HART, 2017) in the reference. We believe that the reference to HART is a mistake and the reviewer intended to point to SQAIR. (But we still include HART in the following discussion.) The four methods mentioned in the review are SQAIR, Social LSTM, RNEM, and HART. For the most related work SQAIR, please refer to the relevant answer to reviewer 1. \\n\\nFor the other three approaches, we decided not to include them in our experiment for the following reasons. First, all three methods are deterministic (no uncertainty), while SCALOR and SQAIR are probabilistic latent variable models. Second, both Social LSTM and HART are supervised methods, while SCALOR and SQAIR are unsupervised. Social LSTM requires object coordinates as input instead of raw images, while SQAIR and SPAIR use only raw images as input. Thus, it is not practical to compare an unsupervised model to supervised models.\\n\\nRNEM [1], while unsupervised, is a deterministic model targeting a different task than our work. In particular, methods such as RNEM, IODINE [2], MoNet [3], and GENESIS [4] are scene decomposition methods constructing a scene via mixture components. SCALOR, SQAIR, SPAIR [5] and AIR [6] take a different approach based on attention (using bounding boxes) to learn \\u201cobject representations\\u201d rather than a \\u201cscene decomposition\\u201d. Although they are relevant, these are actually quite different approaches. For example, the scene decomposition methods do not provide any explicit positions of objects, do not provide its bounding box, and do not explicitly provide counts of objects, or object-level appearance representations (but it is a scene decomposition level). Thus, we cannot measure tracking metrics. In contrast, the \\u201cobject-oriented\\u201d methods normally cannot cope with full scenes with backgrounds, but only focus on spatially local objects \\u2013 SCALOR is the first model among the object-oriented methods that can deal with backgrounds. Due to this significant difference between these two tasks, no paper has compared both lines of work as baselines. Rather, the comparison is made within the relevant line of work. For example, IODINE is compared to RNEM but not to AIR. NEM [7] also is not compared to AIR. Similarly, SPAIR is compared against AIR but not to NEM or RNEM. Therefore, it seems that a comparison to SQAIR and VRNN (for generation quality measured by NLL) are the only appropriate baselines focusing on the same task setting. In the revision, we thus focus on comparing our method to SQAIR and VRNN.\\n\\n[1] Van Steenkiste, S., Chang, M., Greff, K., & Schmidhuber, J. (2018). Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353.\\n\\n[2] Greff, K., Kaufmann, R. L., Kabra, R., Watters, N., Burgess, C., Zoran, D., ... & Lerchner, A. (2019). Multi-object representation learning with iterative variational inference. arXiv preprint arXiv:1903.00450.\\n\\n[3] Burgess, C. P., Matthey, L., Watters, N., Kabra, R., Higgins, I., Botvinick, M., & Lerchner, A. (2019). 
Monet: Unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390.\\n\\n[4] Engelcke, M., Kosiorek, A. R., Jones, O. P., & Posner, I. (2019). GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations. arXiv preprint arXiv:1907.13052.\\n\\n[5] Crawford, E., & Pineau, J. (2019). Spatially invariant unsupervised object detection with convolutional neural networks. In Proceedings of AAAI.\\n\\n[6] Eslami, S. A., Heess, N., Weber, T., Tassa, Y., Szepesvari, D., & Hinton, G. E. (2016). Attend, infer, repeat: Fast scene understanding with generative models. In Advances in Neural Information Processing Systems (pp. 3225-3233).\\n\\n[7] Greff, K., van Steenkiste, S., & Schmidhuber, J. (2017). Neural expectation maximization. In Advances in Neural Information Processing Systems (pp. 6691-6701).\"}",
"{\"title\": \"Response to Blind Review #2 (2/2)\", \"comment\": \"** Motivation\\n\\nThanks for pointing this out. We agree with the suggestion, and will provide a clearer presentation of the motivation in our revision. We believe that the reviewer agrees with our perspective that it is not unusual to encounter natural scenes with many (tens or hundreds of) objects. The question is whether we need to explicitly attend to all of these or not. Regarding this, our perspective aligns with the reviewer\\u2019s argument that it is required for superhuman performance. Although we also agree with the limitation of the human attention capacity, we do not see a reason why we ought to artificially impose such a limitation in an AI system. For example, contemporary self-driving technology heavily takes advantage of this superhuman level of attention/detection capacity, although those approaches are based on expensive supervised learning systems. As another example, due to the limited capacity of human attention, we as humans need to resort to (sequential) search or scanning to find something among many objects, which is clearly more time-consuming than a parallel search. Also, considering that the architecture of modern computers is optimized for parallel processing, we believe that a contemporary AI system should maximize the utility of parallel processing. The parallel attention on the possible strategies used in the Monte-Carlo Search of the AlphaGo system is another strong evidence supporting this (human Go-players are limited in this ability.) We are grateful for the provided arguments on human attention and other examples. This is an interesting point and we will include it in our revision. \\n\\n** Interactions between entities\\n\\nWe agree that modeling interaction would indeed lead to a more comprehensive model. The main focus of this paper, however, is to make the current state-of-the-art (SQAIR) more scalable beyond just 3-4 objects. This focus on scalability, while not considering interaction, seems fairly reasonable, given that SQAIR also does not possess any interaction modeling (note that SQAIR has a relation module in its architecture but it is not for modeling interactions but rather for more accurate tracking, and their experiments do not show interaction.). Indeed, adding interaction modeling, e.g,. using a graph network of the trackers, will be a key point for future work following the present work. \\n\\n** What if the grid cells were smaller than the objects\\n\\nBecause the encoder is a deep CNN, the input receptive field corresponding to a grid cell is actually a quite large area of the input image. The encoder learns which part of the input image a grid cell should represent, and thus, it is actually not a problem. The proposed method will still model an object entirely when the object can occupy more than one cell. In fact, in our Low Density (LD) experiment setting and the additional Very Low Density (VLD) setting that will be added in the revised version, the object size (24*24) is larger than the cell size (8*8).\\n\\n** About the fundamental reason the proposed method models the object and not the empty spaces\\n\\nOne of the most fundamental reasons is that the model is inclined to predict as few bounding boxes as possible in the image. 
As we can see in the paper, in both the generation process and the inference process, we use \\ud835\\udc33\\ud835\\udc5d\\ud835\\udc5f\\ud835\\udc52\\ud835\\udc60 to control whether we will generate the appearance latent variable \\ud835\\udc33\\ud835\\udc64\\u210e\\ud835\\udc4e\\ud835\\udc61 and the location latent variable $\\\\mathbf{\\ud835\\udc33}^\\\\text{\\ud835\\udc64\\u210e\\ud835\\udc52\\ud835\\udc5f\\ud835\\udc52}$. If $\\\\mathbf{\\ud835\\udc33}^\\\\text{pres} = 1$, we introduce new KL terms for $\\\\mathbf{\\ud835\\udc33}^\\\\text{what}$ and $\\\\mathbf{\\ud835\\udc33}^\\\\text{\\ud835\\udc64\\u210e\\ud835\\udc52\\ud835\\udc5f\\ud835\\udc52}$ in the ELBO. This is equivalent to increasing the loss in the objective function. Moreover, to penalize unnecessary bounding boxes, we also assume a low probability of object existence in the prior distinction in the ELBO to further encourage fewer boxes. Therefore, to reduce the KL loss while maintaining a good reconstruction, the model converges to a behavior that learns the object representations properly.\"}",
"{\"title\": \"Response to Blind Review #1 (1/3)\", \"comment\": \"The review text states that \\u201cthe method seems to work\\u201d based on the qualitative results and the limited quantitative evaluation. This raises a discussion about the contribution of this paper.\\n\\nRegarding the quantitative comparison, our response is two-fold. \\n\\n** Novelty. R1 claims \\u201cThe main contribution is in improving the efficiency of object detection/tracking by parallelizing the computation without much conceptual innovation.\\u201d \\n\\nAlthough at first glance, one might assume that the parallelization may be a simple adaptation, we are not simply implementing a parallelization of a sequential model whose parallelization is already straightforward. Instead, we *identify the specific reasons that enable a parallelization* and then actually *make it work* with our own new observations, investigations, analysis, and experiments. This is not a minor contribution because, in the SPAIR paper, the authors actually state that the sequential computation (within an image) is crucial to obtain the desired results. Thus, given this previous state of the art, it is not trivial to devise a parallel algorithm without substantial conceptual innovation. Specifically, through our analysis and investigation, we first observe that an efficient parallelization that does not degrade the results is actually feasible, contrary to what was previously believed. Our new findings include (1) that in (physical) spatial space, two objects cannot exist in the same position, and, hence, the relation and interference from other objects should not be severe; (2) that considering the bottom-up encoding conditioning on the input image, each object should know what is happening in its immediate surroundings and thus should not need to communicate; and (3) that in the temporal general setting, the past behavior (trajectory) of an object ought to provide strong signals for the inference of an object\\u2019s latent in the future time step. (We will clarify these points more thoroughly in the revision.) Based on this reasoning, questioning the conclusion in the SPAIR paper, we propose our novel parallelization approach and show empirically that our insights and reasoning are correct, as R1 agrees that \\u201cthe method seems to work.\\u201d Importantly, the SPAIR authors also confirmed via personal communication that they also recently realized that parallelization without performance degradation is possible even if they didn\\u2019t know it when they published SPAIR. This confirms that our findings constitute new knowledge correcting a false narrative on an important problem. \\n\\nIn any case, the principal contribution of this paper should be considered from the perspective of sequential modeling. As described in the paper, it is highly non-trivial to make a sequential approach scalable. This is mainly because of the problem of combining a set of propagated objects with a newly discovered set of objects. This bipartite matching problem in object-oriented sequential representation learning has not been noticed in the community before because SQAIR is fully sequential for processing these objects. We found that this is an important problem to deal with in order to scale up the model beyond the previous state-of-the-art of operating on just a few objects. To resolve this, we devise our proposal-and-rejection mechanism, which may be considered as a key contribution along with the identification of the problem itself. 
Furthermore, demonstrating the feasibility of scaling up such a model to nearly a hundred objects with dynamic backgrounds as well as complex natural scenes should be another dimension of the contribution, considering that the previous state-of-the-art involved operating on a few MNIST digits.\\n\\n** Comparison to SPAIR (\\u201cnot compared against SPAIR\\u201d)\\n\\nUnlike SQAIR, SPAIR is not a temporal model. It only works on static images, not on video feeds. Hence, it cannot track objects, but would need to re-discover objects for each image without providing any tracking information. It does not deal with propagation and discovery (everything should be re-discovered). It does not deal with the background. Given all these reasons, despite our discovery mechanism being partly inspired by SPAIR, a comparison with SPAIR is not an obvious point for evaluation regarding our claimed contributions in the paper. Note that our main claim is that the parallel discovery combined with temporal propagation modeling should be better than SQAIR. We thus believe, as pointed out by R1, that a comparison to SQAIR is a more reasonable one, which we further explain below.\"}",
"{\"title\": \"Response to Blind Review #1 (2/3)\", \"comment\": \"** Baseline (Comparison to SQAIR)\\n\\nWe entirely agree that SQAIR should be our baseline. Before discussing our update plan regarding the baselines, we first would like to emphasize the difficulty in making our main baseline SQAIR work. To researchers working on this problem, it is well-known that SQAIR is very unstable and difficult to train even for a few objects. Additionally, it is almost impossible to train it beyond a few objects. As evidence, in the SQAIR paper, the authors only evaluate up to 2 objects for MNIST and 3 objects for the DukeMTMC dataset, although it is obviously more interesting to test beyond this trivial number of objects. Moreover, there have been a few methods following the SQAIR framework [1, 2, 3], but none of them has ever reported any results beyond a few objects. Further, we have thus far assigned the task of reproducing SQAIR to 8 students, and none of them have ever succeeded in making it work beyond these \\u201cfew objects\\u201d settings. Finally, via personal communication, the SPAIR authors also confirmed that SQAIR does not work beyond a few objects for them either. Considering all these observations, it seems fairly reasonable to conclude that SQAIR is extremely difficult or almost impossible to scale beyond a few objects. In our paper, we also provide a reason why SQAIR may suffer from this scalability problem, which can also be considered as a minor contribution. \\n\\nNevertheless, we agree that it is reasonable to provide a comparison to SQAIR for settings where SQAIR can work, namely scenes with up to 3~4 objects. In our revision, we will add this comparison. Also, an additional computational efficiency comparison over SQAIR and the proposed method will also be included in the experiment section.\\n\\nFurther, for all density settings, including those where SQAIR totally fails, we will also provide a comparison to VRNN in terms of the generation/reconstruction quality. Regarding this, we would also like to note that the tracking is not the sole purpose of generative models such as SQAIR and SCALOR. The rendering performance is also an important factor. The purpose of this experiment is to show that our model can learn an object-oriented sequential representation without impeding the generation quality. Note that in general, introducing such a discrete (object-oriented) representation comes with some performance degradation because it limits the model space and optimization, compared to continuous representations. Thus, our goal is not to be significantly better than VRNN. Rather, achieving a comparable level of performance ought to be a sufficient achievement, given that our model learns useful structured representations.\\n\\n[1] Stani\\u0107, A., & Schmidhuber, J. (2019). R-SQAIR: Relational Sequential Attend, Infer, Repeat. arXiv preprint arXiv:1910.05231.\\n\\n[2] Kossen, J., Stelzner, K., Hussing, M., Voelcker, C., & Kersting, K. (2019). Structured Object-Aware Physics Prediction for Video Modeling and Planning. arXiv preprint arXiv:1910.02425.\\n\\n[3] Akhundov, A., Soelch, M., Bayer, J., & van der Smagt, P. (2019). Variational Tracking and Prediction with Generative Disentangled State-Space Models. arXiv preprint arXiv:1910.06205.\"}",
"{\"title\": \"Response to Blind Review #1 (3/3)\", \"comment\": \"** Comparison to traditional tracking methods and is SCALOR (and SQAIR) a tracking model?\\n\\nOther similar works such as SQAIR do not conduct such a comparison because existing tracking methods are either supervised, not probabilistic, or cannot learn to render. Our contribution is not in the space of supervised tracking or non-generative modeling. For these reasons, we believe that SQAIR and VRNN (for generation quality) should be the proper baselines to compare to. However, we will make sure to better acknowledge previous work on tracking.\\n\\n** How do these more difficult settings (in experiments 2-4 of section 5.1) compare to the \\u201cdefault\\u201d one. \\n\\nThis is a good point. Thanks for pointing this out. We will provide a comparison to our default settings.\\n\\n** In sec 5.2., given that the ground truth tracks are not available, evaluating tracking is challenging, but one could still compare NLL/recon with appropriate baselines.\\n\\nWe will provide a comparison to VRNN in our revision. Note that SQAIR cannot be used because it does not work for that number of objects and is unable to cope with the background rendering. Hence, it seems that the best one can do is to compare NLL/recon to VRNN. We hope that the reviewer understands the difficulty of this research due to the total failure of the baseline in high-density settings.\\n\\n** Why compare to VAE & why our NLL is not better than VAE.\\n\\nThe goal of the comparison is to show that our model achieves its main goal (learning object-oriented representations) without losing the generation quality. Thus, VAE, which does not need to model temporal information and object-level representations is the appropriate baseline to evaluate the generation quality. Regarding the fact that the NLL of SCALOR is not better than that of VAE, it is actually a common misconception to expect a better generation quality for a discrete representation learning model like ours. Although a discrete structure in neural networks provides many advantages such as interpretability and compositionality, it generally comes with some performance degradation because it significantly limits the model space and optimization performance, compared to continuous representations. So, it is noteworthy to devise a model that uses the power of discrete latent representations while still having generation quality comparable to continuous models.\\n\\n** Supplementary Video\\n\\n> We have made a project website where you can find videos. Link: https://sites.google.com/view/scalor/home\\n\\n** Minor Comments\\n\\n> Thanks for the comments. All of these comments make sense, and we will incorporate all of them in our revision.\"}",
"{\"title\": \"Response to Blind Review #3\", \"comment\": \"Thank you for the suggestion. Yes, in the revision, we will add two additional experiments in a \\u201cVery Low Density (VLD)\\u201d setting, containing up to 4 objects, in which SQAIR works properly, and several different metrics to compare our method to SQAIR quantitatively. As we show quantitatively, our method can outperform SQAIR even in those settings, leading to more accurate and consistent bounding boxes.\"}",
"{\"title\": \"For All Reviewers\", \"comment\": \"We thank all the reviewers for taking the time to read our paper and provide insightful feedback and suggestions. We will upload a new version of the paper. The main focus of the revision is to provide quantitative comparisons to baselines, which was the main concern about the paper. For this, we performed a significant amount of additional experiments to provide the following quantitative evaluation:\\n\\n1) We add two additional \\u201cVery Low Density\\u201d experiment settings for Moving MNIST and Moving dSprites and introduce SQAIR as the baseline for comparison. \\n2) We also introduce VRNN as a baseline for all experimental settings to compare the reconstruction quality\\n3) An ablation study on the proposal-and-rejection mechanism is added to the experiment section.\\n4) We add an additional comparison experiment between SCALOR and SQAIR with respect to computational efficiency. Inference latency and training convergence time are used as metrics.\\n\\nWe also make the following general updates\\n\\n5) To better illustrate the proposed architecture, we add an architecture diagram for the overall structure on the main text and also append pseudo-codes for each component in the appendix to explain each part in detail.\", \"we_have_also_created_a_project_webpage_with_additional_qualitative_examples_and_video_of_scalor\": \"https://sites.google.com/view/scalor/home\\n\\nWe will respond to each reviewer\\u2019s points in detail in the comments below. We believe we have addressed each reviewer\\u2019s concerns and look forward to hearing feedback about the updated version of our paper. We hope the reviewers can take our responses and revisions into consideration when evaluating our final score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a generative model for scalable sequential object-oriented representation. The paper proposes several improvements based on the method SQAIR (Kosiorek et al. 2018b), (1) modeling the background and foreground dynamics separately; (2) parallelizing the propagation-discovery process by introducing the propose-reject model which reducing the time complexity. Finally, the proposed model can deal with orders of magnitude more objects than previous methods, and can model more complex scenes with complex background.\\n\\nAccept.\\nThe paper is clearly written and the experimental results are well organized. The results in the paper may be useful for unsupervised multi-objects tracking .I have one concern here, \\n\\uff081\\uff09As argued in the paper, previous methods are difficult to deal with the nearly a hundred objects situation and there is no direct comparison for these methods. So has the author compared the method SCALOR with previous methods in few objects setting? Does the technical improvements of the method benefit in the few objects setting?\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I thank the authors for the detailed rebuttal, as well as for the updates to the text and several new experiments in the revised version of the paper. Most of my comments are addressed well. I am happy to improve my rating and recommend to accept the paper.\\n\\n---\\n\\nThe paper proposes an approach for unsupervised detection and tracking of objects in videos. The method continues the \\\"Attend, Infer, Repear\\\" (AIR) and Sequential AIR (SQAIR) line of work, but improves on these previous approaches in terms of scalability and can thus be applied to scenes with tens of objects. The scalability is obtained by replacing, wherever possible, sequential processing of objects by parallel processing. Experiments are performed on three datasets: moving DSprites, moving MNIST, and the real-world \\\"Crowded Grand Central Station\\\" dataset. The method seems to work in all cases, as confirmed by quantitative evaluation on the first two datasets and a qualitative evaluation on all three.\\n\\nI recommend rejecting the paper in its current state. On one hand, the results look quite good, and the method seems to indeed scale well to many objects. On the other hand, novelty is limited and the experiments are limited in that there are no comparisons with relevant baselines and no clear experiments showing the specific impact of the architectural modifications proposed in this paper. Moreover, the paper is over 9 pages, which, as far as I remember, requires the reviewers apply \\\"higher standards\\\". Overall, I would like the authors to comment on my concerns (below) and may reconsider my rating after that.\", \"pros\": \"1) Relatively clear presentation.\\n2) Judging from the extensive qualitative results and the (limited) quantitative evaluation, the method seems to work.\\n3) I appreciate the additoinal results on generalization and parallel disovery in the appendix.\", \"cons\": \"1) Novelty of the work seems quite limited. The main contribution is in improving the efficiency of object detection/tracking by parallelizing the computation, without much conceptual innovation. This might be a sufficient contribution (in the end, efficiency is very important for actually applying methods in practice), but then a thorough evaluation of the method would be expected (see further comments about it further). Moreover, the previously published SPAIR method by Crawford and Pineau seems very relevant and related, but is not compared against and is only briefly commented upon, despite the code for that method seems to be available online. I would like the authors to clarify the relation to SPAIR and preferably provide an experimental comparison.\\n\\n2) The experiments are restricted. While there are quite many qualitative results, several issues remain:\\n2a) No baseline results are reported. It is thus impossible to judge if the method indeed improves upon related prior works. In particular, comparisons to SQAIR and SPAIR would be very useful. If possible, it would be useful to provide even more baselines, for instance some traditional tracking methods. Comparisons can be both in terms of tracking/reconstruction performanc, as well as in terms of computational efficiency. 
Both can be measured as functions of the number of objects in the scene.\\n2b) There are few quantitative resutls. In experiments 2-4 of section 5.1 it seems it would be fairly easy to introduce some, in particular, one could compare how do these more difficult settings compare to the \\\"default\\\" one. In section 5.2 given that the ground truth tracks are not available, evaluating tracking is challenging, but one could still compare NLL/reconstruction with appropriate baselines. The provided comparison to a vanilla VAE in therms of NLL is actually somewhat confusing - not sure what it tells the reader; moreover, I would actually expect the NLL of the proposed structured model to be better. Why is it not?\\n2c) Since the paper is largely about tracking objects through videos, it would be very usefuly to include a supplementary video with qualitative results. \\n\\n3) (minor) Some issues with the presentation:\\n3a) I found the method description at times confusing and incomplete. For instance, it is quite unclear which exactly architectures are used for different components. The details can perhaps bee looked up in the SQAIR paper, but it would still be useful to summarize most crucial points in the supplementary material to make the paper more self-contained.\\n3b) The use of \\\\citet vs \\\\citep is often incorrect. \\\\citet should be used when the author names are a part of the text, while \\\\ciptp if the paper is cited in passing, for instance: \\\"Smith et al. (2010) have shown that snow is white.\\\" vs \\\"It is known that snow is white (Smith et al. 2010).\\\" \\n3c) Calling AIR a \\\"seminal work in the field of object detection\\\" is not quite correct - object detection is a well-established task in computer vision, and AIR is not really considered a seminal work in that field. It is a great paper, but not really in object detection.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"UPDATE: My original main concern was the lack of baseline, but during the rebuttal period the authors have conducted the request comparison and addressed my questions satisfactorily. Therefore, I would recommend the paper be accepted.\\n\\n---\", \"summary\": \"This paper proposes a generative model and inference algorithm for discovering and propagating object latents in a way that scales to hundreds of objects. The key components of their approach is the parallel discovery and propagation of object latents as well as the explicit modeling of the background. The authors show that the model is able to model and generalize to scenes with hundreds of objects, including a real-world scene of a train station.\", \"research_problem\": \"This paper tackles the problem of scaling object-oriented generative modeling of scenes to scenes with a large number of objects.\\n\\nThe main weakness of the submission is the lack of a baseline, and mainly for this result I would recommend rejecting the submission at its current state. However, if the authors are able to revise the submission to include such comparisons (with Kosiorek et al. (2018), van Steenkiste et al. (2018), and Alahi et al. (2016), detailed below), then I would highly consider accepting the paper, as the paper makes a novel contribution to modeling scenes with many more objects than previous work, as far as I am aware.\", \"strengths\": [\"The authors show that the method can model various synthetic and real-world datasets that show the efficacy of the method\", \"The method can also generalize to more objects and longer timesteps than trained on.\"], \"weaknesses\": [\"The main weakness of the submission is the lack of a baseline. It would be important to understand the differences between Kosiorek et al. (2018), which the authors claim is the closest work to theirs, and van Steenkiste et al (2018), which also models objects in a parallel fashion. Alah et al. (2016) also takes an approach of dividing the scene into grid cells and also demonstrate results on modeling human trajectories.\", \"Motivation: Whereas the authors motivate the benefits for modeling objects, the motivation for specifically scaling to model hundreds of objects is less clear. It would be helpful for the authors to provide examples or arguments for the benefits of modeling so many objects at once. One argument against such a need is that humans only pay attention to a few number of objects at a time and do not explicitly model every possible object in parallel. One argument in favor of such a need is the ability to gain superhuman performance on tasks that could benefit from modeling multiple entities, such as playing Starcraft, detecting anamolies in medical scans, or modeling large scale weather patterns.\", \"A possible limitation of the method may be in modeling interactions between entities. What mechanism in the propagation step allows for modeling such interactions, and if not, how could such a mechanism be incorporated?\", \"How would SCALOR behave if the grid cells were smaller than the objects? In this case an object may occupy multiple grid cells. Would the authors provide an experiment analyzing this case? 
Would SCALOR model a object as multiple entities in this case (because the object spans multiple grid cells), or would SCALOR model the object with a single latent variable?\"], \"question\": \"- What is the fundamental reason for why the structure of such a generative model would cause the latents to model objects, rather than something else, such as the image patches that show the empty space between objects?\\n\\nAlahi, A., Goel, K., Ramanathan, V., Robicquet, A., Fei-Fei, L., & Savarese, S. (2016). Social lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 961-971).\\n\\nVan Steenkiste, S., Chang, M., Greff, K., & Schmidhuber, J. (2018). Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353.\\n\\nKosiorek, A., Bewley, A., & Posner, I. (2017). Hierarchical attentive recurrent tracking. In Advances in Neural Information Processing Systems (pp. 3053-3061).\"}"
]
} |
Hye4KeSYDr | Evaluations and Methods for Explanation through Robustness Analysis | [
"Cheng-Yu Hsieh",
"Chih-Kuan Yeh",
"Xuanqing Liu",
"Pradeep Ravikumar",
"Seungyeon Kim",
"Sanjiv Kumar",
"Cho-Jui Hsieh"
] | Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive ways to explain a model. In this paper, we establish the link between a set of features and a prediction with a new evaluation criterion, robustness analysis, which measures the minimum tolerance of adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides the most robust support for a current prediction, and can also extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack. By applying this methodology to various prediction tasks across multiple domains, we observed that the derived explanations indeed capture the significant feature set both qualitatively and quantitatively. | [
"Interpretability",
"Explanations",
"Adversarial Robustness"
] | Reject | https://openreview.net/pdf?id=Hye4KeSYDr | https://openreview.net/forum?id=Hye4KeSYDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"g6oIocKslj",
"H1xHFDK_ir",
"BkeOnUYdjr",
"SkeCBrK_jB",
"ryejC4tdiS",
"Hke7Y4KuiH",
"rkeXJZY_or",
"ryxIu2xOqS",
"B1l10XwCtS",
"H1xoUkao_H",
"BJxBpOUodr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1576798748996,
1573586812998,
1573586607668,
1573586245551,
1573586130824,
1573586042926,
1573585115148,
1572502637759,
1571873735130,
1570651987246,
1570625725242
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2432/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2432/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2432/Authors"
],
[
"~TING_TING_SUN1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes an approach for finding an explainable subset of features by choosing features that simultaneously are: most important for the prediction task, and robust against adversarial perturbation. The paper provides quantitative and qualitative evidence that the proposed method works.\\n\\nThe paper had two reviews (both borderline), and the while the authors responded enthusiastically, the reviewers did not further engage during the discussion period.\\n\\nThe paper has a promising idea, but the presentation and execution in its current form have been found to be not convincing by the reviewers. Unfortunately, the submission as it stands is not yet suitable for ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #4 (Cont.)\", \"comment\": \"\", \"q6\": \"The Reg-Greedy algorithm is a major contribution of this paper, but receives very little explanation. Indeed, perhaps the clearest quantitative statement of the paper is that Reg-Greedy beats Greedy. Is this a common method for optimising w.r.t. a subset? Is it similar to other methods?\", \"a6\": \"Thank you for the suggestion. We agree that Reg-Greedy is indeed a very interesting approach for optimizing w.r.t. a subset. To the best of our knowledge, we have not noticed this method being used in the literature, but we can note the following connections to several different families of methods. First of all, as discussed in the paper, one main motivation of using regression over the greedy approach is that pure greedy only considers individual influence of a single feature on the objective function separately for each feature. This short-term, or one-step look ahead, can actually be very noisy and thus taking such greedy path along the way may not benefit the long-term optimization goal. By ignoring the interaction between features, greedy can easily fail to capture the set of features that together might have the greatest influence on the objective function.\\nIn fact, the greedy approach can be viewed as a special case of Reg-Greedy where the sampled subset $Q$ (in Eq. 6) in each iterative step contains exactly the one-hot encoded vectors with the \\\"on\\\" indices correspond to the remaining feature indices. That is, each one-hot vector indicates the inclusion of a corresponding single feature into the relevant set. In this case, the coefficients of the learned linear regression would be equivalent to the difference in objective value before and after the corresponding feature is included into the relevant set. To take into account feature interactions, Reg-Greedy samples from the whole distribution of $\\\\{0, 1\\\\}^d$ where most of the sampled vectors in $Q$ contains multiple \\\"on\\\" indices. In this way, the learned regression captures feature correlations on the objective value and could smooth out possible noises encountered by greedy. In fact, there has been a great line of research on studying the interaction between features including the well-known Shapley value which tackles the problem through cooperative game theory perspective. And (Lundberg and Lee, NIPS 2017) proposed a way to use regression with a special kernel to approximate the Shapley value. However, sampling from the whole distribution of $\\\\{0, 1\\\\}^d $ could still incur exponential complexity, and using only a reasonable amount of samples might not be able to precisely capture the behavior of the highly non-linear objective function. As a result, by combining regression in a greedy procedure, we are able to gradually narrow down our sampling space (by sampling only from a restricted domain), focusing on the feature interactions between remaining features and the ones that are already added into the relevant set. This enables us to find from the remaining features that have the greatest interaction with the current relevant set, and could in turn maximally optimize the objective value when added into the relevant set. The iterative approach thus gives the main advantage of Reg-Greedy over one-step regression. 
We have enriched our discussion on Reg-greedy in the revision.\", \"q7\": \"Overall polishing of the paper\", \"a7\": \"We have revised the typos and incorporated your suggestions on paper writing to make the paper neater and easier to read. We will conduct another round of paper polishing in the revision.\"}",
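The Reg-Greedy procedure described in the response above can be summarized in a short sketch. This is an illustrative reconstruction, not code from the paper: `objective`, `n_samples`, and the uniform subset sampling are placeholder choices, and the paper's Eq. 6 may differ in details such as the sampling distribution.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def reg_greedy(objective, d, k, n_samples=200, rng=None):
    """Greedy feature selection guided by linear regression. `objective(mask)`
    scores a candidate relevant set given as a boolean mask of length d
    (higher is better). Returns the indices of the k selected features."""
    rng = np.random.default_rng(rng)
    relevant, remaining = [], list(range(d))
    for _ in range(k):
        # Sample subsets from the restricted domain: already-chosen features
        # are always "on", remaining features are switched on at random.
        Q = np.zeros((n_samples, d))
        Q[:, relevant] = 1.0
        Q[:, remaining] = rng.integers(0, 2, size=(n_samples, len(remaining)))
        y = np.array([objective(q.astype(bool)) for q in Q])
        # Regression coefficients estimate each remaining feature's marginal
        # contribution, including its interaction with the current set.
        coef = LinearRegression().fit(Q[:, remaining], y).coef_
        best = remaining[int(np.argmax(coef))]
        relevant.append(best)
        remaining.remove(best)
    return relevant
```

Note that restricting the one-hot case recovers plain greedy, which is the special case discussed in the response.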
"{\"title\": \"Response to Reviewer #4\", \"comment\": \"Thank you for your careful review and your helpful comments. We have incorporated your suggestions into our revision and addressed the questions raised as follows.\", \"q1\": \"Quantitative experiments which seem to only test how well each method works in relation to the very metric which only your proposed method directly optimises - this is nice but not surprising.\", \"a1\": \"We agree that it is not too surprising the proposed method has the best performance on the criteria it is explicitly designed to optimize; though it serves as a sanity check of our method. To more objectively showcase the usefulness of the proposed method, we conduct an additional set of experiments comparing our method with other explanations on varied existing commonly adopted quantitative measurements in the literature. In particular, we adopt the Deletion and Insertion criteria (Petsiuk et al., BMVC'18) which are generalized variants of the region perturbation criterion (Samek et al., Trans NNLS'16). The deletion criterion measures the drop in the probability of a class as top-relevant features (given by the explanation) are progressively removed from the input. Similar to our proposed criterion Robustness-$S_r$, a quick drop, and thus a small area under the curve, suggests a good explanation as the \\\"selected\\\" relevant features indeed greatly influence the prediction. On the other hand, the insertion criterion measures the increase in the probability of a class as top-relevant features are gradually introduced into the input. In the experiments, we follow (Samek et al., Trans NNLS'16) to remove features by setting their values to randomly sampled values. We plot the evaluation curves and report corresponding AUCs in Appendix B. On these additional two criteria, we observe that our proposed method consistently performs favorably against other explanations. The t-test results in Table 7 (Appendix C) also indicate such outperformance is indeed significant.\", \"q2\": \"Also, the way you define the AUC of your measure seems a little strange.\", \"a2\": \"Our definition of AUC simply measures the area under evaluation curves as the ones shown in Figure 1. Since for all explanations, we evaluate their Robustness-$\\\\overline{S_r}$/Robustness-$S_r$ on the same set of varying sizes of relevant set (from 0% to 45% of the total number of features), the AUC is in fact proportional to the average Robustness-$\\\\overline{S_r}$/Robustness-$S_r$ evaluated at different points along the x-axis. The AUC reflects the overall explanation quality by assuming different amount of underlying true relevant features. We have clarified the definition of AUC in the revision.\", \"q3\": \"Your curves in appendix A don't seem to have the same number of points for each method in all cases?\", \"a3\": \"For the figures in Appendix A, the reason why some curves seem to have less points than others is because we omit the points whose value is too high to fit into the scale of y-axis in the plot. We have clarified this point in the revision.\", \"q4\": \"I don't know why different baselines appear in different comparisons as in e.g. tables 1 and 2. It seems as though not all baselines are included in all examples (even figures 3 and 5 (figure 6 in current revision), which are analogous, include different baselines).\", \"a4\": \"In Table 2, we originally omit the results for SHAP on ImageNet since it has known to be computationally prohibiting on high-dimensional images. 
In the revision, we have included its results on ImageNet by implementing 8 by 8 super-pixel to reduce the dimension of feature space. We have included both of its quantitative and qualitative results in Table 2 and Figure 6 respectively. In addition, for Figure 3 and Figure 6, we have included consistent set of baselines for the qualitative results.\", \"q5\": \"Qualitative examples with images and text. These are nice, but alas only qualitative.\", \"a5\": \"We agree that qualitative results cannot serve as the only measurement to evaluate explanations. As a result, we use different quantitative criteria to complement the findings from the qualitative visualizations, and hope both objective and subjective results together could bring more insights into different explanation strategies.\"}",
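For concreteness, the deletion criterion referenced in the responses above can be computed roughly as follows. This is a minimal sketch under stated assumptions: `model` is assumed to return a vector of class probabilities, features are removed by replacing them with random values (following Samek et al., as the responses describe), and all names are illustrative rather than the paper's API.

```python
import numpy as np

def deletion_auc(model, x, ranking, target, n_steps=20, rng=None):
    """Deletion metric (Petsiuk et al., 2018): remove the top-ranked
    features in order and record the model's probability for the target
    class. A fast drop (small AUC) indicates a faithful explanation."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float).copy().ravel()
    lo, hi = x.min(), x.max()            # fixed range for random fill values
    probs = [model(x)[target]]
    per_step = max(1, len(ranking) // n_steps)
    for i in range(0, len(ranking), per_step):
        idx = list(ranking[i:i + per_step])       # next most-relevant features
        x[idx] = rng.uniform(lo, hi, size=len(idx))  # "remove" by resampling
        probs.append(model(x)[target])
    return np.trapz(probs, dx=1.0 / (len(probs) - 1))  # area under the curve
```

The insertion variant is symmetric: start from a fully perturbed input and reintroduce the top-ranked features, where a fast rise (large AUC) is better.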
"{\"title\": \"Response to Reviewer #1 (Cont.)\", \"comment\": \"\", \"q6\": \"Could you provide any indication on whether the proposed method passes these sanity checks?\", \"a6\": \"To ensure that our proposed explanation does indeed reflect the model behavior, we conduct the sanity check proposed by (Adebayo et al., NeurIPS'18) to check if our explanations look different when the model parameters being explained are randomly re-initialized. In the experiment, we randomly re-initialize the last fully-connected layer of the neural network. We then compute the rank correlation between explanation computed w.r.t. the original model and that w.r.t. the randomized model. From Table 8 in Appendix D, we observe that our method has a much lower rank correlation comparing to Grad, IG, and LOO, suggesting that our method is indeed sensitive to model parameter change and is able to pass the sanity check.\", \"q7\": \"A similar capability (finding both crucial positive pixels as well as pertinent negative pixels) has also being reported earlier in Samek et al., Trans NNLS'16 and Oramas et al. ICLR'19. Since the visualization analysis (discussed in pag.6) focuses exclusively on this capability. There should be a comparison between the proposed method and the two mentioned works.\", \"a7\": \"In Samek et al., they showed that Layer-wise Relevance Propagation (LRP) (Bach et al., PLOS ONE) has the capability of capturing both crucial positive pixels as well as pertinent negative pixels. This capability of LRP in fact depends on the input range where inputs are normalized to have zero mean and a standard deviation of one in Samek et al. In this case, the black background will have non-zero value, and LRP would have non-zero attributions on the black background pixels which allows the explanation to capture pertinent negative features. However, as later on shown in Dhurandhar et al. (NeurIPS'18), if the input scale is in the range of [0, 1] (where background pixels have the values of 0), LRP failed to highlight the pertinent negative pixels, as background would always have the zero attribution (since LRP is equivalent to Grad * Input in a ReLu network as shown in Ancona et al., ICLR'18).\\nIn Oramas et al. ICLR'19, a similar capability of highlighting the pertinent negative features has also been reported. Although their explanation method could capture both pertinent positive as well as pertinent negative that supports the prediction, their method is not explicitly designed for answering the question of \\\"what are the important features that leads to the prediction of A but not B''. In other words, unlike our method where we can specify different B and observe different explanations given, their method cannot handle such requests by design. Users in fact need to infer what target class the pertinent negative features are suggesting against. Finally, we compare our explanation with theirs in Appendix E Figure 13 (we borrow the results from Oramas et al. for visualization of their method). Qualitatively, we also observe that our method seems to be giving the most natural explanations. For example, in the first row of left image where the highlighted features are against the class 0, in addition to the left vertical gap (which when presence would make 2 looks like a 0) that is roughly highlighted by all three methods, our method is the only one that highlights the right tail part (green circled) of the digit 2 which might also serve as crucial evidence of 2 against 0. 
Furthermore, as we change the targeted class to 7 (the second row), while LRP seems to be providing similar explanations, we observe that our explanation has a drastic change and highlights the green circled part which when turned off will make 2 becomes a 7. These results might suggest our method is more capable of handling such targeted explanation task.\"}",
"{\"title\": \"Response to Reviewer #1 (Cont.)\", \"comment\": \"\", \"q3\": \"Given the comparable performance achieved by GRAD and its relative simplicity, it would be hard to motivate why not choose GRAD instead of the proposed method? Could you provide some discussion on this?\", \"a3\": \"While we do observe that Grad has somewhat competitive performance on the proposed criteria, it has also been known that Grad has different shortcomings as well. One such caveat is its notorious \\\"saturation\\\" problem. For example, consider a binary classification model that takes two-variable input $f(x_1, x_2) = sign(1 - ReLU(1 - x_1) - 0.5)$ (inspired by Sundararajan et al., ICML'17). Naturally, one would regard $x_1$ as relevant and $x_2$ as irrelevant since the classification result in fact only depends on $x_1$. However, for a given input $(x_1, x_2) = (2, 5)$, the explanation (attribution score) provided by Grad will be $(0, 0)$ (as the function becomes flat at $x_1 = 1$), which fails to distinguish between relevant and irrelevant features. In this case, our proposed method could still distinguish between the relevance level of $x_1$ and $x_2$, as the Robustness-$S_r$ will be $1.5$ if $S_r = \\\\{x_1\\\\}$; and infinity if $S_r = \\\\{x_2\\\\}$. The main difference (and perhaps also advantage) of our method over Grad is that our method explains the model behavior from a more global viewpoint, as opposed to Grad which only consider the function sensitivity to each individual feature locally.\", \"q4\": \"It is stated the the proposed regression-greedy method outperforms other methods in these criteria. In my opinion this trend shouldn't be surprising given the fact that the proposed method is specifically optimized on such criteria as it is clearly stated by the title of Sec.3.\", \"a4\": \"We agree that it is not too surprising the proposed method has the best performance on the criteria it is explicitly designed to optimize; though it serves as a sanity check of our method. To more comprehensively showcase the usefulness of the proposed method, we conduct an additional set of experiments comparing our method with other explanations on various existing commonly adopted quantitative measurements in the literature. In particular, we adopt the Deletion and Insertion criteria (Petsiuk et al., BMVC'18) which are generalized variants of the region perturbation criterion (Samek et al., Trans NNLS'16). The deletion criterion measures the drop in the probability of a class as top-relevant features (given by the explanation) are progressively removed from the input. Similar to our proposed criterion Robustness-$S_r$, a quick drop, and thus a small area under the curve, suggests a good explanation as the \\\"selected\\\" relevant features indeed greatly influence the prediction. On the other hand, the insertion criterion measures the increase in the probability of a class as top-relevant features are gradually introduced into the input. In the experiments, we follow (Samek et al., Trans NNLS'16) to remove features by setting their values to randomly sampled values. We plot the evaluation curves and report corresponding AUCs in Appendix B. On these additional two criteria, we observe that our proposed method consistently performs favorably against other explanations. 
The t-test results in Table 7 (Appendix C) also indicate such outperformance is indeed significant.\", \"q5\": \"Perhaps it would be more informative to have a heatmap highlighting/grading the entire input space.\", \"a5\": \"In the revision, we have included more visualization results with heatmaps indicating the relative importance of different features in Appendix F.\"}",
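The saturation example in the response above can be verified numerically. The snippet below is only a check of that toy case: `score` denotes the pre-sign decision value of $f$, and a finite-difference gradient stands in for Grad.

```python
# Toy model from the response: f(x1, x2) = sign(1 - ReLU(1 - x1) - 0.5).
relu = lambda z: max(z, 0.0)
score = lambda x1, x2: 1.0 - relu(1.0 - x1) - 0.5   # pre-sign decision value

x1, x2 = 2.0, 5.0
eps = 1e-6
# Finite-difference gradient of the score at (2, 5): both components are 0,
# since the function is flat for x1 > 1 and never uses x2, so Grad cannot
# separate the relevant x1 from the irrelevant x2.
g1 = (score(x1 + eps, x2) - score(x1, x2)) / eps
g2 = (score(x1, x2 + eps) - score(x1, x2)) / eps
print(g1, g2)  # 0.0 0.0

# Robustness-S_r, by contrast, asks for the smallest perturbation restricted
# to S_r that flips the prediction: with S_r = {x1} the sign flips once
# x1 < 0.5, i.e. at distance 1.5 from x1 = 2; with S_r = {x2} no perturbation
# ever flips it, so the tolerance is infinite.
```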
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your careful review and your helpful comments. We have incorporated your suggestions into our revision. We address the questions raised in the response below.\", \"q1\": \"The amount of relevant/irrelevant features is unknown beforehand. In that case the proposed AUC-based method seems more adequate. Could you comment on this? Could you indicate how this size K is defined in practice? Is there a principled way to define it? What is the effect of this parameter on the performance of the proposed method?\", \"a1\": \"One general goal of feature-based explanations is to extract a \\\"compact\\\" set of relevant features for a given model prediction, since the most straightforward yet vacuous explanation is simply highlighting all features as relevant (which does not constitute a meaningful explanation). However, because the number of true relevant features is in general unknown beforehand (as Reviewer #1 notes), the predominant approach recent papers have considered is to output the top-K important features, for varying values for K. For example, in attribution methods such as Grad and IG, we could take the top-K features with the highest attribution scores. And K is usually set to varying values so that we generate relevant feature set explanations of different sizes. Similarly, in our proposed method, we allow users to set the value of K such that our explanation could identify the top-K most important features to the prediction. In our experiments, we vary the value of K such that our explanations provide sets of relevant features of sizes 5%, 10%, ..., 50% of the total number of features. Then, for each of these relevant sets with differing sizes, we could apply the proposed evaluation criteria to evaluate their quality, which yields a single evaluation curve shown in Figure 1.\\n\\nSuch evaluation curves measure the quality of an explanation by considering differing sizes of relevant features, and the AUC then reflects the overall quality of the explanation. In the case where users have no knowledge about the number of relevant features, our paper thus suggests the use of AUC of the evaluation curve, which as the reviewer notes is indeed more than adequate as an evaluation. But to also provide a rationale for evaluations of differing values of K: it provides the quality of the relevant sets along multiple points on the evaluation curve, instead of a single numerical summary. And in some special use cases, the users might indeed be interested in a pre-defined size of relevant set, e.g. top-20% relevant features. But as the reviewer suggests, in our paper, we do recommend the use of AUC, which we also use to compare across different explanations, in addition to plotting the whole evaluation curves to illustrate the performances of different explanations at various sizes of relevant set.\", \"q2\": \"When comparing the performance across the different methods, are these random 50 examples fixed or always re-sampled? Also, given the size of the considered datasets, where the number of images in their test sets is in the order of the thousands, it is hard to grasp how representative are the reported results?\", \"a2\": \"In the revision, we have increased the number of testing examples from 50 to 100. In the experiments, we first randomly sample 100 examples from the entire test set. We then compare different explanation methods on the same set of 100 examples. 
We find that the relative performance of explanations is not sensitive to the number of testing examples being changed from 50 to 100. To further justify that the proposed method indeed enjoys a better performance on different criteria (our proposed criteria and other existing criteria which we shall introduce later), we conduct pairwise Student's t-test comparing the performances of our proposed method and other existing explanations. As shown in Appendix C, our method does enjoy a statistically significantly better performance over other explanations across various criteria.\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank all reviewers for their constructive and helpful comments. We have incorporated their suggestions into our revision, and have hopefully addressed all of their questions raised. In brief: (a) we conduct an additional set of experiments evaluating our proposed method via a suite of existing commonly adopted quantitative measurements, (b) we ran a Student's t-test to positively verify the statistical significance of the performance improvements of our proposed method over existing explanations on various criteria, (c) we conducted a sanity check based on model parameter randomization and verified that our proposed explanation does indeed pass the test, and finally, (d) we significantly enriched the comparisons, discussions, and make clarifications in our manuscript.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"SUMMARY\\n\\nThe authors propose an intuitive new measure (definition 2.1) of feature importance based on a robustness criterion (with two variants of equations 3 and 4). Two optimisers are proposed for finding the most important features according to this measure, in order to explain why a classifier is making a certain prediction.\\n\\nThe experiments are both qualitative (showing pixel-wise importance maps for image problems) and quantitative (showing how well the various feature importance scoring algorithms do albeit on the measure explicitly optimised by the new proposed algorithms).\\n\\nCOMMENTS\\n\\nThe paper organisation is clear enough though the English needs a little work to make it read really nicely.\\n\\nThe proposed measures are natural and intuitive. Especially the Robustness-$\\\\hat{S_r}$ is an interesting twist on the more obvious Robustness-$S_r$.\\n\\nWhile the measures are interesting, the justification for them is somewhat weak. This amounts to \\n\\n1) Quantitative experiments which seem to only test how well each method works in relation to the very metric which only your proposed method directly optimises - this is nice but not surprising. Also, the way you define the AUC of your measure seems a little strange. I don't know why different baselines appear in different comparisons as in e.g. tables 1 and 2. Finally your curves in appendix A don't seem to have the same number of points for each method in all cases? \\n\\n2) Qualitative examples with images and text. These are nice, but alas only qualitative. Also, it seems as though not all baselines are included in all examples (even figures 3 and 5, which are analogous, include different baselines). \\n\\nIt seems like the paper needs either 1) a quantitative evaluation that is not subjective. Surely, the evaluation metric should not match the (novel) objective of the proposed method? Or, 2) a theoretical result in support of the new measures.\\n\\nThe Reg-Greedy algorithm is a major contribution of this paper, but receives very little explanation. Indeed, perhaps the clearest quantitative statement of the paper is that Reg-Greedy beats Greedy. Is this a common method for optimising w.r.t. a subset? Is it similar to other methods? I felt that Reg-Greedy is a really nice idea but the paper did not do it justice.\\n\\nDETAILS\\n\\nPerhaps unifying (3) and (4) by defining a single g that subsumes both cases would be neater.\\n\\nPlease define g in equation (1) rather than in words after equation (4).\\n\\nIt should be S_r in the subscripts of (3) and (4)\\n\\n\\\"Crutial\\\" spelling\\n\\nFINALLY\\n\\nI'm open to be swayed on any of the above points, pending the author feedback.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The manuscript proposes a method for model explanation and two metrics for the evaluation of methods for model explanation based on robustness analysis. More specifically, two complementary, yet very related, criteria are proposed: i) robustness to perturbations on irrelevant features and ii) robustness to perturbations in relevant features. Moreover, different from existing works which defined the perturbation values following different somewhat-fixed procedures, the proposed method aims at allowing perturbations in any directions.\\n\\nA greedy algorithm optimizing these criteria is proposed in order to produce a method able to highlight important features of the input and justify/explain model predictions.\\nIn addition, the proposed robustness criteria are used as metrics to assess the performance of methods for model explanation.\\n\\nExperiments on models addressing image classification and text classification, shows the performance of the proposed method w.r.t. existing work.\\n\\nThe manuscript has a good flow, and its content is easy to follow. The proposed method is sound, well motivated, and very well founded. The formal presentation of the proposed method is good. I appreciate the fact that evaluation covers different modalities of data, i.e. text and images.\", \"my_main_criticism_over_the_manuscript_is_the_following\": \"In Sec. 3.3, when describing the first criterion, it is stated that the size |S_r|, i.e, the amount of anchors, could be defined by the used. In my opinion, this may not be applicable since in theory the amount of relevant/irrelevant features is unknown before hand. In that case the proposed AUC-based method seems more adequate. Could you comment on this?\\n\\n\\nIn Sec. 3, a pre-defined size K is introduced. Later in Sec. 3.1 it is stated that the greedy algorithm uses this size as a stopping criterion for the optimization of the proposed robustness criteria. Could you indicate how this size K is defined in practice? Is there a principled way to define it? What is the effect of this parameter on the performance of the proposed method? An ablation study focused on this parameter would provide further insights into the inner workings of the proposed method and would improve the manuscript.\\n\\n\\nIn Sec.4 it is stated that only 50 random examples are considered when reporting results. When comparing the performance across the different methods, are these random 50 examples fixed or always re-sampled? Also, given the size of the considered datasets, where the number of images in their test sets is in the order of the thousands, it is hard to grasp how representative are the reported results?\\n\\nIn the same paragraph discussed above, it is mentioned that the GRAD method performs competitively on the proposed criteria. It might be interesting to further positioning the proposed method w.r.t. GRAD. Given the comparable performance achieved by GRAD and its relative simplicity, it would be hard to motivate why not choose GRAD instead of the proposed method? Could you provide some discussion on this?\\n\\nIn Sec.4 (pag. 6) it is stated the the proposed regression-greedy method outperforms other methods in these criteria. 
In my opinion this trend shouldn't be surprising given the fact that the proposed method is specifically optimized on such criteria as it is clearly stated by the title of Sec.3.\\n\\nFig.3 and Fig.4 display binary images indicating the top-n features selected by different methods. Perhaps it would be more informative to have a heatmap highlighting/grading the entire input space. This may throw more light on the performance of the compared methods.\\n\\nIn Adebayo et al., NIPS'18 (and very related efforts), there are presented a set of sanity checks to be applied to explanation methods to ensure their predictions are relate to the class and model being predicted. Could you provide any indication on whether the proposed method passes these checks?\\n\\nIn Sec.4 (visualization) it is stated that the proposed method effectively highlights crucial positive pixels as well as pertinent negative pixels. A similar capability has also being reported earlier in Samek et al., Trans NNLS'16 and Oramas et al. ICLR'19. Since the visualization analysis (discussed in pag.6) focuses exclusively on this capability. There should be a comparison between the proposed method and the two mentioned works.\"}",
"{\"comment\": \"Hi,\\n\\nThank you for your comment.\\n\\nWe note that the assumption \\\"the model could tolerate a larger degree of perturbation on the less important and non-anchored features\\\" is analogous to the common assumption \\\"the model prediction does not change much when the less important features are removed\\\", which is adopted in several popular existing explanation evaluations. However, as we discuss in the paper (section 1 and section 5), the notion of feature removal is generally difficult to model and is often implemented by setting the feature value to zero or some random value in practice. Such removal practice inevitably is prone to introduce bias into the evaluation process. As a result, we instead consider the idea of prediction robustness as allowing perturbation is a much more general notion that does not introduce extra bias but with similar underlying meaning.\\n\\nSpecifically, the assumptions \\\"the model prediction does not change much when the less important features are removed\\\" and \\\"the model prediction change more when the more important features are removed\\\" are considered by SSR- and SDR-based evaluations [1], which later on became commonly adopted in the literature [2, 3]. Moreover, such assumptions also lie in fidelity-based attribution evaluations implicitly, since fidelity-based evaluations assume that the feature importance should be related to the average performance drop of the model when the feature is removed. A previous work [4] has shown that Shapley value for explanation also optimizes such a fidelity measurement with a specific perturbation prior.\\n\\nTherefore, we believe that our assumption falls in line with the majority of explanation evaluations but additionally resolves the caveat of removing features which introduces bias to the evaluation process (and thus the explanation). We agree that robustness and importance are two concepts, but we believe that they are related.\\n\\nWe have read the suggested related work which considers more on the problem of sensitivity of explanations to adversarial inputs. While we believe it is not directly related to our work, we will surely consider enriching our related work section to further include this line of studies.\\n\\n[1] Evaluating the visualization of what a deep neural network has learned. Wojciech Samek, Alexander Binder, Gr\\u00e9goire Montavon, Sebastian Lapuschkin, and Klaus-Robert M\\u00fcller. IEEE transactions on neural networks and learning systems, 28(11):2660\\u20132673, 2016.\\n[2] RISE: Randomized Input Sampling for Explanation of Black-box Models. Vitali Petsiuk, Abir Das, Kate Saenko. BMVC 2018.\\n[3] Explaining image classifiers by counterfactual generation. Chun-Hao Chang, Elliot Creager, Andrew A. Goldenberg, and David Kristjanson Duvenaud. ICLR 2019.\\n[4] On the (in)fidelity and sensitivity for explanations. Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, and Pradeep Ravikumar. Arxiv 2019.\", \"title\": \"Our assumptions fall in line with existing evaluation measurements\"}",
"{\"comment\": \"Hi, this is a great work but I have some questions.\\n\\nIn this paper, you assume that \\\"the model could tolerate a larger degree of perturbation on the less important and non-anchored features\\\". I wonder is there any work supports this assumption? Since robustness and importance are two concepts. Some important features can be quite robust to perturbation. Maybe it is more reasonable to evaluate the importance of features based on the theory of the Shapley value.\\n\\nBesides, I think this paper may be related to your work:\\nZhang X , Wang N , Shen H , et al. Interpretable Deep Learning under Fire. 2018. arXiv:1812.00891\", \"title\": \"Some questions and a related paper\"}"
]
} |
B1gNKxrYPB | Attributed Graph Learning with 2-D Graph Convolution | [
"Qimai Li",
"Xiaotong Zhang",
"Han Liu",
"Xiao-Ming Wu"
] | Graph convolutional neural networks have demonstrated promising performance in attributed graph learning, thanks to the use of graph convolution that effectively combines graph structures and node features for learning node representations. However, one intrinsic limitation of the commonly adopted 1-D graph convolution is that it only exploits graph connectivity for feature smoothing, which may lead to inferior performance on sparse and noisy real-world attributed networks. To address this problem, we propose to explore relational information among node attributes to complement node relations for representation learning. In particular, we propose to use 2-D graph convolution to jointly model the two kinds of relations and develop a computationally efficient dimensionwise separable 2-D graph convolution (DSGC). Theoretically, we show that DSGC can reduce intra-class variance of node features on both the node dimension and the attribute dimension to facilitate learning. Empirically, we demonstrate that by incorporating attribute relations, DSGC achieves significant performance gain over state-of-the-art methods on node classification and clustering on several real-world attributed networks.
| [
"2-D Graph Convolution",
"Attributed Graph",
"Representation learning"
] | Reject | https://openreview.net/pdf?id=B1gNKxrYPB | https://openreview.net/forum?id=B1gNKxrYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ljBHX9sphT",
"HkgwEaqhsH",
"Syg5Ic52oS",
"B1gWE95njB",
"SylRjYchsS",
"H1lM6AJEqB",
"Hkg7Wc8QqS",
"B1l3QabUKS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748964,
1573854510586,
1573853777526,
1573853737323,
1573853606278,
1572236986147,
1572198907157,
1571327267942
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2431/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2431/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2431/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2431/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2431/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2431/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2431/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper studies the problem of graph learning with attributes, and propose a 2-D graph convolution that models the node relation graph and the attribute graph jointly. The paper proposes and efficient algorithm and models intra-class variation. Empirical performance on 20-NG, L-Cora, and Wiki show the promise of the approach.\\n\\nThe authors responded to the reviews by updating the paper, but the reviewers unfortunately did not further engage during the discussion period. Therefore it is unclear whether their concerns have been adequately addressed.\\n\\nOverall, there have been many strong submissions on graph neural networks at ICLR this year, and this submission as is currently stands does not quite make the threshold of acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your positive and constructive feedback.\\n\\nQ1. Comparison with works on learning graph structures such as LDS (https://arxiv.org/pdf/1903.11960.pdf).\\n\\n>> First of all, we would like to thank the reviewer for pointing out this interesting work. In the revised manuscript, we have included some discussion on this line of research in the 3rd paragraph of section 2.\\n\\nWe have also conducted experiments on LDS. Since LDS cannot scale to the size of the 20 Newsgroup dataset (out of GPU Memory) used in our experiments, we follow the authors to test on a 10-category subset of the 20 NG. We then test LDS on this subset of 20 NG, L-Cora, and Wiki. For classification on each dataset, LDS uses 20 labels per class for training and extra 20 labels per class for validation (the algorithm requires validation). Note that we do not use any validation data for the proposed DSGC method for classification. Due to the differences in datasets and experimental setup, we do not include the results of LDS in Table 1.\\n\\nInstead, we report the results of LDS in Table 2 to see whether the proposed DSGC can be used to improve LDS. We incorporate DSGC into LDS as described in section 5.2 by applying attribute graph convolution on the node features before training. The results in Table 2 show that DSGC significantly improves LDS on Newsgroup and Wiki and slightly improves LDS on L-Cora. We have also tested another case of LDS without using the given node affinity graphs of the three datasets and observed similar results. \\n\\nThe experiments show that DSGC can complement and improve LDS, just as it can complement and improve other SOTA methods based on the regular 1-D graph convolution such as GCN/GAT/GraphSAGE as shown in Table 2.\\n\\n\\nQ2. The paper is densely written.\\n\\n>> As the reviewer suggested, we have reorganized sections 3 and 4 to make them more compact in the revised manuscript. In section 3, we intend to show how the proposed 2-D graph convolution DSGC is derived, which follows a similar path of the development of 1-D GCN (from \\u201cspectral networks\\u201d to \\u201cChebyNet\\u201d to \\u201cGCN\\u201d). In section 4, we want to provide some insights into why DSGC works by analyzing the variance reduction effect of node graph convolution and attribute graph convolution respectively. \\n\\nQ3. The empirical results are mixed.\\n\\n>> We have improved the presentation of the experiments in the revised manuscript. We kindly ask the reviewer to read section 7 about the experiments again. Our results are statistically significant. For datasets with good node affinity graphs such as 20 Newsgroup and L-Cora, the proposed 2-D graph convolution DSGC (GXF) significantly outperforms most SOTA methods. For datasets with bad node affinity graphs such as Wiki, the proposed 2-D graph convolution DSGC (GXF) still outperforms most SOTA methods by a large margin but is less effective than DSGC (XF) (since the node affinity graph G is bad). DSGC can also be used to significantly improve SOTA methods including GCN, GAT, LDS and GraphSAGE. Please refer to section 7 in the manuscript for more detailed explanation.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your positive and helpful feedback. As you suggested, we have further emphasized in section 7.2 of the revised manuscript that connectivity such as the hyperlinks in Wiki is not necessarily helpful.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your positive and helpful comments.\\n\\nQ1. \\u201cIs it possible to firstly do the propagation over the attribute graph using a 1-layer GCN, followed by another 1-layer GCN over the node relation graph, similar to dense graph propagation module in \\u201cRethinking knowledge graph propagation for zero-shot learning\\u201d?\\n\\n>> Yes, it is possible to do that, and the performance is expected to be similar as the proposed DSGC.\\n\\nQ2. Discussion on the two ways for constructing the attribute affinity graph.\\n\\n>>Thank you for the suggestion. For both classi\\ufb01cation and clustering, we observe that in most cases DSGC with PPMI can achieve better performance than with Emb. This shows the effectiveness of PPMI in capturing meaningful word relations based on information theory and statistics (Church & Hanks, 1989), whereas Emb only relies on a distance metric for measuring word similarity. We have also revised the manuscript to include discussion on this.\\n\\n\\nQ3. \\u201cOne motivation is about the low-degree nodes, where the attribute graph might help. It would be good to have a study on the performance of the methods on those low-degree nodes.\\n\\n>>This is a good point. Actually, we already did that. In our experiments, we compared the proposed attributed graph convolution DSGC (XF) with MLP. The former outperforms the latter by a very large margin on all the three datasets. Note that this is an extreme case where each node has 0 degree (GCN reduces to MLP in this case), which shows that attribute graph convolution works well even when there are no links between nodes. We have revised the manuscript to emphasize this point in section 7.2 as you suggested.\"}",
"{\"title\": \"To All: Manuscript Update\", \"comment\": \"We would like to thank all the reviewers for their valuable time and feedback.\\n\\nWe have incorporated their suggestion and revised the manuscript accordingly. Major changes include: 1) In section 7, we improve the presentation of experimental results and include comparison with a suggested baseline LDS; 2) We reorganize the content of section 3 and make section 4 more compact; 3) In section 2, we add some discussion of recent related work on learning graph structures for graph neural networks.\\n\\nWe will release source code and datasets to ensure the reproducibility of our results.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a new 2-D graph convolution method to aggregate information using both the node relation graph and the attribute graph generated using, e.g., PMI and KNN. The motivation does make sense that using 1-D convolution along the node dimension might not enough for learning representation for those low-degree nodes. Then, the attribute relation might be used to further smooth the node representation. The assumption could be that documents in a class are likely to consist of similar (related) words. To achieve this, the information aggregation along the node and the attribute dimension is implemented via a product of three matrices, the node graph convolutional filter computed from node affinity matrix, the attribute graph convolutional filter computed the attribute affinity matrix given by PMI or KNN, and the node-attribute matrix, which can be one main contribution.\\nBesides, the paper also includes a detailed discussion of intra-class variance reduction. They evaluated the proposed method on both the mode classification and the node clustering against several existing methods, demonstrated that the proposed method almost always outperforms those methods on the two datasets. Overall, it is an interesting paper. \\n\\nFor aggregating the information along the node dimension and the attribute dimension, as mentioned in the paper, is it possible to firstly do the propagation over the attribute graph using a 1-layer GCN, followed by another 1-layer GCN over the node relation graph, similar to dense graph propagation module in \\u201cRethinking knowledge graph propagation for zero-shot learning\\u201d?\\n\\nThere are two ways to build an attribute graph. The experiments seem to show that the performance is quite different. It would be good to have some discussion on this. \\n\\nOne motivation is about the low-degree nodes, where the attribute graph might help. It would be good to have a study on the performance of the methods on those low-degree nodes.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work proposes a 2D graph convolution to combine the relational information encoded both in the nodes and in the edges.\\n\\nThe basic idea is to reduce the intra-class variance. The authors provide theorems and proofs to support this claim even though it is quite intuitive that smoothing with similar neighbours preserves higher variance with respect to dissimilar neighbours.\\n\\nIt is not straightforward to understand the limitations on the size of graphs.\\n\\nThe experimental analysis provides the empirical evidence of the properties of the proposed method. It is worthwhile to remark that connectivity is not necessarily helpful, like in the wiki dataset where connected nodes are not similar.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a method that incorporates two types of graphs into a graph convolutional networks:\\n\\n(1) the given graph, which the authors refer to as node affinity graph, and \\n(2) the graph that models an affinity between node attribute values.\\n\\nThe main contribution of their work is a type of graph convolution that combines convolutions operating on these two graphs. \\n\\nThe paper is densely written and provides several mathematical derivations that are unnecessary to convey the proposed method. Personally, I don't see any benefits in having sections 3.1-3.4 in the main paper. The method actually proposed and evaluated in the paper is described in section 3.5. Sections 3.1-3.4 could be moved to an appendix. They confuse the reader more than they help. (They demonstrate knowledge of graph signal processing on the parts of the authors but little more.)\\n\\nTrying to provide some theoretical analysis of the proposed method (and standard graph convolutions) by showing that the intra-class variance is reduced is laudable. The theorems, however, only hold under strong assumptions and could, in my opinion, also be moved to an appendix. In the end, they don't have any bearing on the performance of the methods using real-world datasets. Adding some experiments to analyse to what extent the assumptions made by the theorems are met in the given datasets would be an interesting addition to the paper.\", \"the_authors_discuss_related_work_sufficiently_with_one_exception\": \"there has been recent work on learning the structure of graph neural networks. See for example [1]. The structure is derived/bootstrapped using node attribute similarities and it is shown that augmenting the graph with these new edges improves accuracy significantly. I would like to point the authors specifically to Figure 2 and Table 1 in said paper, where the authors show that adding edges (e.g., based on some node attribute affinity before or during training) is beneficial and improves accuracy. It would therefore be interesting to see how the authors proposed 2-D convolution would compare to a baseline where the edges based on attribute affinity are added to the original (node affinity) graph. It is a (somewhat simpler) alternative way to combine node affinity and node attribute graphs.\\n\\n[1] https://arxiv.org/pdf/1903.11960.pdf\\n\\nThe empirical results are mixed. Due to the numerous different variations of DSGC for which experiments were conducted, the difference between DSGC and existing methods is probably not statistically significant (a bonferroni correction was not performed to counteract the multiple comparisons). \\n\\nOverall this an interesting paper that introduces a way to incorporate node attribute affinity graphs. It is too densely written and could benefit from moving the theoretical parts to an appendix. They don't really add much to the core of the paper. Moreover, the authors do not consider approaches that also add edges to the graph (based, e.g., on attribute value similarity or during learning, see e.g. [1]) showing that that improves performance even when using a vanilla GCN. 
A comparison to a baseline that simply adds edges based on attribute affinity to the graph and applied a vanilla GCN should be part of the evaluation. The empirical results are mixed and don't show a clear advantage of the proposed method.\"}"
]
} |
HkgXteBYPB | Stochastic Neural Physics Predictor | [
"Piotr Tatarczyk",
"Damian Mrowca",
"Li Fei-Fei",
"Daniel L. K. Yamins",
"Nils Thuerey"
] | Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way. While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors which can lead to plausible but different trajectories that diverge from the ground truth. A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution. In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions. We show that our model’s long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than existing strong baselines. Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and to better solutions on simple manipulation tasks. | [
"physics prediction",
"forward dynamics",
"stochastic environments",
"dropout"
] | Reject | https://openreview.net/pdf?id=HkgXteBYPB | https://openreview.net/forum?id=HkgXteBYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"kdmSXxHAIK",
"S1gKVJ7isS",
"BygCcDMooS",
"BkgAQNfsoS",
"HJlYP0-jjS",
"B1l8ZyHCtS",
"SJxWh3W0KB",
"S1gbnPT3tS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748933,
1573756720742,
1573754773971,
1573753894456,
1573752417496,
1571864317885,
1571851433076,
1571768232534
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2430/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2430/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2430/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2430/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper presents a timely method for intuitive physics simulations that expand on the HTRN model, and tested in several physicals systems with rigid and deformable objects as well as other results later in the review.\\n\\nReviewer 3 was positive about the paper, and suggested improving the exposition to make it more self-contained. Reviewer 1 raised questions about the complexity of tasks and a concerns of limited advancement provided by the paper. Reviewer 2, had a similar concerns about limited clarity as to how the changes contribute to the results, and missing baselines. The authors provided detailed responses in all cases, providing some additional results with various other videos. After discussion and reviewing the additional results, the role of the stochastic elements of the model and its contributions to performance remained and the reviewers chose not to adjust their ratings.\\n\\nThe paper is interesting, timely and addresses important questions, but questions remain. We hope the review has provided useful information for their ongoing research.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"General response\", \"comment\": \"We want to thank all reviewers for their feedback. Our work was well received, addressing the very relevant and interesting problem of plausible multi-modal outcomes of intuitive physics scenarios (R1-3) with graph convolutions dropout as a \\u201cvery good contribution\\u201d (R1) and recurrent training and shape preservation as contributions to stabilize long-term predictions (R1-3). The results look good and valuable (R3), outperforming baselines especially on long-term predictions (R2). The final fully trained stochastic simulator has a positive impact on the training of model-free RL agents for physical manipulation tasks (R1-2). The paper is well written and easy to follow (R1) with weaknesses in the description of the system (R3) which will be addressed in the revised version by incorporating improvements in figures and text suggested by the reviewers. We have also added further experiments and attached high resolution videos in individual responses to address specific concerns of the reviewers about e.g. figure quality (R3), complexity of the experiments and overall performance of our physics predictor (R1, R3).\\n\\nAll in all, we hope that this illustrates that our contributions are far beyond incremental (R1, R2). Rather, they are a significant step towards fully learned, flexible intuitive physics models that are able to forward simulate complex multi-model physical systems with multiple possible outcomes. Using dropout on graph convolutions to generate multiple plausible trajectories is a simple yet elegant solution and has to our knowledge not been done before. The combination of improved shape loss and recurrent training greatly stabilizes long term predictions to be on par or better than methods which require specialized constraints and access to ground truth shapes. This in turn leads to greater flexibility across materials and different scenarios. We apologize if we didn\\u2019t do a good enough job in pointing out the differences to prior work before the rebuttal, but we have clarified these points and will include reviewer suggestions in our revised and improved manuscript.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you so much for the comments!\\n\\n\\u201cIt is not clear why authors do not provide a comparison to DPI-Nets (Li ICLR\\u201819). This model seems to be outperforming HRN...\\u201d\\n\\nWe have considered this architecture, and there are several reasons why we found the comparison to DPI-Nets not crucial for our work. To begin with, it is questionable if the DPI-Nets model clearly outperforms the HRN. HRN aims at solving a more difficult and general problem. As HRN works across different materials without special material constraints, we choose to work with the HRN instead of DPI net in our work. In rigid body simulations, DPI-Nets assumes rigidity of objects and predicts only rotation and translation, which is a different and much easier task. For rigid bodies this requires access to the undeformed ground truth shape over the whole prediction sequence. This contrasts with HRN, which fully learns the dynamics of each material without specialized constraints and does not require the ground truth shape to be provided at every prediction step. Our proposed model can predict correct dynamics and shapes in complex scenes without these constraints on par with DPI-Nets. A specific example is our scene involving three different, interacting objects, where all three are made from different materials: rigid, deformable and cloth. We also would like to point out that our experiments are at least on par with the complexity of the rigid fluid experiments used by Li et al.. Unnatural deformations for rigid and elastic soft bodies and cloth are much more salient to the human observer than wrong predictions of single fluid particles. We encourage reviewers to consider the following video, where we present our results of the simulations from the paper as well as additional more complex scenarios, which will be added to our manuscript: https://www.dropbox.com/s/59dalr767rlqhwm/Stochastic_Neural_Physics_Predictor_Experiments.mp4?dl=0\\n\\nFurthermore, the main focus of our work is finding ways of introducing stochasticity for simulating physical dynamics that reliably produce visually plausible results. The work of Li et al. did not explore this area, which makes the two works orthogonal in this regard. Nonetheless, DPI-Nets can potentially also benefit from our findings, both in terms of modelling stochasticity and improving prediction quality.\\n\\n\\u201cAuthors seem to acknowledge that the model is sensitive to the hyperparameter choice (dropout rates)...\\u201d\\n\\nWe included qualitative example rollouts for several different dropout rates in the Appendix. It is hard to find a good quantitative metric that captures the quality of rollouts as well as actual examples. Smaller perturbations might capture the mean better, but capture fewer possible modalities than larger perturbations, and the types and magnitude of perturbations might not transfer across different situations. Compared to input force and position perturbations, our dropout method significantly simplifies this problem to one parameter. For example with conventional perturbation methods, if we would like to perturb the simulation for colliding rigid cubes, it is not possibly to simply perturb each single particle. Instead, we need to rotate and translate the particles in accordance with the entire shape of the object such that it doesn\\u2019t deform unnaturally. For a soft object it would be even harder to hard-code plausibly looking perturbations. 
With our dropout method we only have to adjust one parameter to control how widely sampled trajectories are spread which is a huge simplification over previous methods. We find an even more constrained set of values between 0.7 and 0.9 to work across a wide range of scenarios. The attached video further illustrates the problem of hard coded input perturbations and the advantage of learned dropout perturbations which flexibly and automatically adapt to different scenarios: https://www.dropbox.com/s/59dalr767rlqhwm/Stochastic_Neural_Physics_Predictor_Experiments.mp4?dl=0 (at 0:48) \\n\\n\\u201cI find it a bit strange that results in Fig. 7-8 are for random seeds. Is it not possible to just plot an average for e.g. 10 runs? \\u201c\\n\\nThere is a clear intent behind plotting the curves separately. The training process in model-free RL is in most cases very sensitive not only to hyperparameters, but also the random seed. Our goal was to clearly show that agents learning in stochastic physical environments learn faster and converge to better policies than our baselines independent of the chosen random seed. Henderson et al. [1] showed that the variance between runs with different random seeds is sufficient to drastically change the resulting distributions hiding the true performance of RL algorithms. Hence, plotting the mean over multiple runs can be misleading. \\n\\n[1] https://arxiv.org/pdf/1709.06560.pdf\"}",
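The point above about perturbing colliding rigid cubes — that independent per-particle noise deforms a rigid object, while a hand-coded perturbation must be a whole-body rotation plus translation — can be checked numerically. A small NumPy sketch (our illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 3))             # toy rigid object

def pairwise(x):
    return np.linalg.norm(x[:, None] - x[None, :], axis=-1)

# Shape-preserving perturbation: one small random rotation plus translation.
theta = rng.normal(scale=0.05)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rigid = particles @ R.T + rng.normal(scale=0.05, size=3)

# Naive perturbation: independent noise on each particle deforms the shape.
noisy = particles + rng.normal(scale=0.05, size=particles.shape)

print("rigid distortion:", np.abs(pairwise(rigid) - pairwise(particles)).max())
print("naive distortion:", np.abs(pairwise(noisy) - pairwise(particles)).max())
```

The rigid perturbation leaves all pairwise distances unchanged up to floating-point error, whereas naive per-particle noise visibly distorts the shape; this is the gap the learned dropout perturbations are claimed to close automatically.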
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you so much for the comments!\\n\\n\\u201cThe paper's contribution is very incremental. \\u201c\\n\\nThe use of dropout in graph convolutional networks for simulation of multi-modal problems is a simple yet elegant solution that is new to the best of our knowledge. We are glad that it has been mentioned as a \\u201cvery good contribution\\u201d by the reviewer. We agree that dropout itself is not a novel method. However, a naive implementation in graph convolutional networks has a negative impact on predictions. In our work, we proposed a novel and effective implementation in graph convolutional networks that does not lower system performance. In our experiments, it does not cause object shape degradation that is present when dropout is applied in a conventional way, as we discuss in Section 3.2. \\n\\nThe training process in reinforcement learning (RL) is sensitive to hyper parameters, often also to a random seed. Overparameterized systems are difficult to work with, due to e.g. computationally expensive hyperparameter searches, which is a reason why our proposed easy-to-use method for improving the training in model-free RL has the potential to be adopted on a wider scale. Our method can also be seen as a reward relaxation method. Due to its high effectiveness for a very common problem in RL, this contribution should not be underrated. Our proposed training method significantly outperforms other perturbation methods, e.g., commonly used action space noise, offering a promising future direction for RL community. \\n\\n\\u201cThere should be a discussion on the different type of methods to account for uncertainties...\\u201d\\n\\nAs mentioned in Section 2, we explored Mean-Variance-Estimation method [1] for predicting plausible object trajectories. At test time, sampling from independent per-particle normal distributions failed due to a lack of space-time consistency between object particles that are present in real objects. We will make this point clearer and underline that such an experiment has been conducted. We also evaluated this method as an uncertainty estimator, which gave reasonable results e.g indicating relatively high prediction uncertainty during collisions and force applications. We are happy to provide the results, although we decided to not include them in our original submission. As proposed by [2] Monte Carlo Dropout can be seen as a Bayesian approximation for inference. With this definition, it falls under the category of Bayesian Neural Networks. Furthermore, Bayesian neural networks are part of a group of algorithms that exhibit limited scalability, and as such are not applicable to large systems like ours. This is one of the motivations for our implementation via Monte Carlo dropout as a viable solution.\\n\\n\\u201cBecause the tasks are not very complicated, it is not clear how good the whole neural physics predictor is.\\u201d\\n\\nRegarding concerns about experiments not being complicated enough, we would like to argue that \\nour selected scenarios include several highly complex geometries, physical effects (collisions, object deformations) and materials (rigid, deformable/soft, cloth). Most recent papers in the intuitive physics domain still deal with simple 2D scenarios [3]. 
However, to show that our model works on a greater set of scenarios, we will include more examples in the revised version and we are happy to include more scenarios on acceptance which we could not include because of rebuttal time constraints.\\n\\nAs a response, we conducted more highly complex experiments in an extended set of scenarios, including new geometries and materials. We strongly encourage to watch the following video with visualizations of our results:\", \"scenario_1\": \"In \\u201cMultiple Materials\\u201d, three distinct geometries, each made of different materials (soft body, rigid body, cloth) interact in one scene. They are lifted, drop on the floor and collide with each other. In a stochastic simulation, the shapes are preserved well, positions are well distributed with respect to the ground truth and the trajectories are visually plausible.\", \"scenario_2\": \"In \\u201cBall hitting tower\\u201d (at 1:05 of the video), we compare our stochastic simulation method to simple input perturbation, but the performance of the physics predictor in a forward simulation can be assessed. This task includes a stack of 3 cubes, which are hit by a ball. Stack is a challenging case of a very frequent collision, which our model predicts well.\", \"scenarios_1_and_2\": \"https://www.dropbox.com/s/59dalr767rlqhwm/Stochastic_Neural_Physics_Predictor_Experiments.mp4?dl=0\", \"scenario_3\": \"We simulated the toy experiment presented in Figure 1. A ball falls on top of a pyramid.\\nThis experiment shows a clear advantage of a stochastic physics engine over deterministic ones.\", \"https\": \"//www.dropbox.com/s/btpkvgtf84zea8l/Stochastic%20Neural%20Physics%20Predictor%20Experiment%202.mp4?dl=0\\n\\n[1] https://ieeexplore.ieee.org/document/374138\\n[2] https://arxiv.org/pdf/1506.02142.pdf\\n[3] https://arxiv.org/pdf/1904.03177.pdf\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you so much for the comments!\\n\\n\\u201cThe description of the system is very verbose, but a concrete description of the system is only in the references, which make the manuscript hard to read and it does not fit well in a conference where one cannot assume that the audience is not an expert of this particular subfield.\\u201c\\n\\nWe apologize and understand that the description was not detailed enough. We will correct it in the new version of the manuscript. \\n\\n\\u201cAs a remedy, I would suggest, a simple one-hieararcy level architecture with hyperparameters to show what is really happening. One could use the system in Figure 1 as a more thorough example.\\u201d\\n\\nIn response, we simulated the situation presented in Figure 1 and included results in the attached video. A rigid ball falls on on top of a pyramid, bounces off and falls on the ground. Stochastic simulation clearly shows the ability of the proposed system to predict multiple plausible trajectories, while maintaining object shape. This shows that the introduced perturbation in the system in form of graph convolutional dropout is of appropriate type and magnitude for the situation. (Video with results: https://www.dropbox.com/s/btpkvgtf84zea8l/Stochastic%20Neural%20Physics%20Predictor%20Experiment%202.mp4?dl=0)\\nThe ablation study for different hierarchy parameters has been conducted in [1] in Appendix Section C. As we kept the original implementation of the hierarchy; we claim that these findings are applicable to our model.\\n\\n\\u201cThe results looks good and valuable, although the images provided are quite small.\\u201d\\n\\nWe will correct the figures in the new version of the manuscript. We encourage you to consider the following high resolution video that includes the experimental results from the paper as well as additional material: https://www.dropbox.com/s/59dalr767rlqhwm/Stochastic_Neural_Physics_Predictor_Experiments.mp4?dl=0\\n\\nIf you have further comments or questions regarding the paper, please do not hesitate to contact us. \\n\\n\\n[1] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B Tenenbaum, andDaniel LK Yamins. Flexible neural representation for physics prediction. InAdvances in NeuralInformation Processing Systems, 2018.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors describe a neural architecture with dropout randomizer with the aim of producing an ensemble of physical trajectories that seem plausible for a human eye. The question is relevant and interesting.\\n\\nThe authors bring in two improvements to the HRN model by (Mrowca et al., 2018). They drop the hierarchy from the shape loss, which is decipeted in Figure 3, and provide a change in the recurrent training decipited in Figure 4. Figure 4 is only showing the modified training, not the initial HRN training, although Figure 3 contains both. For the sake of consitency and good read the Figure 4 should contain the original training image.\\n\\nThe description of the system is very verbose, but a concrete description of the system is only in the references, which make the manuscript hard to read and it does not fit well in a conference where one cannot assume that the audience is not an expert of this particular subfield.\\n\\nAs a remedy, I would suggest, a simple one-hieararcy level architecture with hyperparameters to show what is really happening. One could use the system in Figure 1 as a more thorough example.\\n\\nThe results looks good and valuable, altough the images provided are quite small.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary\\n\\nThere has been work on deep learning based forward dynamics model to learn the dynamics of physical systems. In particular, the hierarchical relation network (HRN) as proposed by Mrowca et al. (2018). However, HRN is deterministic. This is problematic for long-term predictions because of the uncertainty of physical world. This paper builds on top of HRN. It proposes a Monte Carlo sampling based graph-convolutional dropout method that can sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. It also introduces a shape preservation loss and trains the dynamics model recurrently to better stabilize long-term predictions. It demonstrates the two techniques improve the efficiency and performance of model-free reinforcement learning agents on several physical manipulation tasks.\\n\\nStrengths\\n\\nLearning physics models which accounts for the multi-modal nature of the problem is very important. Graph-convolutional dropout is one method to deal with the multi-modal nature of the problem. It is a very good contribution. \\n\\n The tasks evaluated are not very sophisticated. \\n\\nWeaknesses\\n\\nThe paper's contribution is very incremental. \\n\\nThere should be a discussion on the different type of methods to account for uncertainties, e.g. bayesian neural networks and how they differ in terms of multi-modal predictions.\\n\\nBecause the tasks are not very complicated, it is not clear how good the whole neural physics predictor is.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overview:\\nThis paper introduces a method for physical dynamics prediction, which is a version of hierarchical relation network (Mrowca \\u201818). HRNs work on top of hierarchical particle-based representations of objects and the corresponding physics (e.g. forces between different parts), and are essentially are graph-convolutional neural networks.\\nUnlike the original work, the proposed method introduces several improvements: 1. an updated loss function, that adds distance constraints between *all* the particles of the object. 2. recurrent training, when the predictions are fed to the inputs. 3. adding dropout.\", \"writing\": \"The paper is relatively well-written and easy to follow.\", \"evaluation\": \"Authors compare their model on a dynamics prediction task and seem to outperform the original HRN, especially on longer-term sequences. In addition they report results for trajectory sampling (qualitative) and model-free RL, where using their model as a stochastic simulator seems to have positive impact on agent training.\", \"decision\": \"Although the proposed improvements upon HRN generally make sense, it is not clear if those are very significant on their own: adding dropout and recurrent training do not seem particularly novel and, since there is no ablation study, it is hard to see what exactly contributes to the reported improvements. \\nAs for the experimental evaluation, it seems like important baselines are missing, and the model seems to be very sensitive to hyperparameters (see questions). Thus, I am currently leaning more towards a rejection, hence the \\u201cweak reject\\u201d rating.\\n\\nVarious questions / concerns:\\n\\n* It is not clear why authors do not provide a comparison to DPI-Nets (Li ICLR\\u201819). This model seems to be outperforming HRN, and from what it looks like is publicly available: https://github.com/YunzhuLi/DPI-Net. I would encourage authors to provide comparison to this baseline, and potentially on similar sets of experiments, or explain why this comparison would not be possible (which seems unlikely).\\n\\n* Authors seem to acknowledge that the model is sensitive to the hyperparameter choice (dropout rates), however, there is no numerical evaluation that would help readers understand how critical this choice is for the final performance. Judging from very specific settings in different experiments, this could be a serious concern.\\n\\n* I find it a bit strange that results in Fig. 7-8 are for random seeds. Is it not possible to just plot an average for e.g. 10 runs?\", \"update\": \"I would like to thank the authors for a detailed response!\", \"it_seems_like_there_is_a_common_concern_about_the_novelty_among_reviewers\": \"improvements over HRN are quite incremental. 
Although authors provide a verbal justification for not comparing to another strong baselines, I do not see why would it not be possible to compare methods in the similar settings, even though that baseline might be more limited.\\nGenerally, if the main contribution is actually only the \\\"stochastic\\\" part and not improved performance, then just adding dropout does not seem like a particularly novel approach to me: whether the convolutions are on graphs or on euclidean domains, this does not change the way dropout is done.\"}"
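Review #2 above describes the first improvement as a loss that adds distance constraints between all particles of an object. A generic sketch of such a shape-preservation term (PyTorch; a plausible reading of that description, not the paper's exact loss):

```python
import torch

def shape_preservation_loss(pred, target):
    """Penalize changes in *all* pairwise particle distances.

    pred, target: (n_particles, 3) positions. A generic reading of the
    'distance constraints between all particles' idea, not the paper's loss.
    """
    return ((torch.cdist(pred, pred) - torch.cdist(target, target)) ** 2).mean()

# Toy usage: a rigid shift incurs zero loss, random noise does not.
pts = torch.randn(20, 3)
print(shape_preservation_loss(pts + 0.5, pts).item())                # ~0.0
print(shape_preservation_loss(pts + 0.1 * torch.randn(20, 3), pts).item())
```

Because the term depends only on relative distances, it penalizes deformation while leaving rigid translations (and rotations) unpenalized.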
]
} |
HklQYxBKwS | Neural tangent kernels, transportation mappings, and universal approximation | [
"Ziwei Ji",
"Matus Telgarsky",
"Ruicheng Xian"
] | This paper establishes rates of universal approximation for the shallow neural tangent kernel (NTK): network weights are only allowed microscopic changes from random initialization, which entails that activations are mostly unchanged, and the network is nearly equivalent to its linearization. Concretely, the paper has two main contributions: a generic scheme to approximate functions with the NTK by sampling from transport mappings between the initial weights and their desired values, and the construction of transport mappings via Fourier transforms. Regarding the first contribution, the proof scheme provides another perspective on how the NTK regime arises from rescaling: redundancy in the weights due to resampling allows individual weights to be scaled down. Regarding the second contribution, the most notable transport mapping asserts that roughly $1 / \delta^{10d}$ nodes are sufficient to approximate continuous functions, where $\delta$ depends on the continuity properties of the target function. By contrast, nearly the same proof yields a bound of $1 / \delta^{2d}$ for shallow ReLU networks; this gap suggests a tantalizing direction for future work, separating shallow ReLU networks and their linearization.
| [
"Neural Tangent Kernel",
"universal approximation",
"Barron",
"transport mapping"
] | Accept (Poster) | https://openreview.net/pdf?id=HklQYxBKwS | https://openreview.net/forum?id=HklQYxBKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"bxhWVT1osg",
"wmjX6QIqw2",
"r1eVwpYnjH",
"HkgFVpK2oS",
"HklU-TY2iB",
"B1edT2t3sr",
"H1edMNKV9r",
"Hyge3AC3FB"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1581728506731,
1576798748904,
1573850459597,
1573850416533,
1573850365595,
1573850304471,
1572275216198,
1571774119871
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2429/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2429/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2429/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2429/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Thank you for the detailed post-rebuttal comments\", \"comment\": \"Thank you for evaluating our revised paper; it was a lot of work for you, given our extensive changes. We thank you for your very thorough and valuable comments.\\n\\nIn response to your post-rebuttal comments, we've updated our \\\"open problems\\\" section to expand on these gaps.\\n\\nTo respond informally to you here, I agree, it is odd. Of course, the \\\"10\\\" we have must be an an analytic artifact, however I don't know what it should be. In that new open problem comment (which is admittedly quite brief), I highlight both the choice you mention (which layers do you train), but also the question of norm (our paper here is in the \\\"NTK standard\\\" (2,infty) norm). Would be nice to know all the gaps, which choices are relevant in practice, how they affect optimization and generalization, how depth changes things, ...\\n\\nThanks again!\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper considers representational aspects of neural tangent kernels (NTKs). More precisely, recent literature on overparametrized neural networks has identified NTKs as a way to characterize the behavior of gradient descent on wide neural networks as fitting these types of kernels. This paper focuses on the representational aspect: namely that functions of appropriate \\\"complexity\\\" can be written as an NTK with parameters close to initialization (comparably close to what results on gradient descent get).\\n\\nThe reviewers agree this content is of general interest to the community and with the proposed revisions there is general agreement that the paper has merits to recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"(Continuation of response to AnonReviewer3.)\", \"comment\": \"In response to other comments:\\n\\n- Regarding the sampling tool in the reviewer's reference [1,\\n Proposition 1], in fact the dependence is nearly identical to ours:\\n our proof also gives a supremum over the \\\"basis\\\" (in this case\\n depending on the transport T) and an L_2 dependence on the data\\n measure; for simplicity we've enforced ||x|| <= 1 and hidden this L_2\\n as it is generally small compared to the dependence on T. We further\\n note that these sampling tradeoffs are also similar to those in [2,\\n Proposition 1], cited by the reviewer (and by the same author).\\n\\n- Regarding optimality of our rates, we have included concrete\\n discussions of lower bounds, one from the reviewer's reference [2],\\n and another from a paper by Yarotsky in our revisions. Concretely,\\n both of these reference suggest lower bounds of the form 1/eps^{d/2}\\n if weights are not required to be close to initialization, where in\\n our case our upper bounds are closer to 1/eps^{10d} while lying close\\n to initialization. We further note that the lower bound in [2] has\\n further restrictions (e.g., data on the sphere), and Yarotsky's paper\\n also has a higher lower bound 1/eps^{d} when the constructions are\\n \\\"continuous\\\", which we discuss in our revisions. The upper bounds\\n (not in the NTK setting) presented in [2] are roughly tight with the\\n lower bounds, though as we mentioned the setting there is uniform on\\n the sphere. We also note that our tools here (giving 1/eps^{10d} in\\n the NTK setting) give a much better 1/eps^{2d} when applied to regular\\n networks with a single hidden layer, as now included in section 4.\\n\\n- Despite pushing the RKHS material farther back, we have expanded it,\\n and hope it is clearer now. We have removed the universal\\n approximation comments, since they are contained in the work by Sun et\\n al.\\n\\n- Regarding the NTK optimization literature, we hope that the present\\n results can be helpful in proving good test error bounds. As a\\n concrete example, a work we cite by Arora et al generalizes well if\\n the quantity y^T (H^\\\\infty)^{-1} y is small, which is saying that the\\n features and labels share some structure. Writing the optimal labels\\n as function of the inputs (i.e., the least squares solution over the\\n population), one can now introduce our tools, and hopefully develop\\n more concrete rates; we have included a version of this comment\\n in the concluding open problems section.\\n\\n- The epsilon factor in the NTK definition appears in a variety of\\n works, and corresponds to the initialization of the second layer\\n weights; see for instance the work by [ Allen-Zhu, Li, Liang ] which\\n we cite. This scaling factor appears crucial and we highlight it\\n better in our new version (e.g., see the blue bolded terms on pages\\n 3 and 4).\\n\\n- We have standardized our notation in the body to write sigma' in place\\n of an indicator, though on page 2 we write the indicator once for sake\\n of concreteness, and also write it in some proofs.\\n\\n- The new lemmas 3.2 and 3.3 clearly fix the first d coordinates as 0.\\n The second part of lemma 3.1, as we discuss in section 5, can be used\\n to form a mapping that uses all coordinates, but it does not seem to give\\n better bounds (after all it is similarly just a rescaling). Regarding\\n what is lost with truncation, unfortunately it is unclear. 
To\\n highlight where the inefficiencies in our techniques may lie, we have\\n included Theorem 4.5, which uses our techniques to prove a width upper\\n bound on approximating continuous functions with random features, and\\n it improves the earlier 1/eps^{10d} to 1/eps^{2d}, and uses no\\n truncations. We are not asserting here that the truncations lead to\\n the extra factors, however certainly they make the proofs and bounds\\n significantly messier, and correspondingly this makes it harder to\\n produce tight bounds.\\n\\n- The \\\"B\\\" for the RKHS transports indeed depends on the RKHS norm; the\\n earlier attempt at rushing to the main theorem within two pages made\\n this unclear, but it should now be clear, with two separate bounds\\n now appearing in section 5, both depending on the RKHS norm.\\n\\n- The Hilbert space we develop to discuss RKHSes is indeed the same as\\n L2(G); we have included this clarification, thank you.\\n\\n- Thank you for the four references, we have included them.\"}",
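The linearization claim at the heart of this exchange — that under 1/sqrt(m) scaling a wide shallow ReLU network stays close to its linearization around initialization — can be checked numerically. A toy PyTorch sketch (our construction, not the paper's; only the first layer moves here, and the scalings are our choices):

```python
import torch

torch.manual_seed(0)
d, m = 5, 4096
x = torch.randn(32, d)
x = x / x.norm(dim=1, keepdim=True)      # enforce ||x|| = 1

W0 = torch.randn(m, d)                   # random first-layer initialization
a = torch.randn(m).sign() / m ** 0.5     # fixed second layer, 1/sqrt(m) scaling

def f(W):
    """Shallow ReLU network x -> a^T relu(W x)."""
    return torch.relu(x @ W.t()) @ a

delta = torch.randn(m, d) / m ** 0.5     # microscopic per-node movement
# Linearization around W0: the NTK features are sigma'(<w0_j, x>) * x.
lin = ((x @ W0.t() > 0).float() * (x @ delta.t())) @ a
gap = (f(W0 + delta) - f(W0) - lin).abs().max()
print(f"max linearization error at m={m}: {gap:.2e}")  # shrinks as m grows
```

Re-running with larger m makes the printed gap shrink (roughly like 1/sqrt(m)): few activation patterns flip under such microscopic weight changes, which is the regime the rebuttal refers to.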
"{\"title\": \"Response to AnonReviewer3.\", \"comment\": \"We thank the reviewer for their thorough comments.\\n\\nWhat we wish to primarily highlight is that we have heavily restructured\\nand polished the paper. As the paper only has two reviews, we hope the\\nreviewer is able to find the time to at least go through the new\\nabstract and introduction. We have summarized the changes in a separate\\ncomment above, but a list of notable changes which match the comments of\", \"the_reviewer_are_as_follows\": \"- In response to \\\"the main result appearing in the introduction with\\n little details on the involved quantities\\\", we have (a) pushed the\\n main result back to page 3, (b) pushed before it the key idea,\\n originally in section 2, that one can easily take a network written\\n with a transport and obtain another (wider) network where weights are\\n close to initialization, (c) focused the main theorem and introduction\\n on the approximation of continuous functions, with all relevant\\n quantities defined within the theorem or before, (d) moved the other\\n parts of the theorem later, and better modularized and pre-empted\\n them.\\n\\n- Regarding \\\"no clear separation or connection between intermediate\\n lemmas in later sections, and very little motivation and explanation\\n of some results\\\", after moving parts of the main theorem out and\\n pushing them to other sections, we then better modularized the\\n different results and sections, and included further explanations. It\\n is for this reason that the paper is now longer. As a concrete\\n example, Section 2, on sampling, now starts with a single main theorem\\n encapsulating the technique; before, this theorem was split into\\n parts, and much more unwieldy to apply.\\n\\n- Regarding \\\"many typos and inconsistent notations\\\", we have performed\\n extensive editing.\\n\\nWe hope the reviewer finds the presentation vastly improved, and\\nappreciate further comments.\"}",
"{\"title\": \"Response to AnonReviewer2.\", \"comment\": \"We thank the reviewer for their comments and support.\\n\\nIn particular, we are grateful for the comment that the \\\"paper is\\nwritten well, and easy to read\\\", and hope it does not seem to odd that\\nwe went through a significant restructuring. We hope the paper has only\\nmore of what AnonReviewer2 liked, not less.\", \"regarding_specific_comments\": \"- Rather than merely re-ordering the main theorem, we have simplified it\\n as discussed in our revision summary, moving all but the continuous\\n function approximation to other sections.\\n\\n- We agree that the title is suggestive of optimal transport, and indeed\\n worked both before and after the deadline to develop such a\\n connection. Unfortunately, it seems somewhat elusive to derive\\n anything concrete, and have included a brief remark in the open\\n problem section; thank you for highlighting this.\"}",
"{\"title\": \"List of revisions.\", \"comment\": \"We have restructured the paper and improved presentation, thanks to\\nhelpful feedback from AnonReviewer3.\\n\\nA summary of the major re-arrangements is as follows.\\n\\n- In order for the main theorem to be more digestible, it has been pared\\n down to only discuss continuous functions, and its other components\\n have been pushed to later sections.\\n\\n- To further aid in the exposition of the main theorem, more\\n explanations have been moved before it; notably the central\\n description of the sampling method, and how it gives rise to the NTK\\n setting of small weight changes, has been moved from section 2 to page\\n 2 of the introduction.\\n\\n- The remaining sections have been made more modular, and there are now\\n six sections in place of four.\\n\\n- In a bit more detail: section 2 still contains the old sampling\\n routines, though with re-organized exposition, and a single main\\n sampling theorem at the start encapsulating the technique; section 3\\n now contains only the Fourier-based transportation mappings; section 4\\n approximates continuous functions; section 5 briefly describes\\n \\\"abstract\\\" mappings, including the old RKHS-based transports; section\\n 6 concludes with open problems.\\n\\n(Note that uploading of new abstracts is seemingly disabled; the revision\\nhas a new abstract corresponding to the newly focused presentation.)\\n\\nIn order to flesh out the story and aid exposition (but not changing the core\\nresults and specifically not trying to burden reviewers), a few new extensions\", \"have_been_added\": \"- The tools of the paper have been applied directly, in Theorem 4.5, to\\n approximation via regular shallow networks (and not the NTK); the\\n approximation rates improve, revealing a tantalizing direction for\\n future work.\\n\\n- Appendix B summarizes the low level sampling routines, and now\\n includes uniform norm sampling tools, which are applied in the proof\\n of Theorem 4.5 as a demonstration; in particular, L_2(P) was not\\n essential.\\n\\nThe revisions also include numerous other typo fixes, missing\\nreferences, and other corrections, with many thanks to the reviewers.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: the paper consider representational aspects of neural tangent kernels (NTKs). More precisely, recent literature on overparametrized neural networks has identified NTKs as a way to characterize the behavior of gradient descent on wide neural networks as fitting these types of kernels. This paper focuses on the representational aspect: namely that functions of appropriate \\\"complexity\\\" can be written as an NTK with parameters close to initialization (comparably close to what results on gradient descent get).\\nThe main technical ingredients are a constructing a \\\"transport\\\" map via a Fourier-expansion style averaging (ala Baron), and subsequently subsampling this average ala Maurey-style analyses to get a finite width average.\", \"the_authors_also_identify_function_classes_which_are_well_behaved_with_respect_to_these_techniques\": \"smoothed functions (via convolving with a Gaussian), functions which have a small RKHS norm (for an appropriate RKHS derived from NTKs), functions with small modulus of continuity.\", \"evaluation\": \"the paper is a strong contribution, on a topic which is of great current interest, and I recommend acceptance. It is very nice that many of the standard tools in approximation theory (Fourier expansions, Maurey sampling, etc.) play nicely with NTKs, and also that the scaling of the # of neurons necessary that appears in the current literature can be also recovered via a representation theoretic viewpoint. The paper is written well, and is easy to read.\", \"minor_comments\": [\"I'd rearrange the bullets bounding B_{f,\\\\epsilon} for the various subcases of Theorem 1.3: I think the RKHS is the most \\\"vanilla\\\" bound, given that you can extract a RKHS; bounds in terms of the modulus of continuity should go next (this is the \\\"weakest\\\" assumption); smoothed f's should go last (this is like a smoothed complexity kind of result)\", \"w_f isn't defined until section 3.2 -- I'd put a pointer in the statement of Theorem 1.3 to the equation, not just the section.\", \"I'm not sure \\\"transport\\\" is the ideal term -- it brings to mind \\\"optimal transport\\\", and I kept expecting some Wasserstein connection.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies approximation properties (in L2 over some data distribution P) of two-layer ReLU networks in the NTK setting, that is, where weights remain close to initialization and the model behaves like a kernel method given by its linearization around initialization.\\n\\nThe authors obtain a variety of results in order to obtain such approximation guarantees, which are obtained by sampling from a so-called 'transport mapping', which is essentially a function T:R^{d+1}->R^{d+1} with a bound on sup_w ||T(w)||, which can approximate well various classes of target functions (section 3).\\nIn particular, they show that such a sampling leads to weights close to initialization, that the neural network function is close to its linearization in L2(P), and that the linearization is close to the target function in L2(P).\\nTogether with a control of the norm of T required to approximate the target function, this leads to approximation bounds in Theorem 1.3.\\n\\nThe techniques used to obtain transport mappings are quite interesting and seem novel, and the general approach for controlling various steps from neural network function to the target function in L2(P) norm in the NTK setting is insightful and novel as far as I know.\\nThat said, the presentation lacks a certain amount of polish in its present form, which makes me lean towards the reject side. I also have some comments related to novelty of certain aspects.\", \"comments\": [\"the paper is not well organized, with the main result appearing in the introduction with little details on the involved quantities, no clear separation or connection between intermediate lemmas in later sections, and very little motivation and explanation of some results. Further, there are many typos and inconsistent notations throughout which make the paper hard to read.\", \"the sampling result in Lemma 2.1 is very similar in flavor to random feature approximation results (for the NTK here), e.g., [1, Proposition 1], which could perhaps be more precise in practice as it is data-dependent, and only needs an L2 control on the function T. Can this be applied here or would the initialization term mess things up? A comparison would be helpful either way.\", \"the approximation rates should be discussed more (are they optimal?), and compared to prior work, both on general two-layer networks, and kernels arising from a similar setup, in particular in [2] and the cited Sun et al. (note that the NTK behaves similarly in terms of approximation, see [3])\", \"the section on the \\\"natural RKHS\\\" is largely unclear, as is the corresponding bound in Theorem 1.3 (shouldn't B be proportional to the RKHS norm?)\", \"how these results apply to networks obtained via optimization in the NTK regime should probably be discussed more\"], \"smaller_things\": [\"eq (1.2): what is the mearning of the epsilon factor? is it standard?\", \"p.2 \\\"to not yield\\\" -> do not yield\", \"\\\"with scaling... 
width\\\": rephrase (and, do you mean dataset size?)\", \"\\\"one 1/m is then pushed\\\" -> one 1/sqrt(m)?\", \"throughout: pick a consistent notation for derivative of relu (sometimes it's sigma', sometimes an indicator)\", \"section 2: here it seems like T(w)/(eps sqrt(m)) is just the movement from initialization and T_m,eps(w) are the final weights, while in the introduction T(w) indicates the final weights, this is confusing notation. Also, shouldn't T_m,eps appear in the bounds?\", \"section 3: should -> shows?\", \"lemma 3.1, 3.2: specify that the other coordinates are 0, also missing dG(w) in the definition of g. What do we lose from the use of truncation?\", \"lemma 3.5: sup_x or just L2(P)?\", \"section 3.3: H is just L2(G) here? Also, the kernel is not universal with only even terms, but the bias fixes that, see e.g. [2,4].\", \"[1] Bach. On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions (2017)\", \"[2] Bach. Breaking the Curse of Dimensionality with Convex Neural Networks (2017)\", \"[3] Bietti and Mairal. On the Inductive Bias of Neural Tangent Kernels (2019)\", \"[4] Basri et al. The Convergence Rate of Neural Networks for Learned Functions of Different Frequencies (2019)\", \"===== update post rebuttal =====\", \"Thanks for the detailed response, I am increasing my score as the new version looks much better.\", \"I am a bit puzzled (and surprised) by the gap in the rates between NTK and relu random features, as it seems to suggest that only training second-layer weights while leaving the first layer at random initialization yields better rates than the NTK regime, if I understand correctly? If so, is this due mainly to the linearization step, i.e. Lemma 2.6? It would be good to include some further discussion on this in the paper.\", \"As per approximation by random features/sampling, note that [1, Prop. 1] only requires a sup control on random features (which is quite trivial here with bounded data), not on the \\\"transport\\\" (beta in their statement).\"]}"
]
} |
BJgMFxrYPB | Learning to Move with Affordance Maps | [
"William Qi",
"Ravi Teja Mullapudi",
"Saurabh Gupta",
"Deva Ramanan"
] | The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and are difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that {\em learns} a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance. | [
"navigation",
"exploration"
] | Accept (Poster) | https://openreview.net/pdf?id=BJgMFxrYPB | https://openreview.net/forum?id=BJgMFxrYPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jUclfT2jm",
"SylGeCPhoB",
"SylkMA1nsH",
"S1lp6KRsir",
"BkxdBkAoiS",
"ryguP1umjH",
"ByeZGyumiH",
"Hkeuc0v7jB",
"HJxqDRDXiB",
"H1eF2cVy5r",
"BygsJ2jatS",
"SkgvG0dpFr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748876,
1573842410251,
1573809670640,
1573804484842,
1573801791563,
1573252959922,
1573252873470,
1573252752376,
1573252706043,
1571928752587,
1571826658611,
1571814927210
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2427/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2427/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2427/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2427/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2427/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2427/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a framework for navigation that leverages learning spatial affordance maps (ie what parts of a scene are navigable) via a self-supervision approach in order to deal with environments with dynamics and hazards. They evaluate on procedurally generated VizDoom levels and find improvements over frontier and RL baseline agents.\\n\\nReviewers all agreed on the quality of the paper and strength of the results. Authors were highly responsive to constructive criticism and the engagement/discussion appears to have improved the paper overall. After seeing the rebuttal and revisions, I believe this paper will be a useful contribution to the field and I\\u2019m happy to recommend accept.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper updated to address discussed topics.\", \"comment\": \"Hi Reviewer 3,\\n\\nAs requested, we've updated the paper to include additional discussion about handling of dynamics, real-world applicability, along with plans to release open source code in the appendix.\\n\\nWe've also updated the main text to refer to the appendix for additional details where relevant.\"}",
"{\"title\": \"Thanks.\", \"comment\": \"Thanks for the clarification.\"}",
"{\"title\": \"Will make all requested changes to address discussed topics.\", \"comment\": \"Thank you again for the insightful feedback!\\n\\nWe are happy to make all of the requested changes and will update the next revision of the paper with additional discussion about handling of dynamics and real-world applicability, along with plans to release open-source code.\\n\\nAs the discussion period is ending very soon, we will initially include such discussion in the appendix and will move to integrate these topics into the main text afterwards.\"}",
"{\"title\": \"Thank you for addressing each of my points.\", \"comment\": [\"Your answer regarding dynamics makes a lot of sense. If you could include some textual discussion similar to your answer into the paper, I believe it would help a lot in contextualizing it. It's fine if that happens in the Appendix, as long as you refer to it from the main paper (maybe in the intro).\", \"I am also happy with your answer regarding real-world applicability (in your answer to R1), and again I highly recommend adding such discussion into the main paper.\", \"Finally, it would be good to mention open-sourcing in the paper itself.\", \"If you are able to include these in the paper, I am happy to update my recommendation to accepting this paper.\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for the constructive and detailed feedback, we are happy to address your questions and concerns, and will update the paper to improve the clarity of exposition.\\n\\n[Dealing with Environmental Dynamics]\\n>> How can this approach work for moving obstacles? Let's say a monster walks from point A to point B, and collides with the agent at point B. Then, point B is marked as a hazard, but in the previous frames, the monster is not located at point B, and thus an image region that does not contain the monster is marked as hazard. Am I missing something here?\\n\\nThis is a great question! In the scenario that you have posed, it\\u2019s true that if the monster moves between when the agent was at point A and when the agent reached point B, the hazard label will map to an image region near the monster, rather than the monster itself. Let us first describe one approach for explicitly modeling such moving obstacles, and then justify why our current approach implicitly captures such dynamics.\", \"explicit_approach\": \"in principle, one can modify our self-supervised labeling system to replace naive back-projection with an explicit image-based tracker (keeping all other components fixed). Essentially, track hazards backward from the final timestep at which they are identified as hazardous (since those prior visual observations are available at sample time) and obtain their precise image coordinates when backprojecting those prior timesteps.\", \"implicit_approach\": \"Even without image-based tracking, our pipeline implicitly learns to associate larger safety margins for visual signatures that are dynamic. Essentially, our system learns to avoid regions that are spatially nearby dynamic objects (please refer to Fig 11 in the updated appendix). Such notions of semantic-specific safety margins (e.g., autonomous systems should use larger safety margins for people vs roadside trash cans) are typically hand-coded in current systems, but these emerge naturally from our learning-based approach.\\n\\nBecause we found success with an implicit encoding of dynamics, we did not experiment with explicit encodings. But we agree that it would be interesting future work.\\n\\n[Real World Generalization]\\n>> The method does not seem practical for actual mobile robots, only for in-game or in-simulation agents. The reason being that in order to learn \\\"robot should not bump into baby\\\", the robot actually needs to bump into multiple babies in order to collect data about that hazard. To be fair, blind \\\"PPO+exploration bonus\\\" suffers from the same problem, but in this paper, the whole motivation is about mobile robots (at least that was my impression after reading it). \\n\\nThis is another great question! We refer R3 to our second response to R1, who shared a very similar concern.\\n\\n[Code Release]\\n>> Will code be released?\\nWe understand that the described system contains a rather large number of moving parts and hyper-parameters, which could be challenging to reproduce. To address this concern, we plan to release code for the full system with modular components for sampling, model training, map construction, planning, and locomotion. We hope that our modular code will enable researchers to re-use modules and swap out individual components to try out new approaches.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear Reviewer 2,\\n\\nThanks for your constructive feedback! We agree that imitation learning from human demonstration is interesting, and provide some thoughts on how it could fit with our framework.\\n\\nOur experimental results in the paper demonstrate that humans are exceptionally good at both exploration and navigation, beating all autonomous approaches even without any previous experience performing the task at hand. We hypothesize that this performance is largely explained by strong priors that our human participants have built up over time from previous video games; e.g., recognition that \\u201cred lava is probably hazardous\\u201d speeds up learning. \\n\\nThe most straightforward way to incorporate imitation learning would be to train a model that maps directly from visual inputs to action outputs, attempting to mimic human actions from a training distribution. However, this approach is associated with a well-known drawback of imitation learning: once the model makes a mistake and veers off policy, it is difficult to recover. For example, once the agent gets stuck in a corner, it may not have encountered a training sample that reveals how to escape (because humans are unlikely to make such mistakes).\\n\\nInstead, we point out that imitation learning can be applied within our factored approach for learning affordance maps. Specifically, we can make use of expert strategies for exploring and actively sampling new environments (for which previously-acquired priors are of little help). Dubey et al. [5] ingeniously create a 2D-platformer gaming world where visual signatures are systematically masked to eliminate the applicability of visual priors, making it dramatically harder for humans to navigate. In such environments, humans must \\u201cprobe\\u201d each new texture in order to understand its effects, analogous to the sampling process employed by our agent in the active learning loop. Given that our approach requires the collection of samples numbering in the thousands and humans have been shown to adapt to novel environments with far fewer, it seems promising to apply imitation learning for the task of learning of better sampling policies.\\n\\n[5] Dubey, R., Agrawal, P., Pathak, D., Griffiths, T. L., & Efros, A. A. (2018). Investigating human priors for playing video games. arXiv preprint arXiv:1802.10217.\"}",
"{\"title\": \"Response to Reviewer 1 (2/2)\", \"comment\": \"[Real World Robotics Deployment]\\n>> The \\\"trial and error\\\" method is clearly not viable for robotics setups, as hazards are costly. It would be nice if the authors could give their perspective on things.\\n\\nTo share our perspective, while it is true that taking a \\u201ctrial and error\\u201d approach to sampling could lead to potentially hazardous situations in real-world robotics settings, we believe that this is not an unreasonable way of collecting data and that there exist practical solutions for risk mitigation that have already been widely deployed.\\n\\nFraming the current state of self-driving research within the context of our work, we can view all autonomous vehicles today as being within the \\u201csampling\\u201d stage of a long-term active learning loop that ultimately aims to enable L4 autonomy. Almost every one of these vehicles on public roads today is equipped with one, if not multiple safety operators who are responsible for disengaging autonomy and \\u201ctaking over\\u201d when the system fails to operate within defined bounds for safety and comfort. Moreover, each of these \\u201ctakeover\\u201d scenarios is logged and used to improve the underlying models in future iterations of this learning loop [3]. Indeed, in this scenario, the safety operator serves the purpose of the \\u201cfeedback sensor\\u201d and can ultimately be removed at \\u201ctest time\\u201d, once the autonomous driving model has been deemed safe.\\n\\nIn less safety critical scenarios, such as closed course or small-scale testing, the role of the safety driver could be replaced with some form of high-frequency, high-resolution sensing such as multiple short-range LIDARs. These feedback sensors can be used to help the robot avoid collisions during the initial stages of active training, stopping the agent and providing labels whenever an undesirable state is entered. Importantly, since data from these expensive sensors is not directly used as an input by the model, they can be removed once a satisfactory model has been trained; production-spec robots are free to employ low-cost sensing without the need for high-cost feedback sensors.\\n\\nAdditionally, there exist many scenarios in which feedback sensors can help label examples without the need to experience catastrophic failures such as high-speed collisions. One example is the discrepancy between wheel speed sensor values, which can be used to detect loss of traction on a wheeled robot when travelling over rough or slippery surfaces. These labels could them be used to train a model that generates learned affordance maps to help such a robot navigate over the smoothest terrain.\\n\\nFinally, we would like to emphasize that in scenarios where it is difficult to obtain oracle-labelled data and \\u201ctrial and error\\u201d approaches are employed by necessity, we have shown that our proposed approach is many times more sample efficient that previous PPO-based reinforcement learning approaches for mobile robotics [4] (which, as R3 notes, also suffer from the same types of problems). If collecting a sample is costly due to the burden of operational hazards, we believe that a reduction in the number of samples required translates to an improvement in overall safety.\\n\\n[3] Dixit, V. V., Chand, S., & Nair, D. J. (2016). Autonomous vehicles: disengagements, accidents and reaction times. PLoS one, 11(12), e0168054.\\n[4] Chen, T., Gupta, S., & Gupta, A. (2019). 
Learning exploration policies for navigation. ICLR 2019\"}",
"{\"title\": \"Response to Reviewer 1 (1/2)\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you very much for your positive and constructive feedback, we are glad to hear that you found our work to be practical and interesting. To address your comments and concerns:\\n\\n[Terminology]\\nWe have updated our paper to correct our use of the term \\u201cinformation gain\\u201d, apologies for any confusion with our choice of terminology in the original text. Additionally, thank you for bringing the multiple definitions associated with the term \\u201cself-supervision\\u201d to our attention, we will be sure to keep this in mind going forward.\\n\\n[Related Work]\\n>> *Learning* a model of the environment and using it for navigation/exploration has also been tackled recently by [2]. I think the authors should draw connections to that work.\\n\\nMirchev et al. propose an interesting method of learning a generalized spatial representation that can be used for both navigation and exploration. The approach employs a deep sequential generative model and roughly metric map to reconstruct observations using pose-based attention, sharing our view that structured intermediate representations are important. Thank you for pointing out this piece of related work, we have cited and briefly compared our approach in the updated version of the paper.\\n\\nWe believe that the primary similarities between our work and that of [2] are that both approaches construct metric maps that contain information used to infer affordances at particular locations in world space, using models trained with dense supervision. However, our approach employs a predictive model, rather than an attention-based generative model. Additionally, we plan directly on top of a metric cost map, whereas the map employed by [2] is in latent space, with planning for navigation occurring in belief space. Another difference is that [2] employs observation reconstruction as a training signal, whereas we employ sensor feedback coupled with back-projection. Finally, a major difference is that the concept of affordance in our evaluation environments depends heavily on both dynamics and semantics, two types of constraints that [2] does not address (as affordances in their evaluation are defined solely by geometry).\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes an interesting, and to the best of my knowledge novel, pipeline for learning a semantic map of the environment with respect to navigability, and simultaneously uses it for further exploring the environment.\", \"the_pipeline_can_be_summarized_as_follows\": \"Navigate somewhere using some heuristic. When navigation \\\"works\\\", as well as when encountering something \\\"negative\\\", back-project that into past frames, and label the corresponding pixels as such: either positive or negative. This generates a collection of partially densely labelled images, on which a segmentation network can be learned that learns which part of the RGBD input are navigable and which should be avoided. For navigation, navigability of the current frame is predicted, and that prediction is down-projected into an \\\"affordance map\\\" that is used for navigation. One experiment confirms the usefulness of such an affordance map.\\n\\n\\nI am marking weak reject currently because of the following concerns, which might be me just missing something. On the one hand, I am glad to see something that is not just blind \\\"end to end RL with exploration bonus\\\", sounds reasonable, and works well. On the other hand, I do have several major concerns about the method, outlined as follows:\\n\\n1. How can this approach work for moving obstacles? Let's say a monster walks from point A to point B, and collides with the agent at point B. Then, point B is marked as a hazard, but in the previous frames, the monster is not located at point B, and thus an image region that does not contain the monster is marked as hazard. Am I missing something here?\\n2. The method does not seem practical for actual mobile robots, only for in-game or in-simulation agents. The reason being that in order to learn \\\"robot should not bump into baby\\\", the robot actually needs to bump into multiple babies in order to collect data about that hazard. To be fair, blind \\\"PPO+exploration bonus\\\" suffers from the same problem, but in this paper, the whole motivation is about mobile robots (at least that was my impression after reading it).\\n\\nFurthermore, I do not think I would be able to reproduce any of the experiments, as many details are missing. Will code be released?\\n\\n\\n###### Post-rebuttal update\\n\\nI am happy with the author's response to my concerns, and they have included corresponding discussions in their paper. Thus, I am improving my rating to recommend acceptance of this paper to ICRL2020.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents an approach for navigating and exploring in environments with dynamic and environmental hazards that combines geometric and semantic affordance information in a map used for path planning.\\n\\nOverall this paper is fairly well written. Results in a VizDoom testbed show favorable performance compared to both frontier and RL baselines, and the author's approach is more sample-ef\\ufb01cient and generalizable than RL-based approaches.\\n\\nI wouldn't consider any particular aspect of this paper to be that novel, but it is a nice combination of leveraging active self-supervised learning to generate spatial affordance information for fusion with a geometric planner.\\n\\nAs humans show the best performance on the tasks, it might be worth considering learning a policy from human demonstrations through an imitation learning approach.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to learn affordance maps: a method to judge whether a certain location is accessible. This is done by distilling a series of \\\"trial and error\\\" runs and the relation of a pixel in the image/depth plane to a corrdinate into a model.\\n\\nI like the idea and think the paper should be accepted. The idea to use trial and error (something I prefer to self-supervision, which is used differently in many contexts, I believe) to obtain a data set for learning a model is nice and very practical.\\n\\nSome concerns that I think should be adressed.\\n\\n- The term information gain is used wrongly. The entropy of class labels is not infogain. Infogain is the expected KL of the model posterior from the model prior. Please correct this. See [1, 2].\\n- *Learning* a model of the environment and using it for navigation/exploratin has also been tackled recently by [1]. I think the authors should draw connetions to that work.\\n- Self-supervision has recently been proposed by Lecun as a subsitute (of sorts) for unsupervised learning. What he means is that a part of the data is used to predict another part of the data. I have no hard feelings about the term, personally preferring unsupervised, but the authors should be aware of the name clash.\\n\\nI wonder how the authors envision to extend this method to real scenarios. The \\\"trial and error\\\" method is clearly not viable for robotics setups, as hazards are costly. It would be nice if the authors could give there perspective on things.\\n\\n[1] Depeweg et al, \\\"Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning\\\", Proceedings of the 35th International Conference on Machine Learning\\n[2] Mirchev et al, \\\"Approximate Bayesian Inference in Spatial Environments\\\" in proceedings of Robotics: Science and Systems XV.\"}"
]
} |
rkxMKerYwr | Towards Interpreting Deep Neural Networks via Understanding Layer Behaviors | [
"Jiezhang Cao",
"Jincheng Li",
"Xiping Hu",
"Peilin Zhao",
"Mingkui Tan"
] | Deep neural networks (DNNs) have achieved unprecedented practical success in many applications.
However, how to interpret DNNs is still an open problem.
In particular, how hidden layers behave is not clearly understood.
In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by ``monitoring" both across-layer and single-layer distribution evolution towards some target distribution during training. Here, ``across-layer" and ``single-layer" refer to the layer behavior \emph{along the depth} and the behavior of a specific layer \emph{along training epochs}, respectively.
Relying on optimal transport theory, we employ the Wasserstein distance ($W$-distance) to measure the divergence between the layer distribution and the target distribution.
Theoretically, we prove that i) the $W$-distance across layers to the target distribution tends to decrease along the depth; ii) the $W$-distance of a specific layer to the target distribution tends to decrease along training iterations; and iii)
however, a deep layer is not always better than a shallow layer for some samples. Moreover, our results help to analyze the stability of layer distributions and explain why auxiliary losses help the training of DNNs. Extensive experiments on real-world datasets justify our theoretical findings. | [
"Interpretability of DNNs",
"Wasserstein distance",
"Layer behavior"
] | Reject | https://openreview.net/pdf?id=rkxMKerYwr | https://openreview.net/forum?id=rkxMKerYwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5IzGXvslrr",
"rkgKoLL9jH",
"r1eMoS85iS",
"BJeibrI5iH",
"ByxCO489sH",
"BJe8i8JLqB",
"HkxW9dLptr",
"B1lfD3Ahtr",
"H1eDnHP7_r",
"Sye009qhwr",
"rkgUEKZ2PB",
"B1lIWeW3wH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment",
"comment",
"comment"
],
"note_created": [
1576798748846,
1573705376521,
1573705113529,
1573704963148,
1573704821639,
1572365982234,
1571805320773,
1571773530048,
1570104751007,
1569659606011,
1569622317946,
1569619966277
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2426/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2426/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2426/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2426/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2426/Authors"
],
[
"~pankaj_gupta1"
],
[
"~Cantona_ViVian1"
],
[
"~pankaj_gupta1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies the transfer of representations learned by deep neural networks across various datasets and tasks when the network is pre-trained on some dataset and subsequently fine-tuned on the target dataset. On the theoretical side the authors analyse two-layer fully connected networks. In an extensive empirical evaluation the authors argue that an appropriately pre-trained networks enable better loss landscapes (improved Lipschitzness). Understanding the transferability of representations is an important problem and the reviewers appreciated some aspects of the extensive empirical evaluation and the initial theoretical investigation. However, we feel that the manuscript needs a major revision and that there is not enough empirical evidence to support the stated conclusions. As a result, I will recommend rejecting this paper in the current form. Nevertheless, as the problem is extremely important I encourage the authors to improve the clarity and provide more convincing arguments towards the stated conclusions by addressing the issues raised during the discussion phase.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your valuable comments. We have carefully considered the four concerns and made the paper clearer in the revised paper. We sincerely hope you would be satisfied with the clarifications below.\\n\\nQ1. Concern on multi-label classification and Wasserstein distance\\n\\nThe setting of multi-label classification does motivate the use of Wasserstein distance. First, using Wasserstein distance is able to improve the performance of multi-label classification [5, 6]. Second, deep neural networks on the multi-label classification task still lack strong theoretical understanding. With the help of optimal transport theory, we are able to use Wasserstein distance to interpret deep neural networks via understanding layer behaviors.\\n\\nQ2. Necessity of teacher-student networks\\n\\nWe exploit the teacher-student framework to build our analysis as it is a flexible framework to analyze and understand neural networks [7, 8, 9]. Specifically, in our one-layer behavior analysis, this framework helps to understand the dynamics of a student network from the teacher network. In our multi-layer behavior analysis, this framework helps to study the ability of the student network to express distributions of the teacher network (Barron function). Therefore, the teacher-student analysis framework is important and necessary in our paper to understand deep neural networks. We believe our analysis framework would provide a different view of understanding and interpreting neural networks.\\n\\nQ3.\\tMore details of the regularization strength and p of Wasserstein distance\", \"regularization_strength_of_the_wasserstein_distance\": \"When the regularization strength $\\\\alpha$ is large enough, the entropic Wasserstein distance in Eqn. (17) coincides with the Wasserstein distance in Eqn. (2) [10]. In practice, we set $\\\\alpha=0.01$ to achieve balanced results. In addition, we choose $p=2$ in the experiments and all theoretical analysis.\\n\\nQ4.\\tMore details of $\\\\tilde{f}_i, f_i$ and their domains\\n\\nIn the second paragraph of Section 3, for all functions $\\\\tilde{f}_i, i=1, \\u2026, L$, they have different input and output domains. In contrast, for all functions $f_i, i=1, \\u2026, L$, they have the same input domain and the same output domain, because it feeds the same input and then outputs the label distribution to close to the ground-truth. We clarify Figure 1 (a) and make them clearer in the revised paper.\", \"reference\": \"[5] Charlie Frogner et al. Learning with a wasserstein loss. NeurIPS, 2015.\\n[6] Peng Zhao et al. Label distribution learning by optimal transport. IJCAI. 2018.\\n[7] Yuandong Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis. ICML, 2016.\\n[8] Simon S. Du et al. When is a convolutional filter easy to learn? ICLR, 2018.\\n[9] Qiuyi Zhang et al. Electron-proton dynamics in deep learning. arxiv, 2017.\\n[10] Marco Cuturi. Sinkhorn Distances: lightspeed computation of optimal transport, NeurIPS, 2013.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your valuable comments. We conduct thorough repeated experiments in the revised paper, and we sincerely hope you would be satisfied with our following response on your concern over the consistency of experimental results.\\n\\nQ1. Consistency of experimental results\\n\\nThe experimental results are consistent with repeated experiments, which are shown in Figure 9 in Section F of Supplementary materials. In this experiment, we shuffle the data and then conduct three experiments with different partitioning. From Figure 9, different experiments consistently have the same decreasing tendency through the depth of a neural network.\", \"here_we_would_like_to_highlight_our_main_contributions_as_below\": \"1. We propose a unified teacher-student analysis method to explore both across-layer and single-layer behaviors.\\ni) Across-layer behaviors: The W-distance between the distribution of any layer and the target distribution decreases along the depth of a DNN.\\nii) Single-layer behaviors: For a specific layer, the W-distance between the distribution in an iteration and the target distribution decreases across the training iterations when introducing a loss in the layer.\\niii) We prove that a deep layer is not always better than a shallow layer for some samples (see Figure 5).\\n\\n2. We have provided extensive experiments to justify these findings.\"}",
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your constructive comments.\\n\\nQ1. Advantage of the proposed method over information bottleneck and main contributions of our paper\\n\\nExisting studies [1, 2] using information bottleneck methods mainly analyze the dynamics of across different layers. However, it is hard for these methods to analyze the dynamics of a specific layer through different iterations. In contrast, our proposed method is able to analyze both single-layer and across-layer behaviors.\", \"we_highlight_the_main_contributions_as_follows\": \"1. We propose a unified teacher-student analysis method to explore both across-layer and single-layer behaviors.\\ni) Across-layer behaviors: The W-distance between the distribution of any layer and the target distribution decreases along the depth of a DNN.\\nii) Single-layer behaviors: For a specific layer, the W-distance between the distribution in an iteration and the target distribution decreases across the training iterations when introducing a loss in the layer.\\niii) We prove that a deep layer is not always better than a shallow layer for some samples (see Figure 5).\\n\\n2. We have provided extensive experiments to justify these findings. \\n \\n\\nQ2. Definition of the label distribution\\n\\nAs defined in [3], the label distribution can be defined as a probability distribution to cover a certain number of labels, representing the degree to which each label describes the instance, as shown in Figure 1 (c) of the revised paper. Because the label distribution is a probability distribution, its sum is equal to 1. The revised paper provides more intuitive examples to explain the definition of label distribution.\\n\\nDue to the convexity of cross-entropy loss, we derive the optimal label distribution for given features of every layer [4]. In this sense, the label distribution reflects the actual distribution of feature maps in a specific layer.\", \"reference\": \"[1] Naftali Tishby et al. Deep learning and the information bottleneck principle. IEEE ITW, 2015.\\n[2] Seojin Bang et al. Explaining a black-box using deep variational information bottleneck approach. arxiv, 2019.\\n[3] Xin Geng. Label Distribution Learning. KDD, 2016.\\n[4] Guillaume Alain, Yoshua Bengio. Understanding intermediate layers using linear classifier probes. arxiv, 2018.\"}",
"{\"title\": \"Response to AC and all reviewers\", \"comment\": \"Dear AC and reviewers,\\n\\nThank you very much for your constructive comments. In this paper, we propose a unified teacher-student analysis method to analyze both across-layer and single-layer behaviors of neural networks. Moreover, our theoretical findings help to improve the classification performance of multi-label learning tasks (see Table 1). We believe our results would provide a different view of understanding and interpreting neural networks.\\n\\nWe have updated a revised version of the paper. The changes have been highlighted as follows:\\n\\n1. We highlight the contributions of our paper on page 2.\\n2. We discuss some studies using information bottleneck methods in related work.\\n3. We define the label distribution in Figure 1 (c) and Section 3, and provide more intuitive examples in Figure 16 in Section I of Supplementary materials.\\n4. We conduct thorough repeated experiments to verify the consistency of performance in Figure 9 in Section F of Supplementary materials.\\n5. We explain the reasonability and necessity for the setting of multi-label classification in Section 4.\\n6. We explain the importance and necessity for the teacher-student networks in Section 5.\\n7. We give more details of $f_i$ and $\\\\tilde{f}_i$, and clarify Figure 1 (a) in the revised paper.\\n\\nThank you very much for your consideration.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a method to compute the distance of distribution of two layers in neural networks by using the label distribution mapping (e.g., Frogner et al., 2015). With the tool, authors could see how individual layers could related each other across-layer (along the depth) and single layer (training epoch).\\n\\nI believe that the contributions of this paper are week in analyzing individual layers across-layer since there are many extensive studies are conducted on information bottleneck methods with mutual information. I believe that those methods are better to analyze the dynamics of learning even without the additional label distribution mapping.\\n\\nHowever, authors of this paper presents a way to utilize the label distribution mapping to compare the distance of individual layers when an input image come as shown in Figure 5 which I believe the main contribution of this paper. \\n\\nThe (somehow artificial and ambiguous) term, label distribution is used several places before it is defined. Even in Section 3, the label distribution mapping is not clearly explained except for the description of FC+softmax. Thus, it would be better to clarify the definition. Also, it is not clear that the label distribution reflect the actual distribution of (nodes or feature maps) in a specific layer. It would be good to spend more space and resources (e.g., image and/or running examples) to explain the definition of label distribution.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors intuitively, and then analytically, explain the behavior in the hidden layers of deep convolutional networks and show how the behavior can be used to improve performance by \\\"early exiting.\\\"\\n\\nI give this paper a weak reject. I believe this paper does well by connecting the intuitive explanation with the proofs, and then by confirming their results through experimentation. I also applaud the authors for their rigorous explanation of the hyper-parameters and experimentation methods. However, from what I can tell, there was no cross-fold validation or even repeat trials with different partitioning to see whether the differences in performance were just random perturbations or a consistent effect. The increase in accuracy isn't large enough across experiments to allay my concerns.\\n\\nI think the authors have some very compelling work here, but the lack of a large difference in accuracy combined with insufficient testing methodology causes me to reject this paper... but only barely. I can be convinced otherwise with a compelling set of arguments.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper seeks to understand both across-layer and single-layer behavior within neural networks (i.e. layer behavior along the depth of a network, and behavior of a single layer along training epochs). Therefore, they resort to the optimal transport framework to compare predicted and target distributions. Theoretically, they show that the Wasserstein distance between predicted and target distributions is decreasing along the depth and for a single layer, along training iterations. They also give intuition on how this analysis can help the learning process in practice.\\n\\nThis paper gives an interesting contribution to the in-depth analysis of neural networks. However, some elements remain unclear:\\n\\n1.\\tThe setting of multi-label classification does not really motivate the use of measures.\\n2.\\tIt is unclear why the use of teacher/student networks are pertinent or necessary.\\n3.\\tThere is no detail on the regularization strength of the Wasserstein distance, or what p (in definition 1) is chosen either in the experiments or in the theorems.\\n4.\\tI believe it is understated that all \\\\tilde{f}_i have the same input and output domains (as well as h=h_i in figure 1a), which is restrictive and should have been made clearer. \\n\\n- Post rebuttal: I thank the authors for their response. On this basis, I am maintaining my weak reject rating.\"}",
"{\"comment\": \"Thanks for your helpful comments. LISA (Gupta et al, 2018) is a great work to explain deep neural networks by understanding the layer-wise semantic accumulation behavior. We will discuss and cite this work in the related work.\", \"title\": \"Discussions of the related work\"}",
"{\"comment\": \"Appreciate your question about relatedness. I understand the following connection of this work vs [1], though the latter is explanation-based method to study behavior of neural networks.\\n\\nAt test time, [1] studies the layer-wise semantic accumulation behavior, how a semantic is built given a sequence of words and propagated across hidden layers of RNN. Moreover, it detects and extracts salient textual patterns in building semantics. It is based on simply analyzing the label distribution at each of the hidden layers in a sequence classification task and studies the confidence of RNN for the target class, spanning across recurrent layers and thus, in accumulating semantics across hidden layers. \\n\\nHowever, this work explores how label distribution propagates from one layer to another in order to understand behaviors of different hidden layers in DNN.\", \"title\": \"Semantic Propagation in hidden layers\"}",
"{\"comment\": \"The submitted paper targets at understanding across-layer behavior during the learning process of DNNs. And the mentioned paper is about interpreting the predictions of RNNs. Why they are related?\", \"title\": \"I do not think there is a connection between the two works.\"}",
"{\"comment\": \"Please also include the following work in your related works section. It understands neural networks (especially RNNs) in explaining their judgements and semantic accumulation behavior.\\n\\nPankaj Gupta, Hinrich Sch\\u00fctze. LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation. In BlackboxNLP@EMNLP 2018.\", \"title\": \"Nice work. Include additional Reference\"}"
]
} |
S1eZYeHFDS | Deep Learning For Symbolic Mathematics | [
"Guillaume Lample",
"François Charton"
] | Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborate tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing these mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica. | [
"symbolic",
"math",
"deep learning",
"transformers"
] | Accept (Spotlight) | https://openreview.net/pdf?id=S1eZYeHFDS | https://openreview.net/forum?id=S1eZYeHFDS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"35viKf4BF",
"SygTQ8Pqjr",
"rygDLq24jr",
"BylfbY3Esr",
"BJeaNOh4jr",
"SJgIgO3VoS",
"Hklgq824ir",
"HJlE_rhVsS",
"r1g8MHnVoS",
"H1ltQLJMoH",
"H1xysAfSqH",
"HJgxJL_CKS",
"SJgm1yNFFS",
"rkec-H1UKS",
"H1eE5hROOH",
"rygXFOtvOr",
"HkgfEuFwdB",
"SJeO4Wa7_H",
"SyeEzcnXdB",
"SJlw8Scm_H",
"B1xF7rqQdH",
"ryevZBcQ_r",
"BJxJSz9QOB",
"ryeOMfcmOB",
"r1g-G6UAPB",
"SkesvNU6wB",
"HkghyDvjwB",
"H1lpDxrivr",
"SylS2ep9vH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"comment",
"comment",
"comment"
],
"note_created": [
1576798748818,
1573709348627,
1573337679101,
1573337337970,
1573337141495,
1573337069966,
1573336711761,
1573336428077,
1573336333982,
1573152289251,
1572314774847,
1571878359631,
1571532506617,
1571316994444,
1570462859964,
1570375803441,
1570375721719,
1570128176342,
1570126348492,
1570116943437,
1570116897241,
1570116862582,
1570116150674,
1570116112400,
1569774857456,
1569707106745,
1569580771658,
1569570916934,
1569538221040
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2425/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"~Bartosz_Piotrowski1"
],
[
"~Forough_Arabshahi1"
],
[
"ICLR.cc/2020/Conference/Paper2425/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2425/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2425/AnonReviewer1"
],
[
"~Nick_Moran1"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"~Forough_Arabshahi1"
],
[
"~Nick_Moran1"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2425/Authors"
],
[
"~anima_anandkumar1"
],
[
"~S._Alireza_Golestaneh2"
],
[
"~Ronen_Tamari1"
],
[
"~Nestor_Demeure1"
],
[
"~A_B_C2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper presents a deep learning approach for tasks such as symbolic integration and solving differential equations.\\n\\nThe reviewers were positive and the paper has had extensive discussion, which we hope has been positive for the authors. \\n\\nWe look forward to seeing the engagement with this work at the conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks!\", \"comment\": \"Thank you for your detailed response. It helped me to understand the paper better.\"}",
"{\"title\": \"Updated version of the paper\", \"comment\": \"We thank the reviewers, and all participants in this discussion for their comments. They were extremely useful and helped us to improve the paper. To address them, we have been actively working on the paper since the initial submission. We just uploaded a new version with a lot of new results.\\n\\nWe replied to all reviewers and commenters individually. Below is a summary of the major changes between the updated version and the original submission:\\n\\n1) Data generation for function integration has been deeply modified. To address the concerns about generalization, we now consider not 1 but 3 different generators to create functions with their integrals. Section 3.1 has been entirely rewritten, and describes our 3 generators in detail. In Section 4, we show that our model achieves excellent in-distribution performance on all three samples, and discuss out of distribution generalization. We believe this is the correct way to address generalizability issues, in self-supervised settings where datasets are generated.\\n\\n2) A new section (Section E of the appendix) discusses generalization across datasets and studies differences between examples generated by our 3 generators.\\n\\n3) A new section (Section F of the appendix) also addresses the generalization concern, and shows that a model trained to integrate exclusively functions that a symbolic framework (SymPy) can integrate, is able at test time to integrate functions that the symbolic framework is not able to integrate. This means that the model was able to generalize beyond the set of functions integrable by SymPy which it was trained on.\\n\\n4) To address another concern about the timeouts, we added a new section (Section D of the appendix) experimenting with different timeouts for Mathematica. We show that the number of expressions on which Mathematica times out only represents a small fraction of the failure cases, and that Mathematica usually indicates that it cannot integrate the input equation before reaching the time limit. We also show that even in the ideal scenario where Mathematica would succeed on all equations where it times out, the difference in performance would remain small and this would not change the conclusions. We also conducted a test with Maple.\\n\\n5) We added a graph at the end of Section 2 showing number of expressions and trees for different numbers of operators and nodes.\\n\\n6) In Section 4, we added statistics about the data sets, such as the training set size, the average and maximum length of expressions, and the ratio between input and output lengths.\\n\\n7) At the end of the evaluation section, we clarified how we use beam search, and that unlike in machine translation, we do not return the single hypothesis with the highest score, but that we consider all hypotheses in the beam.\\n\\n8) In the appendix, we improved the algorithm for generating random expressions. The new algorithm produces the same distribution, but its derivation is clearer and it implementation cleaner.\\n\\n9) We removed the alternate generator for second order ODEs, which was ultimately not needed.\\n\\n10) At the end of the appendix, we added a page with examples of integrals generated by our three methods.\\n\\n\\nFinally, as many people have requested the code and datasets, we would like to confirm that we will release them after the review process.\"}",
"{\"title\": \"Generalization\", \"comment\": \"Thank you for your comment. We now address the generalization problem, please refer to the updated version of the paper (Sections 3.1, 4.4, E and F).\\n\\nA test set of 5000 is large enough to have a reliable estimate of the overall accuracy of our model. See Tables 4 and 5, where we evaluate our model on 500 and 5000 equations and obtain almost identical results.\\nAs already mentioned in the paper, the reason we considered 500 equations is because of the limited speed of the symbolic frameworks we considered. Besides, the difference of performance we observe between the models is already statistically significant for 500 equations.\\n\\nWe agree that tree-structured models are an interesting alternative to seq2seq models. But although they are a natural choice for classification tasks, using them to transform an expression into another is more challenging. People have tried in the past to use tree-structured models for sequence generation in NLP, but with limited success compared to seq2seq models which remain the natural choice. We leave the study of applying tree-structured models to function integration and differential equation solving to future work.\\n\\nWe actually found that our models were very stable, and that changing the architecture / learning rate scheduling had almost no impact on our results (please refer to our response to reviewer 1 for more details).\\n\\nAs mentioned in the comments below, we will release our code and datasets for reproducibility.\"}",
"{\"title\": \"Response to review #1 (2/2)\", \"comment\": \"===== Visualization and sparse transformers\\n\\nWe agree that visualizing the attention is very interesting, and could give insights on the way the model is actually operating. We tried to use the \\u201cbertviz\\u201d library to see whether some attention heads focus on specific sub-expressions in input equations. Unfortunately, we did not see any specific patterns in our visualizations. We also quickly tried something in the spirit to Malaviya et al. to constrain the attention of our model, hoping that visualization would be easier. Unlike them, we used a naive approach where we simply set to 0 the attention scores that are not in the top-k highest scores. We found that this constraint hurts the performance of the model for small values of k, and does not make visualization much easier because of the skip-connections in the transformer. We did not have time to investigate the visualization further, but will definitely consider it in future work.\\n\\n===== Beam search\\n\\nThe beam search procedure we use is the one described in Sutskever et al, 2014, which we now cite along Koehn 2004. We added another paragraph at the end of the Evaluation section to clarify how we use the beam search.\\n\\n===== Mathematica comparison / timeouts\\n\\nAfter submission, we conducted more precise tests on Mathematica, over the same test set, and with the same trained model.\\n\\nFor a given timeout delay, there are three possible outcomes:\\n1- Mathematica finds a solution before it times out\\n2- Mathematica times out without a solution\\n3- Mathematica returns without a solution before time out (either by returning the input, or a solution including an indefinite integral)\\n\\nIn the submission, we considered 2 and 3 as failures. In the experiments, we used one of several ways to compute integrals in Mathematica: function DSolve, which can be used both for integration and differential equations. Upon further investigation, we noticed that function Integrate runs faster, and therefore achieves better results for the same timeout value. We updated the scores of Mathematica in the paper, and added a table in the appendix (Table 7, Section D) with the percentage of outcomes (success, timeout, failure) for different timeout delays, computed using the faster Integrate.\\n\\nAs the timeout delay increases, timeouts are less frequent, and a bound on infinite time success rate can be calculated. This suggests a success rate between 85% and 86%. This table also justifies 30s as a practical time out value. This reduces the gap between Mathematica and our model (as we had suggested in our paper), but a significant difference remains and this does not change the conclusions.\\n\\n===== Training curves\\nWe agree that training curves would be helpful and interesting. We will add some in the next revised version of the paper. Thank you for your suggestion.\"}",
"{\"title\": \"Response to review #1 (1/2)\", \"comment\": \"Thank you very much for your review and comments. We address your questions in order.\\n\\nPART 1/2\\n\\n===== Hype / Overclaiming\\n\\nWe had no control over the discussions on the Internet prior to the review, and took no part in them, nor did we encourage them by communicating on our work or publishing on arXiv before review. This is a side effect of the open review process, together with the very interesting adversarial discussions we just had.\\n\\nIn the paper, we tried to be prudent and not overclaim, by explaining that we work with a dataset generated by our model and use standard differential equation solvers that may work better on different sets of equations. We also mention, at the end of paragraph 4.5 \\u201dWhen comparing with Matlab and Mathematica, we work from a data set generated for our model and use their standard differential equation solvers. Different sets of equations, and advanced techniques for solving them (e.g. transforming them before introducing them in the solver) would probably result in smaller performance gaps.\\u201d\\n\\n===== On sqrt(-2) and log(0), and the \\u201ccleaning\\u201d of some formulas\\n\\nThe main reason why we eliminated such constants (and very large values such as exp(exp(exp(5))) is that they made life difficult for SymPy and NumPy, which we use to test and verify our results. They tended to cause unwanted (and sometimes very difficult to catch) exceptions, and even server crashes. Since our model works on symbols; and does not care for actual numeric values, these constants (as opposed to functions of variable x) had no impact on actual integration or equation solving, they could have been replaced by anything. \\n\\nOperating in the complex domain is also possible. We took the decision to discard complex equations arbitrarily, but we could easily add them back.\\n\\nHowever, on a deeper level, and in the specific case of symbolic integration, we do not think that adding infinity or operating in the complex domain would be an improvement. The objective of symbolic integration consists in finding a solution to an indefinite integral without adding new symbols, and in the smallest possible algebraic extension of the original field (here, an extension of Q since our constants are integers). We believe this is true for other tasks of symbolic mathematics.\\n\\n===== On the two examples you provide\\n\\n\\\"These results are surprising given the incapacity of neural models to perform simpler tasks like addition and multiplication\\\"\\n-> The difficulty to perform such calculations with neural networks is documented (see the reference in paragraph 2 of our introduction). We actually tested transformers on such problems (this was the original objective of our project), and were surprised to find that integration, a much more difficult task from a human point of view, seemed much easier for our model. We will clarify this.\\n\\n\\\"This suggest (sic) that some deeper understanding of mathematics has been achieved by the model.\\\"\\n-> We removed this sentence from the paper, but we consider that recovering equivalent expressions (i.e. alternative solutions of the problems) through beam search, is a very important finding. As shown in Table 4 (Table 6 in the updated version of the paper), the model consistently recovers correct solutions that have very different representations. This is very surprising, and does suggest something important is at work. 
We have no explanation to offer so far, but we believe it is a very important observation.\\n\\n===== Code / datasets\\n\\nYes, as promised, we will make our code and datasets public after the review process.\\n\\n===== Network architecture\\n\\nWe decided to consider the same transformer configuration as Vaswani et al., i.e. 6 layers and a dimensionality of 512, with 8 heads. We tried to increase the number of layers, the number of heads, dimensionality, but did not observe significant improvements with larger models. On the other hand, we found that very small models (c.f. our response to Forough) still perform well on function integration, even when they are only composed of 2 layers of dimension 128. Our observation was that transformers perform well on the considered tasks, and are also very robust to the choice of hyper-parameters, unlike what people observed in machine translation. Machine translation systems typically benefit from advanced learning rate schedulers (either with linear or cosine decay, with many hyper-parameters). These schedulers did not bring any improvements in our case, and we simply use a constant learning rate of 10^(-4).\"}",
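The configuration described above can be sketched with PyTorch's built-in transformer; this is only an illustration of the stated hyper-parameters (6+6 layers, dimension 512, 8 heads, plain Adam at a constant learning rate of 1e-4). The vocabulary size is a placeholder, and the authors' actual implementation may differ.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 1000  # placeholder, not a number from the paper

# Same skeleton as Vaswani et al.: 6 encoder / 6 decoder layers,
# model dimension 512, 8 attention heads.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)
embed = nn.Embedding(VOCAB_SIZE, 512)   # shared token embeddings
proj = nn.Linear(512, VOCAB_SIZE)       # output projection to the vocabulary

params = (list(model.parameters()) + list(embed.parameters())
          + list(proj.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)  # constant LR, no warmup/decay
```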
"{\"title\": \"Response to review #2\", \"comment\": \"Thank you very much for your review and comments. We address them in turn:\\n\\n==== \\u201cI presume that when the authors compare their learned solvers with Mathematica and Matlab, they used a dataset generated by their method.\\u201d\", \"this_is_correct\": \"we test our model and Mathematica on a held out sample from the generated sample (and mention at the end of paragraph 4.5 that this creates a favorable situation for our model).\\n\\nSince the submission, we tried to experiment on integration for samples generated with different methods. More precisely, we generated \\u201cforward\\u201d samples of random functions that SymPy knows how to integrate. This gives a good approximation of what Computer Algebras are good for. Examination of the samples shows that in backward samples, derivatives tend to be longer than primitives, whereas the opposite holds for forward samples. Unsurprisingly, a model trained on backward samples performs poorly on forward examples. But a forward-trained model achieves the same performance on forward data as a backward-trained model on backward data: this suggests that the performance is linked to data generation, and we actually observe that a model trained on the combination of backward and forward data achieves a good performance on all samples. These new results are in the updated version of the paper.\\n\\n===== p3: Why is it important to have a generator that produces the four expression trees in p3 with equal or almost equal probabilities? Do you have any semi-formal or informal justification that the distribution of such a generator better matches the kind of expressions arising in the real world?\\n\\nWe have no idea of the actual distribution of expressions \\u201cin the wild\\u201d (provided this has a meaning). Since we have no reason to consider an expression more relevant than another, we decided to sample all of them with the same probability. Since there is a one to one mapping from expressions to decorated trees (thanks to the prefix notation), we want to sample them uniformly, which means that all trees have to be sampled with the same probability.\\n\\n===== \\\"If this equation can be solved in c1\\\", p5: How realistic is this assumption?\\n\\nFormally, the function F(x,y,c1) is the equation of the level curves of the function f, which we originally generated. The equation dF/dx = 0 corresponds to the gradient of F along x. Solving this equation in c1 amounts to finding an equation of the level curves of the gradient. In practice, we found that we can solve in c1 about 50% of the time. If we cannot, we simply discard the initial expression.\\n\\n===== p5: If you have a thought or an observation on the impact of each of the data-cleaning steps in Section 3.4, I suggest you to share this in the paper.\\n\\nEquation simplification, like the use of small integer coefficients in expressions, limits the need for our model to carry out (and learn) arithmetic simplification in addition to the main task (integration of equation solving). This will reduce, or rather, bias, the generated expressions, by reducing the number of constants (i.e. leaves different from \\u2018x\\u2019 in the expression tree), and eliminating certain sequences of operators (exp(log()), sin(arcsin()), and so on. We consider it as a way to improve learning by focusing on the task at hand.\\nCoefficient simplification is a trick of our method to generate differential equations. 
This step makes the elimination of constants c1 and c2 easier, but the generated equations and solutions remain the same.\\nInvalid expression removal allows us to avoid exceptions when evaluating the functions. Since they only concern constants, they have very little impact on the problem. An alternative would be to replace the invalid sub-expressions by valid ones (see also our reply to reviewer 1 on this point).\\n\\n===== p6: Why did you remove expressions with more than 512 tokens?\\n\\nWe found that with very large expressions, the transformer model is subject to out of memory errors, which requires to use a smaller batch size at training time. To keep a large batch size (and to make training faster), we set this limit of 512 tokens. Overall, this is only discards a tiny fraction of the generated expressions.\\n\\n==== p7: Would you put the reminder of the size of the training set in Section 4.4? It only mentions that of the test set currently.\\n\\nYes, we added a new Table (table 1 in the updated version of the paper) with statistics about our datasets. Thank you for the suggestion.\"}",
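To illustrate the one-to-one mapping between expressions and decorated trees via prefix (Polish) notation mentioned above, here is a small self-contained sketch with a toy operator set (the paper's actual vocabulary is richer). Because every operator has a known arity, a prefix sequence needs no parentheses and decodes back to a unique tree.

```python
# Each tree node is (operator, [children]); leaves are plain strings.
ARITY = {"add": 2, "mul": 2, "sin": 1}  # toy operator set for illustration

def to_prefix(tree):
    """Serialize a tree to a prefix token sequence (node before children)."""
    if isinstance(tree, str):              # leaf: variable or constant
        return [tree]
    op, children = tree
    return [op] + [tok for c in children for tok in to_prefix(c)]

def from_prefix(tokens):
    """Rebuild the unique tree encoded by a prefix token sequence."""
    tok = tokens.pop(0)
    if tok not in ARITY:                   # leaf
        return tok
    return (tok, [from_prefix(tokens) for _ in range(ARITY[tok])])

# 3 + sin(2 * x)  ->  ['add', '3', 'sin', 'mul', '2', 'x']
tree = ("add", ["3", ("sin", [("mul", ["2", "x"])])])
seq = to_prefix(tree)
assert from_prefix(list(seq)) == tree      # the mapping is one-to-one
print(seq)
```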
"{\"title\": \"Response to review #3\", \"comment\": \"Thank you very much for your review and your comments. We address them in the updated version of the paper.\\n\\nIn particular, we added a new table (Table 1 of the updated version) with statistics about the considered training sets, and the length of expressions. We also added a figure (Figure 1) that represents the number of trees and expressions for different numbers of operators and leaves.\\n\\nAt the end of Section 4.3, we clarified our use of beam search, and explained how it differs from what people usually do in machine translation (i.e. only returning the hypothesis of the beam with the highest score).\"}",
"{\"title\": \"beam search\", \"comment\": \"You are absolutely right. Thank you for your suggestion. Encouraging diversity would probably allow to model to explore a wider set of candidates, and increase the probability to find a good solution. Actually, maybe a simple but effective solution could be to sample solutions instead of using a beam search. This is something we will investigate in the future.\"}",
"{\"title\": \"Generalization, similar work, other issues\", \"comment\": \"Thank you for an interesting work! I have several comments, though.\\n\\n(1) Generalization. I very much agree with Forough Arabshahi that assessing generalization of the model is a crucial issue here. In my opinion it requires much more careful analysis. It could happen that the examples in the test set are in some way too similar to some (or many) instances in the training set, and the model does more memorization than the claimed generalization. I'm not saying this similarity is trivial, maybe it is some nuanced leak in the data, but it requires our attention, and this analysis can be interesting on its own. Which examples were problematic for the model? What characterizes the easy ones? Also, why did you use so small test set compared to 40M training set? Why accuracy is measured on 5000 examples but comparison with Mathematica is done only for 500 examples? At first glance it's a bit dubious. Releasing the data sets would be very appreciated in the context of these concerns.\\n\\n(2) Similar work. Recently we did experiments which were in the same spirit -- but for easier and smaller data. It was applying out-of-the-box seq2seq models for (a) normalizing polynomials (of varied complexity, generated synthetically) and (b) for learning rewriting steps extracted from automated proofs. (The work was presented at AITP'19 [1] and GNN workshop at ICML'19 [2]). In our experiments we also noticed, that prefix/Polish notation is helpful for applying NMT.\\n\\nThere is also another, earlier, work about NMT in symbolic setting where a problem being solved is translating informal LaTeX to formal math [3].\\n\\n(3) TreeNNs. I understand the motivation for using classical seq2seq -- faster to train, performance is great. I believe, though, that research-wise it's important to not lose the focus on tree neural nets -- tree structure is intrinsic to symbolic expressions and its extraction is for free. TreeNNs can \\\"directly comprehend\\\" this tree structure. I believe this is the way to provide the right domain-specific architectural bias (like 2d convolutions is the right bias for images) and to achieve much more robust/controlled/explainable generalization. I hope to see advances in tree-based architectures for symbolic problems (even if these models are, initially, inferior efficiency/performance-wise.) \\n\\n(4) Technical details. I would like to see more details such as: for how many epochs did you train the network, what hardware did you use for the evaluation with time-limited Mathematica, how the hyperparameters of the model were found (Transformer tends to be fragile with respect to hyperparameters.) This would be beneficial for increasing reproducibility.\\n\\n[1] Piotrowski, Brown, Urban, Kaliszyk: Can Neural Networks Learn Symbolic Rewriting?, AITP 2019, http://aitp-conference.org/2019/aitp19-proceedings.pdf\\n[2] (title as above), GNN workshop at ICML 2019, https://graphreason.github.io/papers/40.pdf\\n[3] Qingxiang Wang, Cezary Kaliszyk, Josef Urban:\\nFirst Experiments with Neural Translation of Informal to Formal Mathematics. CICM 2018\"}",
"{\"title\": \"Generalizability\", \"comment\": \"Thanks a lot for your response!\\n\\nThe main point of my previous comment, which is unanswered, was that the *test data* does not measure the generalizability of the model, since the test set has the same complexity as the training set. An example of a test set that can potentially measure generalizability performance is one containing deeper expressions than seen in training. There could also be other ways of measuring generalizability depending on the problem studied. You can e.g. take a look at the tests done in the Neural Turing Machine (NTM) [1], where they test the model's generalizability on the copy task by seeing the performance of NTM on sequences that are longer than the sequences in the train set. You can see there, too, that the model is doing very well on held-out data of the same complexity as train data. The interesting thing is to see how well the model generalizes to more complex datasets.\\n\\nIn fact, your observation that a simple 2 layer network can achieve such a high accuracy on the held-out set could be a red flag that might mean the test set is too simple for measuring the model's true generalizability performance.\\n\\nIf you are proposing a symbolic solver, then yes SOTA is a computer algebra system. But when you are proposing a neuro-symbolic solver, SOTA is the other neuro-symbolic solvers; although it is always nice to see comparisons with computer algebra systems. Specifically, there was a claim in your paper that tree-structured models are not needed. It is always good to back-up claims using experiments (and I note again that if the test data is of the same complexity as the train data, like it is here, this claim is probably true to some extent.)\\n\\n[1] Graves, Alex, Greg Wayne, and Ivo Danihelka. \\\"Neural turing machines.\\\" arXiv preprint arXiv:1410.5401 (2014).\\n\\nP.S. Thanks for pointing us to the typo! We did not claim that we proposed a method for generating *differential equations*.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"In this paper, the authors propose a method for generating two types of symbolic mathematics problems, integration and differential equations, and their solutions. The purpose of the method is to generate datasets for training transformer neural networks that solve integration and differential-equation problems. The authors note that while solving these problems is very difficult, generating solutions first and corresponding problems next automatically is feasible, and their method realizes this observation. The authors report that transformer networks trained on the synthetically generated solution-problem pairs outperform existing symbolic solvers for integration and differential equation.\", \"Here are the reasons that I like the paper. The observation that solving a symbolic mathematics problem is often a pattern matching process is interesting. It is surprising to know that a transformer network designed to translated the generating problem-solution pairs backward (from problem to solution) works better than the solvers in Mathematica and Matlab. Also, I like nice cute tricks used in the authors' method for generating solution-problem pairs, such as the syntactic condition on a possible position of some constant. The paper is overall clearly written.\", \"I presume that when the authors compare their learned solvers with Mathematica and Matlab, they used a dataset generated by their method. I feel that this comparison is somewhat unfair, although it still impresses me that even for this dataset, the authors' solvers beat Mathematica and Matlab. I suggest to try at least one more experiment on a dataset not generated by the authors' method (integration and differential equation problems from math textbooks or other sources) if possible.\", \"p3: Why is it important to have a generator that produces the four expression trees in p3 with equal or almost equal probabilities? Do you have any semi-formal or informal justification that the distribution of such a generator better matches the kind of expressions arising in the real world?\", \"p4: f(x)/x)) ===> f(x)/x)\", \"\\\"If this equation can be solved in c1\\\", p5: How realistic is this assumption?\", \"p5: 1/2 e^x(...) ===> 0 = 1/2 e^x(...)\", \"p5: If you have a thought or an observation on the impact of each of the data-cleaning steps in Section 3.4, I suggest you to share this in the paper.\", \"p6: Why did you remove expressions with more than 512 tokens?\", \"p6: compare to ===> compared to\", \"p7: Would you put the reminder of the size of the training set in Section 4.4? It only mentions that of the test set currently.\", \"p8: 1-(4x^2 ===> (1-(4x^2\"]}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors use a Transformer neural network, originally architected for the purpose of language translation, to solve nontrivial mathematical equations, specifically integrals, first-order differential equations, and second-order differential equations. They also developed rigorous methods for sampling from a large space of relevant equations, which is critical for assembling the type of dataset needed for training such a data-intensive model.\\n\\nBoth the philosophical question posed by the paper (i.e. can neural networks designed for natural language sequence-to-sequence mappings be meaningfully applied to symbolic mathematics) and the resulting answer (i.e. yes, and such a neural network outperforms SOTA commercially-available systems) are interested in their own right, and together make a strong case for paper acceptance.\", \"details_appearing_in_the_openreview_comments_which_should_be_explicitly_specified_in_the_paper_before_publication\": \"1) How large was the generated training set (40M), and how does this compare to the space of all equations under consideration (1e34).\\n2) The authors employ beam search in a non-standard manner, where they check for appearance of the equation solution among all of the generated candidates, rather than selecting the top-1. The fact that the reported accuracy with width-10 and width-50 beam searches are in effect measuring top-10 and top-50 accuracy should be clearly stated.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"It is rather interesting for a humble academic to review this paper. It already has a discussion, which I find very valuable, and many tweets and social media exposure and endorsements. It is onerous to review in this setting.\\n\\nThe paper makes a valuable contribution. The adversarial discussions in this website and the unhelpful hype can in this case be addressed to some extent by the authors. I will start with discussing this. Clearly, the title is too broad. This is not deep learning for symbolic mathematics. In no way does this paper address the essence of what is understood by \\\"symbolic mathematics\\\". What the authors address is mapping sequences of discrete quantities to other sequences of discrete quantities. The sequences in this paper correspond to function-integral i/o sequences, and 1st/2nd ODEs-function i/o sequences. I will leave it to the authors to come up with a more informative title, but something like deep learning or transformers for symbolic (1d) integration and simple ODEs with be far more accurate.\\n\\nTo hammer this point, note that Section 3 discusses removing \\\"invalid\\\" expressions: log(0) or sqrt(-2). However, it is the manipulation of infinity and imaginary numbers that could be considered to be one of the greatest achievements of symbolic mathematics over the last couple of hundred years. It is reasonable to expect neural nets to do this one day, because humans can, but this should come with results. It's too early to make the claim in the paper title.\\n\\nSentences such as \\\"This suggest (sic) that some deeper understanding of mathematics has been achieved by the model.\\\" and \\\"These results are surprising given the incapacity of neural models to perform simpler tasks ...\\\" are speculative, potentially inaccurate and likely to increase hype. This hype is not needed.\\n\\nHype and over-claiming aside, I did enjoy reading this paper. The public commenters have already asked important questions about methodology and related work on neural programming that the authors have addressed in comments. I look forward to these being incorporated in the revised pdf.\\n\\nA big part of the paper is about generating the datasets, and I therefore sympathise with the comment about requesting either a dataset release or the generating code. I see no obvious ethical concerns in this case, and the authors have already kindly offered to do this. This is a commendable and important service to our community and for this alone I would be inclined to vote for acceptance at ICLR.\\n\\nThe paper is clear and well written. However (i) it would be good to show several examples of input and output sequences (as done already in this website) and (ii) the Experiments section needs work. I'll expand on this next.\\n\\nThe seq2seq transformer with 8 heads, 6 layers and dimensionality 512 is a sensible choice. The authors should however explain why they expect this architecture to be able to map the sequences they adopt. That is, it is well known that a deep neural network is just a skeleton for an algorithm. By estimating the parameters, we are coming up with (fitting) the algorithm for the given datasets. What is the resulting algorithm? Why are 6 layers enough? 
Here some visualization would be helpful. See for example https://arxiv.org/pdf/1904.02679.pdf and https://arxiv.org/pdf/1906.04341.pdf For greater understanding of the problem, it may be useful to also try sparse transformers eg https://arxiv.org/abs/1805.08241\\n\\nBeam search is a crucial component of the current solution. However, the authors simply cite Koehn 2004 for this. First, that work used language models to compute probabilities for beam search. I assume no language models are used in this case. What I'm getting to is that there are not enough details about the beam search in this paper. The authors should include pseudocode for the beam search and give a few examples. The paper (even better thesis) of Koehn is a good template for what should be included. This is important and should be explained. \\n\\nFor Mathematica, it would be useful to state it does other things and has not been optimized for the two tasks addressed in this paper only. It would also be useful, now that you have more time, to run it for a week or two and get answers not only for 30s but also for 60s. How often does it take longer than 30s? How do you score it then?\\n\\nPlease do include train and test curves. This would be helpful too. I will of course consider revising my score once the paper is updated. \\n\\nThanks for constructing this dataset and writing this paper. It is very interesting and promising.\"}",
"{\"comment\": \"Thank you for your reply. That all makes perfect sense.\\n\\nIf the value of a wider beam search lies more in providing more plausible hypotheses than in merely maximizing the likelihood of the top output, I wonder if it might be possible to further improve accuracy using a different beam search heuristic. For example, by trying to encourage a diversity of candidates rather than merely the top-k most probable.\", \"title\": \"re: Two Points of Clarification\"}",
"{\"comment\": \"Thank you for your questions!\\n\\n1) The model has the same input and output space. A number like 14 is represented as \\\"[INT+ ; 1 ; 4]\\\" (i.e. by 3 tokens). The differential equation \\\"y'-100=0\\\" is represented as \\\"[SUB ; Y' ; INT+ ; 1 ; 0 ; 0]\\\" and the output will be \\\"[ADD ; MUL ; INT+ ; 1 ; 0 ; 0 ; x ; c]\\\" for \\\"100x + c\\\". So the model can receive and generate arbitrary integers. What we meant by the {-5 .. 5} generation range, is that in the initial representation of trees (before simplification), their leaves only have integer values in {-5 .. 5}. However, it is possible to have an initial expression like \\\"y'-5*5*4=0\\\" that will be simplified to \\\"y'-100=0\\\". This is why you can see examples in the paper with integers larger than 5. Adding this restriction in the generation allows us to reduce the size of the problem space, and to avoid having too many expressions with huge integers in the training set (these expressions can always be generated, but are less likely, and we found it useful as equations with huge integers are usually not very interesting or difficult to solve).\\n\\n2) We actually do the later approach. In machine translation, the former approach (i.e. taking the most probable of the beam) makes indeed more sense as there is no clear way to verify the correctness of the translation. But in our case, since we can quickly verify a solution by plugging it into the equation it has to verify, we do the second approach and consider all hypotheses in the beam until we find a valid one. Evaluating how often the single most probable output is correct is interesting, and we did not try it before. We just tried for first order differential equations, and found that using a beam size of 10 and testing only the best hypothesis slightly improves the performance, but not by much (about 0.5% over beam size 1), which suggests that it is important at test time to explore more than one option.\", \"title\": \"re: Two Points of Clarification\"}",
"{\"comment\": \"Thank you for your comment.\", \"you_write\": \"\\\"of course if a model is over-saturated with data and only tested on data from the same distribution and domain, one will not be able to assess whether the model is just memorizing the data or is it actually learning to do something interesting\\\". Although this is true for small problem spaces, it will not happen here. As we show in Section B of the appendix, there are over 1e11 expressions with five internal nodes, 1e23 with ten internal nodes, and 1e34 with 15 internal node. We use a training set of 4e7 equations with up to 15 internal nodes. Over-saturation with data in that setting is simply out of the question.\\n\\nTo address your concern, we ran an additional experiment with a small model composed of 2 layers of dimension 128. It is clearly impossible for such a model to memorize a training set of 40M equations. With this model, we obtain an accuracy of 91.0% on a holdout test set, and 95.6% using a beam search of size 10. On a small subset of the training set, our model obtains the same accuracy, which shows that there was no overfitting, and that the model was properly able to generalize beyond the training set.\\n\\nWe agree that it is important to compare with the SOTA. However, the SOTA in symbolic computation is held by computer algebra systems like Matlab and Mathematica, and not by neural tree-structured models. This is why we compare against these computer algebra systems. In fact, neural models (tree-structured or not) have never been tested on the tasks of function integration or differential equation solving (and not checking) before, so they cannot be the SOTA here. Tree architectures are discussed in the related work section of our paper. But as we explain, in mathematics they have mostly been used for arithmetic calculations (including logic) and for classification. This is a very different problem than what we are trying to solve.\\n\\nYou write \\\"Data of arbitrary size and depth can be generated using any reasonable automated data generation method.\\\". Generating data of arbitrary size is not a trivial problem. The method you propose amounts to performing local random changes on a small set of mathematical expressions gathered from Wikipedia. We believe this is not a satisfactory method. As mentioned in our previous response, such a method will inevitably generate biased equations with expressions centered around initial equations. In our case, we propose an elaborated technique to generate unbiased expressions (Section C of the appendix) where all trees have the same probability of being generated.\\n\\nBesides, the general form for a n-th order differential equation in your paper is wrong: what you gave is the form for linear differential equations, not the general form. As a result, your approach can only generate linear differential equations which is again a simpler problem than the general case. Our approach (Sections 3.2, 3.3, C and D) presents a way to generate arbitrary differential equations of the first and second order, and to the best of our knowledge, such an approach has never been proposed before, in the machine learning or any other community.\", \"title\": \"regarding generalization\"}",
"{\"comment\": \"Thank you for your response.\\n\\nUsing equations of a limited depth for training is indeed not a limitation of the data generation and rather is for the purpose of testing the model's generalizability performance to higher complexity beyond training data. Data of arbitrary size and depth can be generated using any reasonable automated data generation method. Of course if a model is over-saturated with data and only tested on data from the same distribution and domain, one will not be able to assess whether the model is just memorizing the data or is it actually learning to do something interesting. \\n\\nMoreover as shown in our paper and also other papers [1] (it seems like this reference was also not cited), [2] tree-structured models are the state of the art for symbolic math and logic and it is good practice to compare the performance of your proposed model with the state-of-the-art and back-up the claim that there is no need for a tree-structured model through experiments. In fact all the mentioned papers compare tree structured models against seq2seq models and show that tree-structured models outperform them (it might be worth mentioning that all models including seq2seq will be near perfect on data that is similar to training data). It is mentioned in your paper that you are focusing specifically on models used for NLP, however, in natural language the tree-structure is not inherent to the data itself and should be extracted using an external parser but in mathematics and logic the tree-structure is inherent to the data and ignoring it results in a loss in generalization performance. Vanilla seq2seq models might perfectly memorize the data but they will suffer in performance once the data becomes more complex (and if this is not the case, it will be nice to actually show it). Therefore, testing on larger trees or more complex datasets is not in any case only for problems that have access to small datasets, rather it is a test of model's generalizability.\\n\\n\\n[1] Richard Evans, David Saxton, David Amos, Pushmeet Kohli, Edward Grefenstette \\\"Can Neural Networks Understand Logical Entailment?\\\", ICLR 2018\\n[2] Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, and Charles Sutton. \\\"Learning continuous semantic representations of symbolic expressions.\\\", ICML 2017\", \"title\": \"generalizability concerns\"}",
"{\"comment\": \"Very interesting work, I have a few points which I feel are a bit unclear in the current version.\\n\\n1) What is the space of output tokens that the model can emit? Section 4.1 describes a set of tokens used to create the dataset, but this is restricted to numeric values in the range {-5,...,5}. Presumably a similar restriction does not apply to the outputs of the model, given the examples with a '9' in Table 4. In table 3, we see a solution with '14' as one the scalar values. Is this emitted as a single token, as a concatenation of '1' and '4', or as an expression like '7 * 2' which is then simplified?\\n\\n2) In sections 4.4 and 4.5, is accuracy calculated by checking whether the single most probable output found by beam search is correct, or if any of the top n outputs are correct? The former seems like the natural way to evaluate the model, but this passage from section 6 seems to suggest that it may be the latter: \\\"However, proposed hypotheses are sometimes incorrect, and considering multiple beam hypotheses is often necessary to obtain a valid solution. The validity of a solution itself is not provided by the model, but by an external symbolic framework (Meurer et al., 2017).\\\" If the latter, how often is the single most probable output correct?\", \"title\": \"Two Points of Clarification\"}",
"{\"comment\": \"Thank you for the reference! We are considering testing on alternative datasets, and the ones provided by Rubi seem indeed interesting as they come from a different distribution.\", \"title\": \"yes\"}",
"{\"comment\": \"There was indeed an error in one of the trees, thank you for spotting it! We will fix it in the revised version of the paper.\", \"title\": \"indeed\"}",
"{\"comment\": \"Thank you for your comment!\\n\\nIn practice, we use 40M equations for each task. We will add details about our datasets (number of equations, average number of nodes, operators, etc.) in the updated version of the paper.\\n\\nWhat we observe during training is that with smaller training sets the training accuracy is better (i.e. it is easier for the model to overfit on small training sets), but the model does not generalize as well and test accuracy is worse (which is typically what we observe in machine translation).\", \"title\": \"dataset size\"}",
"{\"comment\": \"Thank you for your message! Yes, we are planning to release the code after the review process.\", \"title\": \"yes\"}",
"{\"comment\": \"Thank you for these references. We will add them in the updated version of the paper.\\n\\nHowever, we respectfully disagree with your statements: the methodology and the tasks we tackle in our paper are very different from what you propose.\\n\\nFirst, we are working on very different tasks. You present an approach to check that a given function is a valid solution of a differential equation. This is a binary classification task, which is arguably a much easier problem than actually generating the solution from scratch, like we do.\\n\\nSecond, your approach amounts to generating data by performing local random changes on a small set of mathematical expressions gathered from Wikipedia. An issue with this approach is that the resulting problem space is very localized and biased around the initial equations. In our case, we propose a sophisticated approach to generate random equations from scratch. Our generative process ensures that all trees are generated with the same probability over a very large space (Section 2, B and C of the appendix). This approach allows us to generate arbitrarily large expressions in a uniform way. We train and evaluate our model on expressions of up to 300 internal nodes, while your paper only considers equations with up to 15 internal nodes. \\n\\nThe dataset in your paper is composed of 7000 differential equations, while our approach allows us to generate datasets that are several orders of magnitude larger. We train our models with 40000000 expressions, and could potentially use a lot more.\\n\\nGeneralization to larger trees is important when the training set is restricted to small equations. We have no such restriction since we can generate equations of arbitrary depths. In our case, the generalization problem lies in the size of the problem space (Section B).\\n\\nThe majority of studies in the field propose complex and dedicated architectures (like Tree-LSTMs) that are typically much slower and only applied to small datasets. One of the messages in our paper is that vanilla seq2seq models perform well on symbolic mathematics given enough data, and that dedicated architectures are not necessary. However, generating large datasets of differential equations is the real challenge, which was not addressed by previous works.\", \"title\": \"Previous work does not solve differential equations, only checks them, and does not present techniques to generate ODE datasets\"}",
"{\"comment\": \"This paper misses important papers that overlap significantly with the work.\", \"the_authors_need_to_provide_credit_where_due_and_closely_compare_this_with_the_following_works\": \"\", \"https\": \"//uclmr.github.io/nampi/extended_abstracts/arabshahi.pdf\\n\\nI don't see any new methodology beyond papers I mentioned above. In fact, they do worse: they don't do any extrapolation (testing beyond depth they trained on), and they only limited to symbolic evaluation.\", \"title\": \"Previous work does a better job and this is completely missed\"}",
"{\"comment\": \"Such an interesting work!\\nCan the code/implementation be available?\", \"title\": \"Can the code be available?\"}",
"{\"comment\": \"Interesting paper!\\n\\nMaybe I missed this somehow- what are the sizes of the training sets?\\n\\nAlso, it would be interesting to see train/test performance curves.\", \"title\": \"Training set details?\"}",
"{\"comment\": \"It would be interesting to include a test suite that was not generated using the same techniques as the training set to confirm that the method is able to generalize out of its training set.\", \"a_possibility_would_be_the_rubi_integration_suite_by_albert_rich\": \"https://rulebasedintegration.org/\", \"title\": \"Add independent test suite\"}",
"{\"comment\": \"The third tree on the furthest right is incorrect and does not match the equation presented in the text\", \"title\": \"Error in figure for expressions as trees\"}"
]
} |
HyxWteSFwS | Deep Interaction Processes for Time-Evolving Graphs | [
"xiaofu chang",
"jianfeng wen",
"xuqin liu",
"yanming fang",
"le song",
"yuan qi"
] | Time-evolving graphs are ubiquitous; examples include online transactions on an e-commerce platform and user interactions on social networks. While neural approaches have been proposed for graph modeling, most of them focus on static graphs. In this paper we present a principled deep neural approach that models continuous time-evolving graphs at multiple time resolutions based on a temporal point process framework. To model the dependency between latent dynamic representations of each node, we define a mixture of temporal cascades in which a node's neural representation depends not only on this node's previous representations but also on the previous representations of related nodes that have interacted with this node. We generalize LSTM on this temporal cascade mixture and introduce novel time gates to model time intervals between interactions. Furthermore, we introduce a selection mechanism that gives important nodes large influence in both $k$-hop subgraphs of nodes in an interaction. To capture temporal dependency at multiple time resolutions, we stack our neural representations in several layers and fuse them based on attention. Based on the temporal point process framework, our approach can naturally handle growth (and shrinkage) of graph nodes and interactions, making it inductive. Experimental results on interaction prediction and classification tasks -- including a real-world financial application -- illustrate the effectiveness of the time gate, the selection and attention mechanisms of our approach, as well as its
superior performance over the alternative approaches. | [
"deep temporal point process",
"multiple time resolutions",
"dynamic continuous time-evolving graph",
"anti-fraud detection"
] | Reject | https://openreview.net/pdf?id=HyxWteSFwS | https://openreview.net/forum?id=HyxWteSFwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"s5DRxvMycx",
"Hye3y1j9jH",
"Byl3LKt9sH",
"r1lL75ucsS",
"rkgVbVOqjS",
"Bkex8DIYoS",
"HkxlmTgAqS",
"SyeMN893cr",
"HkgAT1el5S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748786,
1573723875606,
1573718356041,
1573714461987,
1573712892023,
1573640007981,
1572896024217,
1572804137772,
1571975109711
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2424/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2424/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2424/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2424/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2424/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2424/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2424/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2424/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"All reviewers rated this paper as a weak reject.\\nThe author response was just not enough to sway any of the reviewers to revise their assessment.\\nThe AC recommends rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your review and valuable advice, we have added an overview diagram as the illustration of the whole pipeline of our method. Please see the new Figure 2 for details.\\n\\nTemporal Point Process is a powerful mathematical tool for modeling sequences of interactions[1]. The ability to discover correlations among interactions is crucial to accurately predict the future of a sequence given its past, i.e., what interactions are likely to happen next, when they will happen and between which participants. And the key point to characterize temporal point processes is via the conditional intensity functions \\u03bb(t). Formally, \\u03bb(t)dt is the conditional probability of observing an event in a small window [t, t + dt) given the history H(t) up to t and that the event has not happen before t.\\n\\nIn this paper, we adopt a neural approach named DIP to model the conditional intensity functions given all past graph-structured dependency history. \\n\\nAs the toy-example shown in Figure 2 , The pipeline of our work is as follows : \\n1. Propose a temporal dependency graph(TDG) concept to depict complex dependencies among time-evolving interactions. \\n2. Learn representation using DIP units, i.e. ,graph-structured LSTM with time gates to handle irregular time intervals. \\n3. Considering computational burden, when an interaction(event) happened, we only use its k-depth subgraph to compute their new representations(Fig2.d) which is similar to chain-LSTM training unfolded with max k steps and k-hops setting in static graph. \\nThe figure 3 gives an illustrative example for Selection and Fusion mechanism.\\n\\nAs for your question \\n1. It is unclear how and why the temporal point process can deal with growing/shrinking graph nodes and changing interactions ?\", \"ans1\": \"The above description show the whole pipelines.\\n\\n2. how does the DIP-UNIT handle the continuous graph changing?\", \"ans2\": \"Each time the graph changes, which means there are new interactions. According to Algorithm 2 in Appendix A, we can incrementally update the temporal dependency graph. Then, we feed the new temporal subgraph of the involved nodes into the neural network, and update the dynamic representations for them.\\n\\n3. What if the graph changes with an uneven speed?\", \"ans3\": \"The design of the time gates is intended to handle incoming interaction with different time intervals(i.e., uneven speed). Specifically, the time gates combined with forget gates control the information flow in and out to the memory cells of the DIP units. Then the final representation and intensity function not only have graph-structure information, but also frequencies and speed of an evolving graph.\\n\\n4. How large a graph could be ?\", \"ans4\": \"In this paper, we focus on time-evolving interactions, so the graph is growing larger and larger with time as long as a new interaction occurs. As for efficiency, Algorithm2 in Appendix A provides a way to construct k-depth subgraph incrementally with o(m+n) time complexity where m, n are the nodes in subgraphs. Meanwhile, there is a big advantage we can train our model in parallel since we only consider k-depth subgraphs while the baseline methods can only train and update the states sequentially.\\n\\n5. how fast its changes could be captured?\", \"ans5\": \"we are processing continues time-evolving graph. See the answer of question 2.\\n\\n\\n[1]DJ Daley and D Vere-Jones. 
An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. 2007.\"}",
"{\"title\": \"Response to Review 3-II\", \"comment\": \"we now answer your comments about DIP experiments.\\nIn our updated version, we provide a comparison with your suggested baseline JODIE and provide results analysis in 5.3.2 and 5.4.2. Meanwhile we investigate the effects of different k and L on interaction prediction and interaction classification(Appendix B.3).\", \"q5\": \"\\\"The authors include support for new nodes for interaction classification task but remove them for interaction prediction task which is strange. Is there a specific reason for this? What is the effect on the performance if new nodes are allowed in test? Further, why is interaction classification not compared with temporal baselines? All baselines produce embeddings and the authors mention that classification for this paper is independent of marker history. While the temporal baselines do not train for the task, the authors can train a second stage classifier with learned embeddings to perform classification\\\"\", \"a5\": \"yes, there is a specific reason that we don't compare all the baselines in the interaction classification task: a. the dataset in this task has a lot of unseen nodes so the transductive method like CTDNE can't fit in this task. b. all the baseline dynamic methods are only unsupervised version while our method,GCN and gbdt are end-to-end supervised methods\", \"q6\": \"it seems datasets in experiments does have non-bipartite case? Is this true or the method only works for bipartite case?\", \"a6\": \"Our method is built for modeling dynamic interactions. Although the datasets using in our experiments are heterogeneous graphs, it could be naturally applied to isomorphic graph such as citation graph. (Dynamic citation behaviors will construct the temporal dependency graphs.)\"}",
"{\"title\": \"Response to Review 2-II\", \"comment\": \"Q4: Regarding the experimental results: All models are trained on the same grid of embedding dimensions, but the proposed method is the only deep model. Hence, its maximum number of parameters can be up to 4x compared to the shallow models. How do the results look if all for models with comparable number of parameters (i.e., can the improvements be explained due to this difference)?\", \"a4\": \"Results analysis are given in the updated sec 5.3.2 and 5.4.2 .and we have new experiments about the effects of embedding size, k and L on the two tasks--interaction classification and interaction prediction (see Appendix B.3 B.4). Even with smaller embedding size or smaller value of k and L, our method still performs well. Meanwhile it doesn't mean larger k and L combination always give the best results. All the parameters are set according to performance at validation set.\"}",
"{\"title\": \"Response to Review 2\", \"comment\": \"Thank you for your detailed comments and suggestions. We have already updated our paper with a more clear toy example as shown in Fig.2. and Fig.3. Please see it and maybe help u understand details of our work better.\", \"now_i_will_answer_your_questions_as_follows\": \"\", \"q1\": \"However, I'm concerned about different aspects of the current version: The main contributions of the paper are a recurrent (LSTM-based) architecture to model the intensity function of a TPP, stacking multiple LSTM to form a deep architecture, and a temporal attention mechanism. However, none of these contributions on its own are particularly novel. For instance, prior work that introduces similar approaches include Recurrent networks to parameterize intensity functions: (Dai,2017),(Mei\\\\&Eisner,2017), (Trivedi, 2019), - Temporal attention: (Trivedi, 2019)\", \"a1\": \"The differences between our method and Mei \\\\& Eisner, 2017 are as follows: First, our work focus on time-evolving interaction events which is based on a multi-dimension point process with each\\nuser-item pair as one dimension while their work are mainly based on a one-dimensional point process and they only consider chain-structure dependencies among events without considering interaction. Second, They view their intensity function as a nonlinear function of chain-structure histories while our intensity function is a nonlinear function of representation based on complex graph-structured histories using DIP units. \\n As for the work in Dai2017, they use mutually-recursive RNNs and incorporate the participants\\u2019 embedding to capture the dynamics of coevolution. To capture co-evolution in time-evolving graph, we propose the selection Mechanism which is used to weight important interaction in their k-hop history subgraphs of current interaction. The weighting operation is based on mutual information among interactive k-depth subgraphs. In addition, simple RNN can't capture long history well.\\n As for the temporal attention method in Trivedi 2019, we have two attention operations applied in this paper but they have different purposes. The first one is a co-attention operation in the Selection Mechanism ( see Section 3.3.2 and Fig.3 ) . The Selection Mechanism is intended to select and weight important interaction in their k-hop history subgraphs of current interaction. Specifically, a co-attention operation is first used to capture mutual information among two k-hop history subgraphs of two interactive nodes. Then based on mutual information, adaptive gates are learned to weight each history node in their corresponding k-hop history subgraphs. However, the method in Trived 2019 only consider one-hop temporal neighbors when updating nodes' dynamic representation with a simple self-attention function.\", \"q2\": \"With regard to the model: The Log-likelihood function in Section 3.6.1 seems to be incorrect as the LL for a TPP would be $L = \\\\sum_{i:t_i \\\\leq T} \\\\log\\\\lambda(t_i) - \\\\int_0^T \\\\lambda(s)ds$, which is quite different from the equations in the paper. Is the LL in Section 3.6.1 the actual objective that has been optimized?\", \"a2\": \"We carefully checked the equation and found that we missed the log symbol for intensity which was a writing mistake. It was updated now. 
Additionally, the equation you gave here is for a one-dimensional point process, while our equation in Section 3.5.1 is for a multi-dimensional point process whose survival function is a summation over all possible interaction pairs (i.e., multiple dimensions), the same as in Trivedi 2019.\\n\\nQ3: Hence, the main novelty seems to lie in the stacked architecture and the particular combination of modules (which is of limited novelty). The experimental results are certainly interesting, but it would be important to provide a more detailed analysis of the model to get insights into the causes for these improvements.\\n\\nA3: It seems that there is a misunderstanding on this point. As explained above, our contributions are as follows (as shown in Fig. 2 and Fig. 3): 1. we define a temporal dependency graph (TDG); 2. we generalize the traditional chain-structured LSTM to a graph-structured LSTM with time gates (named DIP units) to model nodes' dynamic representations in the TDG; 3. we enhance the node representations with a novel selection method using a two-phase gating operation, and a fusion mechanism to integrate information from all layers; 4. state-of-the-art methods like DeepCoevolve and JODIE use an RNN-like equation to update node states incrementally, which limits parallel computation and causes efficiency problems. Moreover, simple RNNs cannot capture dynamics in long sequences well, let alone in complex interaction networks with a long duration. In contrast, our method uses k-depth subgraph history information to update representations, like chain-LSTM training unfolded for at most k steps and at most k hops in GCN. Meanwhile, the graph-LSTM itself has good properties for capturing long sequences well.\"}",
"{\"title\": \"Response to Review 3\", \"comment\": \"Thank you for your detailed comments and suggestions.\\nwe first answer your comments about DIP model itself.\", \"q1\": \"Do you also update cell states with selection mechanism? The DIP-UNIT equation in selection section does not show that update. Also, are the embeddings updated only during train or also during validation/evaluation?\", \"ans1\": \"we enhance the j-th layer dynamic representation and cell states by weighting the (j-1)-th hidden states(i.e. input for j-th layer) but not the cell states in (j-1)-th layer in current version ( where j=0 means input features). Please see our updated picture Fig.3 and Equation 7. The embedding is updated for every event during train/validation/evaluation(using the model learned on training data).\", \"q2\": \"The use of proposed Algorithm 2 is not well justified. (Part A:)Why does the author need coloring and hashing mechanism instead of simpler BFS/random walk routine to collect previous interactions?(Part B:) Also, is this subgraph created for each event or it is computed offline during training? (Part C:)Further, the subgraph used for selection mechanism same as subgraph used for backtracking in LSTM?\", \"ans2\": \"It is a good question. Please see our updated version with a more clear toy example in Fig.2. The pipeline of our work is as follows :\\n1. Propose a temporal dependency graph(TDG) concept to depict complex dependencies among time-evolving interactions. \\n2. Learn representation using DIP units, i.e. ,graph-structured LSTM with time gates to handle irregular time intervals. \\n3. Considering computational burden, when an interaction(event) happened, we only use its k-depth subgraph to compute their new representations(Fig2.d) which is similar to chain-LSTM training unfolded with max k steps and k-hops setting in static graph. \\n4. When updating dynamics of an event, how to obtain k-depth subgraph is a practical problems. The purpose of coloring method is to find TDG dependencies among interactions and then we can construct k-depth TDG incrementally for each event. So the coloring operation is a pre-step for constructing TDG graph. \\nAs for your question, A. The bfs/random walk work only on a graph which already exists, but in our situation we don't know what TDG is (as graph is growing up) and Coloring is a pre-step to help construct TDG. B. Yes, each event needs its temporal subgraph information on TDG and we can train it in parallel. This is quite different from dynamic methods like deep coevol, JODIE which use RNN-like equations to update nodes states incrementally and thus have parallel and efficiency problems when processing training data with a long time duration. C. The selection methods utilize \\\"mutual information\\\" between two subgraphs of interactive nodes to adjust importance of nodes. All the calculation is based on k-depth temporal dependency graph. The subgraph used for selection mechanism is of course the same as subgraph used for backtracking in LSTM.\", \"q3\": \"is it true that the training is done in order of ColorGraphSeq or is it done in order of dataset? How does the authors capture dependencies across dataset in later case?\", \"ans3\": \"As answered in ANS2, once we obtain the k-depth subgraph for an event(which means we only consider max k-depth history information as we do in chain-lstm), we can train it parallelly. The purpose of ColorGraphSeq is only a pre-step for constructing a TDG from a collected training data. 
For new datasets, we can first incrementally update and find dependencies between the new data and the old data (similar to lines 3 to 14 in Algorithm 1) and then obtain updated k-depth subgraphs for the new interactions.\\n\\nQ4: How do the scaling parameter and alpha affect the performance, and what are their roles?\\n\\nANS4: The purpose of fusion is to utilize all the dynamic representations of nodes at different layers (i.e., at different time scales) to produce a final representation. The alpha is a learned parameter used to weight the node representations at different layers before summing them. The scaling parameter is also a learned parameter and aids the optimization process, similar to ELMo.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper focuses on the problem of modeling interaction processes over dynamically evolving graphs and perform inference tasks such future interaction prediction and interaction classification. Specifically, the paper proposes a temporal point process based formulation to model the interaction dynamics where the conditional intensity function is parameterized by a recurrent network. With an occurrence of any event, the recurrent architecture updates the embeddings of the nodes involved in that event which then affects the intensity function and hence the likelihood of future events. The paper uses intensity based likelihood to train for future interaction prediction task while cross-entropy based loss for classification task. The paper demonstrates the efficacy of the method through experiments across multiple datasets and compare against representative baselines and further provides ablation analysis for the proposed architecture.\\n\\nThe paper demonstrates markedly improved empirical performance on multiple datasets and also performs the task of interaction classification which is not seen in recent works on evolving graphs, which are plus points. However, there are several concerns with the overall work that makes this paper weaker: (1) The main concern is with the novelty and more importantly the justification/analysis of the contributions proposed approach. (2) Further, while the ablation study provides some insights into architecture, it is not adequate (3) The paper misses comparison with a very important and recently proposed baseline, JODIE [1].\", \"main_comments\": [\"--------------\", \"The paper leverages existing techniques built for learning over evolving graphs and augments it with three modifications: explicit use LSTM with time gate, stacked LSTM approach with fusion (Aggregation) and attention mechanism to select important neighbors to contribute to embedding update. The use of LSTM with time gate and fusion mechanism is very incremental contribution. The attention mechanism proposed here is novel compared to existing works. However, there is very little justification or analysis provide or either of the contributions. This is big drawback of this contribution.\", \"For instance, the authors mention that stacked LSTM is used to capture multiple resolution. Can they provide some analysis or empirical demonstration that this actually happens? Also, the authors mention they use K in range of {1,2,3,4} but do not provide details what was useful for each dataset and how is it useful. How does the scaling parameter and alpha affect the performance and what are their roles? Also, what does superscript 'task' signify?\", \"Similarly, they propose coattention mechanism with adaptive gate functions but does not provide any analysis of why they are useful and what characteristics they capture in the data that allows it to select most relevant neighbors. Is the attention mechanism temporally dependent?\", \"The authors perform ablation studies by switching off each component as a whole but considering the way this architecture is built, this is not a very useful exercise except knowing that each component contributes to the performance. A more detailed analysis and ablation is required. 
For instance, can the authors show performance with different K and how it deteriorates/improves with it? Also, for stacked LSTM case, the authors show what happens when you use last layer, but what happens if the authors use only one layer (I guess this is K=1?) or don't use residual connections? When the time gate is switched off, does the authors also remove deltas from intensity function? what happens in this scenario? How does subgraph depth affect the quality of performance? What happens if authors don't sue adaptive gate functions?\", \"Figure 2 shows an example of bipartite graph, however, it seems datasets in experiments does have non-bipartite case? Is this true or the method only works for bipartite case?\", \"The use of proposed Algorithm 2 is not well justified. Why does the author need coloring and hashing mechanism instead of simpler BFS/randomwalk routine to collect previous interactions? Also, is this subgraph created for each event or it is computed offline during training? Further, the subgraph used for selection mechanism same as subgraph used for backtracking in LSTM?\", \"Further, is it true that the training is done in order of ColorGraphSeq or is it done in order of dataset? How does the authors capture dependencies across dataset in later case?\", \"Do you also update cell states with selection mechanism? The DIP-UNIT equation in selection section does not show that update. Also, are the embeddings updated only during train or also during validation/evaluation?\", \"The authors only present the results as-is without any insights on the performance of DIP model vs others and why they are able to demonstrate good performance. It is highly desired that authors add discussion section for each set of results to provide such information\", \"The authors include support for new nodes for interaction classification task but remove them for interaction prediction task which is strange. Is there a specific reason for this? What is the effect on the performance if new nodes are allowed in test? Further, why is interaction classification not compared with temporal baselines? All baselines produce embeddings and the authors mention that classification for this paper is independent of marker history. While the temporal baselines do not train for the task, the authors can train a second stage classifier with learned embeddings to perform classification\", \"The authors do not compare with recently proposed JODIE [1] which is a big miss. The comparison is required as it also models interaction processes in a novel way by actually predicting the next embedding directly instead of modeling the intensity. An empirical comparison and discussion of this method is required to compare with various state-of-art methods.\"], \"minor\": \"-------\\n\\n- The authors need to use better and consistent notations. Also, as the overall approach uses similar flow as previous papers such as DeepCoevolve, it is recommended that the authors make the presentation simpler to position it clearly with existing works. On page 3, section 3.2 both bold-face and normal letters are used as vectors. Is $\\\\hat{x}_{u(t)}$ a vector?\\n\\n- Please provide numbers to equations for better referencing\\n\\n[1] Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks, Kumar et. al. KDD 2019\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper is concerned with modeling continuous time-evolving graphs, for which it proposes to combine temporal point processes with a recurrent architecture to learn dynamic node representations. In addition, the paper proposes to stack multiple recurrent layers (to obtain node representations over multiple time scales) and use a temporal attention mechanism (to select relevant past interactions).\\n\\nModeling temporal and dynamic graphs is an important problem with many applications in ML and AI. The focus of the paper, i.e., to develop improved models by combining TPPs and representation learning, is a promising approach to this task and fits well into ICLR. Furthermore, the presented experimental results are promising.\\n\\nHowever, I'm concerned about different aspects of the current version: The main contributions of the paper are a recurrent (LSTM-based) architecture to model the intensity function of a TPP, stacking multiple LSTM to form a deep architecture, and a temporal attention mechanism. However, none of these contributions on its own are particularly novel. For instance, prior work that introduces similar approaches include \\n- Recurrent networks to parameterize intensity functions: (Dai, 2017), (Mei & Eisner, 2017), (Trivedi, 2019), ... \\n- Temporal attention: (Trivedi, 2019) \\n\\nHence, the main novelty seems to lie in the stacked architecture and the particular combination of modules (which is of limited novelty). The experimental results are certainly interesting, but it would be important to provide a more detailed analysis of the model to get insights into the causes for these improvements.\", \"with_regard_to_the_model\": \"The Log-likelihood function in Section 3.6.1 seems to be incorrect as the LL for a TPP would be L = \\\\sum_{i:t_i \\\\leq T} \\\\log\\\\lambda(t_i) - \\\\int_0^T \\\\lambda(s)ds, which is quite different from the equations in the paper. Is the LL in Section 3.6.1 the actual objective that has been optimized?\", \"regarding_the_experimental_results\": \"All models are trained on the same grid of embedding dimensions, but the proposed method is the only deep model. Hence, its maximum number of parameters can be up to 4x compared to the shallow models. How do the results look if all for models with comparable number of parameters (i.e., can the improvements be explained due to this difference)? It would also be good to get results on commonly used benchmarks (e.g. data used in DyRep or NeuralHawkes) to make the results of the new model comparable to prior experiments and datasets.\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers modeling continuous time-evolving graphs using a temporal point process framework. It introduces a time gate in the LSTM to handle the temporal dependency and uses an attention mechanism to select relevant nodes to learn the underlying dynamics.\\n\\nOverall, this paper is not easy to understand in detail. Firstly, it is unclear how and why the temporal point process can deal with growing/shrinking graph nodes and changing interactions. Secondly, how does the DIP-UNIT handle the continuous graph changing? What if the graph changes with an uneven speed? Thirdly, how do all the small pieces work together to achieve the goal of the paper? An overview diagram or a toy example would greatly improve the readability of the paper. \\n\\nBesides, what is the computational cost of the proposed network? How large a graph could be and how fast its changes could be captured?\"}"
]
} |
rJleKgrKwS | Differentiable learning of numerical rules in knowledge graphs | [
"Po-Wei Wang",
"Daria Stepanova",
"Csaba Domokos",
"J. Zico Kolter"
] | Rules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, a method called Neural LP has recently been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., “People younger than 18 typically live with their parents”. We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure the efficiency of such an extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets. | [
"knowledge graphs",
"rule learning",
"differentiable neural logic"
] | Accept (Poster) | https://openreview.net/pdf?id=rJleKgrKwS | https://openreview.net/forum?id=rJleKgrKwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"2MTnG4OKuk",
"rJx2GYj3oB",
"BylaDvshjB",
"Skl_X6F3ir",
"BJgJZYS9ir",
"r1xPX_S5jS",
"r1e3Nvr5sH",
"SJx_7NveqH",
"BJekrJ9atS",
"S1eJGUy6Yr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748755,
1573857556113,
1573857124896,
1573850400007,
1573701879355,
1573701663434,
1573701427528,
1572004895978,
1571819318760,
1571776006726
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2423/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2423/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2423/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2423/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2423/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2423/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2423/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2423/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2423/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a number of improvements on existing approaches to neural logic programming. The reviews are generally positive: two weak accepts, one weak reject. Reviewer 2 seems wholly in favour of acceptance at the end of discussion, and did not clarify why they were sticking to their score of weak accept. The main reason Reviewer\\n 1 sticks to 6 rather than 8 is that the work extends existing work rather than offering a \\\"fundamental contribution\\\", but otherwise is very positive. I personally feel that\\na) most work extends existing work\\nb) there is room in our conferences for such well executed extensions (standing on the shoulders of giants etc).\\n\\nReviewer 3 is somewhat unconvinced by the nature of the evaluation. While I understand their reservations, they state that they would not be offended by the paper being accepted in spite of their reservations.\\n\\nOverall, I find that the review group leans more in favour of acceptance, and an happy to recommend acceptance for the paper as it makes progress in an interesting area at the intersection of differentiable programming and logic-based programming.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for answering -- about 2, I mean the extracted rules still do not support less trivial operations such as aggregations (sum, mean, ..) and math operations (such as the sum). There is some work investigating how to learn these using neural architectures, such as https://arxiv.org/abs/1808.00508 . It would have been great to see them in here but I understand it can be tricky.\"}",
"{\"title\": \"RE: Response to AnonReviewer1\", \"comment\": \"Thanks for the reply and also the comments about handling existential quantifiers. Given this, I think I'm fine with the limitations in the current work.\"}",
"{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the response! However, the response does not resolve my concern about whether the task is significant enough in practice. Also the experiment part is not updated accordingly. Therefore I will not change the rating.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We appreciate the comments of the reviewer. Please see our reply below.\\n\\n1) - \\\"... the current proposed method can only deal with one form of numerical predicate, which is numerical comparison.\\\"\\n\\nApart from simple numerical comparison we are also able to deal with complex classification operators that aggregate numerical attributes using linear functions, where the threshold value is selected in a systematic fashion, (see Classification Operators) as well as negated atoms (see Negated Operators on p. 6). We note that such rules are indeed limited to some extent, but they still capture a rather expressive fragment of answer set programs with restricted forms of external computations [Eiter et al., 2012].\\nBelow we present examplar rules learned by our framework, which are not restricted to numerical comparisons.\\n\\n2a) - \\\"The paper does not do a great job of convincing the reader that the problem it is trying to solve is an important matter, or the proposed method is indeed effective in some applications.\\\"\\n\\nWith the rapid development of industrial and scientific knowledge graphs, we believe (and agree with the Reviewer #2) that learning rules that involve multiple modalities is an important and relevant problem. Indeed, such rules can not only be used for data cleaning and completion, but they are also themselves extremely valuable assets carrying human-understandable structures that support both symbolic and subsymbolic representations and inference.\\n\\n2b) - \\\"The authors should try to find a real-world domain which can really demonstrate the effectiveness of the method.\\\"\\n\\nTo the best of our knowledge Freebase and DBPedia are the only standard KGs with numerical values [Garcia-Duran et al., 2018] used for the evaluation in state-of-the-art works. This is the reason why we have selected and used them for our experiments. The impact of our approach might appear to be rather modest, since these KGs still have only a limited amount of numerical information. Therefore, to demonstrate the power of our approach further, we have also performed evaluation on the synthetic datasets. We would be happy to learn about other datasets suitable for our experiments.\\n\\n3) - \\\"The experiment section lacks more detailed analysis which can intuitively explain how well the proposed method performs on the benchmarks. A good place to start with is to visualize (print out) the learned numerical rules and see if they make any sense.\\\"\\n\\nAccording to the Reviewer's comment we will extend Section 5 on experimental results by showing more detailed analysis. In particular, we will present the following examples of the learned rules from the considered (real-world and synthetic) datasets:\\n\\n- FB15K:\\n\\tdisease_has_risk_factors(X,Z) :- f(X), symptom_of_disease(X,Y), disease_has_risk_factors(Y,Z)\\nThe rule states that symptoms with certain properties (described by the function f) typically provoke risk factors inherited from diseases which have these symptoms. Here, the function f is the sigmoid over a linear combination of numerical properties of X.\\n\\n- DBPedia:\\n\\tdefends(X,Z) :- primeMinister(Z,Y), militaryBranch(Y,X), f(Y)\\nThis rule states that prime ministers of countries with certain numerical properties (described by the function f), are supported by military branches of the given country. 
The function f is the sigmoid over a linear combination of numerical properties of Y.\\n\\n- Numerical1:\\n\\tprefer(X,Y) :- isNeighbourTo(X,Y), hasOrder(X,Z1), hasOrder(Y,Z2), Z1>Z2, max{Z2:hasOrder(Y,Z2)}\\nThis rule with a comparison operator states that a person X prefers neighbours with the maximal order that is less than X's.\\n\\n- Numerical2:\\n\\tprefer(X,Y) :- isNeignborTo(X,Y), hasBalance(Y,Z1), borrowed(Y,Z2), f(Y)\\nThis rule states that neighbours with the largest difference between the balance and the borrowed amount are preferred.\\nMore precisely, here f selects among all X those entities, for which the difference between the balance and the borrowed amount is maximal.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We appreciate the Reviewer's comments, which help us to improve the paper. In the final version of the paper we will take them into consideration. In the following we reply to the main concerns of the reviewer.\\n\\nQ1 - \\\"... how general this approach would be? ...if rules contain quantifiers, how would this be extended?\\\"\\nThe extendibility of the Neural LP framework is a very important and relevant question, which we also mentioned explicitly as a possible future work direction.\\nIn the rules that we support in our framework all variables are universally quantified. While learning rules with existential quantifiers in rule heads is a difficult endeavor in general, even for classical relational learners, the Neural LP framework in principle can be extended to support them as follows: For every relation p, we can create a fresh diagonal Boolean matrix $M_{\\\\exists p}$, which has 1 at the position (i,i) iff there exists an entity j, such that p(i,j) is in the KG (similar as for classification operators discussed on p. 5). Incorporating these matrices into the framework and filtering rules that have the respective relations in the head should allow us to extract the target rules. Yet analysing how well such approach performs in practice is still an open problem, which we leave for future work. In any case, we will discuss the extendability of the framework in the paper. \\n\\nMinor comment 1) - 4.1, \\\"O(n^2/2) -- just put O(n^2) or simply write as n^2/2\\\".\\nThis is correct, thank you. We will fix this in the final version.\\n\\nMinor comment 2) - \\\"How are the rules from in Eq (2)? i.e., how is \\\\beta_i selected for each i? In the extreme case it would be all the permutations.\\\"\\n\\nTo avoid exponential enumeration of the predicate orderings sophisticated transformation of the rules has been applied in the Neural LP framework (see [Yang et al. 2017]).\\n\\nMinor comment 3) - \\\"I would suggest a different name other than Neural-LP-N...\\\"\\nThanks for this suggestion. We will certainly consider renaming the approach and fixing this in Table 2.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We are really thankful for the positive feedback. Here we give detailed answers to the Reviewer's concerns.\\n\\n1) - \\\"... in Table 2, AnyBurl ... yields better Hits@10 values than Neural-LP-N, but the corresponding bold in the results is conveniently omitted.\\\"\\n\\nThanks for pointing this out! We will make the presentation of the results consistent by highlighting the respective number.\\n\\n2) - \\\"... the expressiveness of the learned rules can be somehow limited,...\\\"\\n\\nWe remark that our framework supports rules with negation, comparison among numerical attributes and classification operators, where linear functions over attributes can be expressed. Such rules capture a fragment of answer set programs, where a limited form of aggregation [Faber et al., 2011] and restricted external computation functions [Eiter et al., 2012] are allowed. While these rules might not cover all possible knowledge constructs, they are still valuable and rather expressive for encoding correlations among numerical and relational features. Moreover, to the best of our knowledge they have not been directly supported by previous works on rule learning.\\n\\n3) - \\\"Missing references - authors may want to consider citing https://arxiv.org/abs/1906.06187 ...\\\"\\n\\nThanks for referring us to this important work! We will certainly add this reference to the paper.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an extension of NeuralLP that is able to learn a very restricted (in terms of expressiveness) set of logic rules involving numeric properties. The basic idea behind NeuralLP is quite simple: traversing relationships in a knowledge graph can be done by multiplicating adjacency matrices, and which rules hold and which ones don't can be discovered by learning an attention distribution over rules from data.\", \"the_idea_is_quite_clever\": \"relationships between numeric data properties of entities, such as age and heigh, can also be linked by relationships such as \\\\leq and \\\\geq, and those relations can be treated in the same way as standard knowledge graph relationship by the NeuralLP framework.\\n\\nA major drawback in applying this idea is that the corresponding relational matrix is expensive to both materialise, and use within the NeuralLP framework (where matrices are mostly sparse). To this end, authors make this process tractable by using dynamic programming and by defining such a matrix as a dynamic computation graph by means of the cumsum operator. Furthermore, authors also introduce negated operators, also by defining the corresponding adjacency matrices by means of computation graphs.\\n\\nAuthors evaluate on several datasets - two real world and two synthetic - often showing more accurate results than the considered baselines.\\n\\n\\nOne thing that puts me off is that, in Table 2, AnyBurl (the single one baseline authors considered other than the original NeuralLP) yields better Hits@10 values than Neural-LP-N, but the corresponding bold in the results is conveniently omitted.\\n\\nAnother concern I have is that the expressiveness of the learned rules can be somehow limited, but this paper seems like a good star towards learning interpretable rules involving multiple modalities.\", \"missing_references___authors_may_want_to_consider_citing_https\": \"//arxiv.org/abs/1906.06187 as well in Sec. 2 - it seems very related to this work.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposed several extensions to the Neural LP work. Specifically, this paper addresses several limitations, including numerical variables, negations, etc. To efficiently compute these in the original Neural LP framework, this paper proposed several computation tricks to accelerate, as well as to save memory. Experiments on benchmark datasets show significant improvements over previous methods, especially in the case where numerical variables are required.\\n\\nI think overall the paper is written clearly, with good summarization of existing works. Also I like the simple but effective tricks for saving the computation and memory.\\n\\nOne main concern is, how general this approach would be? As it is a good extension for Neural LP, it is not clear that the framework of Neural LP is flexible or powerful enough in general. For example, if rules contain quantifiers, how would this be extended?\", \"minor_comments\": \"1) 4.1, \\u201cO(n^2/2)\\u201d -- just put O(n^2) or simply write as n^2/2.\\n2) How are the rules from in Eq (2)? i.e., how is \\\\beta_i selected for each i? In the extreme case it would be all the permutations.\\n3) I would suggest a different name other than Neural-LP-N, as it is somewhat underselling this work. Also it makes Table 2 not that easy to read.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an interesting extension to the Neural LP framework for learning numerical rules in knowledge graphs. The proposed method can handle predicates involving the comparison of the numerical attribute values. The authors demonstrate its effectiveness on both synthetic knowledge graphs and the parts of existing knowledge graphs which consider numerical values.\", \"i_recommend_the_paper_to_be_rejected_in_its_current_form_for_the_following_3_reasons\": \"(1) Although the idea of making numerical rules differentiable is interesting, the current proposed method can only deal with one form of numerical predicate, which is numerical comparison. The limitation to such a special case makes the paper somewhat incremental. \\n\\n(2) The paper does not do a great job of convincing the reader that the problem it is trying to solve is an important matter, or the proposed method is indeed effective in some applications. Although the proposed method does a good job in synthetic experiments, outperforming existing methods by a large margin, its performance on the numerical variants of Freebase/DBPedia dataset does not show consistent significant improvement. The authors should try to find a real-world domain which can really demonstrate the effectiveness of the method.\\n\\n(3) The experiment section lacks more detailed analysis which can intuitively explain how well the proposed method performs on the benchmarks. A good place to start with is to visualize(print out) the learned numerical rules and see if they make any sense. The experiment section needs significant improvement, especially when there is space left.\\n\\n\\nThe authors can consider improving the paper based on the above drawbacks. I encourage the authors to re-submit the paper once it's improved.\"}"
]
} |
S1lxKlSKPH | Consistency Regularization for Generative Adversarial Networks | [
"Han Zhang",
"Zizhao Zhang",
"Augustus Odena",
"Honglak Lee"
] | Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort. Several regularization techniques for stabilizing training have been proposed, but they introduce non-trivial computational overheads and interact poorly with existing techniques like spectral normalization. In this work, we propose a simple, effective training stabilizer based on the notion of consistency regularization—a popular technique in the semi-supervised learning literature. In particular, we augment data passing into the GAN discriminator and penalize the sensitivity of the discriminator to these augmentations. We conduct a series of experiments to demonstrate that consistency regularization works effectively with spectral normalization and various GAN architectures, loss functions and optimizer settings. Our method achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA. Moreover, our consistency regularized GAN (CR-GAN) improves state-of-the-art FID scores for conditional generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012. | [
"Generative Adversarial Networks",
"Consistency Regularization",
"GAN"
] | Accept (Poster) | https://openreview.net/pdf?id=S1lxKlSKPH | https://openreview.net/forum?id=S1lxKlSKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"jQjxv-1OEo",
"6wbeLgWEBX",
"mVJ9kG4F7z",
"-fTaMmnhJU",
"B1geRB5ooB",
"SyxNxScisH",
"BJgYim5oiS",
"SJxo5bcioB",
"BkxG8pKiiS",
"BJg7Y9MR5H",
"rkli38WvcH",
"SJg_aihN9S",
"H1eKjX745B"
],
"note_type": [
"comment",
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1583151212394,
1580148586721,
1577153492309,
1576798748717,
1573787080137,
1573786860375,
1573786529401,
1573786002726,
1573784906370,
1572903547463,
1572439730566,
1572289472292,
1572250529338
],
"note_signatures": [
[
"~Junsoo_Ha1"
],
[
"ICLR.cc/2020/Conference/Paper2422/Authors"
],
[
"~Shichang_Tang1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2422/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2422/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2422/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2422/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2422/Authors"
],
[
"~Weihua_Hu1"
],
[
"ICLR.cc/2020/Conference/Paper2422/AnonReviewer6"
],
[
"ICLR.cc/2020/Conference/Paper2422/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2422/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Question on the implementation of SNDCGAN\", \"comment\": \"Hi, I would like to ask a minor question on the implementation of SNDCGAN used for the experiments.\\n\\nThe paper mentions that all the experiments are done with the open-source code from Compare GAN [1], which I have found that they are using larger D (64-128-128-256-256-512-512 conv blocks) [2] than original SNDCGAN architecture (64-64-128-128-256-256-512 conv blocks) [3]. Could you clarify on the actual detailed architecture ([1] or [2]) used for the experiments? \\n\\nThanks!\\n\\n\\n[1] A Large-Scale Study on Regularization and Normalization in GANs, Karol Kurach, Mario Lu\\u010di\\u0107, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly; ICML 2019\\n[2] https://github.com/google/compare_gan/blob/19922d3004b675c1a49c4d7515c06f6f75acdcc8/compare_gan/architectures/sndcgan.py#L121\\n[3] Spectral Normalization for Generative Adversarial Networks, Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida; ICLR 2018\"}",
"{\"title\": \"Thanks for your comments.\", \"comment\": \"Thanks for your valuable suggestions.\\nAdding noise in the intermediate layers to enforce consistency is an interesting direction and we will explore in our future work . \\nAs you mentioned, [1] is adding dropout in the hidden layers as perturbation. In this paper, we mainly focus on augmentation on the original data. In this way, we can use prior knowledge to choose some domain specific augmentations to enforce consistency in the data manifold. For example, in the image domain, random flipping and shifting image pixels generally works better and adding gaussian noise on pixels can lead to degraded performance. \\nThanks for pointing out the related work and we will cite [1] in the revision of this paper.\"}",
"{\"title\": \"Dropout in the hidden layers of the Discriminator\", \"comment\": \"Hi, I would like to point out that [1] uses a consistency regularization term in an effort to enforce the Lipschitz constraint in WGAN. In their experiments, they find that adding dropout noise in the hidden layers (instead of the input) of the Discriminator for consistency regularization can improve the performance of WGAN-GP. It would be interesting if you could explore the effect of such augmentations in the intermediate layers as well.\\n\\nAs far as the BigGAN architecture is concerned, [2] finds that \\\"using dropout in D would improve training by reducing its capacity to memorize, but in practice this degrades training\\\". But would BigGAN benefit if dropout is used for consistency regularization?\\n\\n\\n[1] Wei, Xiang, et al. \\\"Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect.\\\" (ICLR 2018).\\n\\n[2] Brock, Andrew, Jeff Donahue, and Karen Simonyan. \\\"Large scale gan training for high fidelity natural image synthesis.\\\" (ICLR 2019).\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a simple and effective way to stabilize training by adding consistency term to discriminator. Given the stochastic augmentation procedure $T(x)$ the loss is just a penalty on $D$. The main unsolved question why it help to make discriminator \\\"smoother\\\" in the consistency case for a standard GAN (since typically, no constraints are enforced). Nevertheless, at the moment this a working heuristics that gives new SOTA, and that is the main strength. The reviewer all agree to accept, and so do I.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Thanks for your review!\", \"comment\": \"Thank you for your comments.\\n\\nIn this paper, we propose a simple, effective, and computationally cheap method \\u2013 consistency regularization \\u2013 to improve the performance of GANs. We also have conducted extensive experiments to verify the proposed method. We achieved state of the art results for both conditional and unconditional image generation. Since we have substantially improved the writing of our paper and added more experiments during the rebuttal process, we would be grateful if the reviewer would take another look of the updated version.\\n\\nSee also the other two reviews for unbiased opinions on the merits of our submission.\"}",
"{\"title\": \"Thanks for your review (Response to Q7-Q8)\", \"comment\": \"Thank you for all the valuable comments.\", \"q7\": \"Consistency regularization vs data augmentation\\n\\nWe are not sure whether we understood your comments clearly, but here we tried to provide our response with our best attempt to interpret your comments. (Apologizes if we misunderstood your question, and in such a case, we will further appreciate if you can clarify us with further comments/question accordingly). To be on the same page, we first want to clarify that, by \\u201cdata augmentation\\u201d, we mean applying transformation to the original real images and treating them as additional data with label \\u201creal\\u201d for binary classification task in discriminator training. In contrast, consistency regularization uses \\u201caugmented data\\u201d but uses them differently by enforcing consistency of discriminator output between real image and its transformed image. In Section 4.1, our goal was to investigate whether our improvement of GAN is due to the fact that we reduce the overfitting of discriminator (in terms of better classifying images into real vs fake) or whether consistency regularization provides a special kind of regularization to the discriminator. If the reason is the former, simply applying data augmentation should have already provided similar benefits. However, according to our experiments, it is not the case. \\n\\nThis suggests an interesting interpretation, which is that the mechanism by which the consistency regularization improves GANs is not simply discriminator generalization (in terms of classifying images into real vs fake). We believe that the main reason for the impressive gain from the consistency regularization is due to learning more semantically meaningful representation for the discriminator. More specifically, data augmentation will simply treat all real images and their transformed images with the same label as real without considering semantics, whereas our consistency regularization further enforces learning implicit manifold structure in the discriminator that pulls semantically similar images (i.e., original real image and the transformed image) to be closer in the discriminator representation space. We will clarify this further in the revision. \\n\\nQ8. Combining different transformations\\n\\nWe have two possible reasons that combining augmentations does not give the best result. First, combining augmentations can be also considered as adding stronger regularization, and stronger regularization only helps the model performance within a certain range. Second, generator sometimes also generate samples with augmented artifacts (e.g. cutout). If such artifacts do not exist in the real dataset, it might lead to worse FID performance. \\n\\nTo be more clear, the goal of this experiment is to show that not all augmentations are useful for consistency regularization for GANs. We think further study of data augmentation in consistency regularization for GANs will be an interesting direction. For example, we have seen wide studies about data augmentation for image classification [2][3]. However, different from image classification augmentation, we believe the image augmentation in consistency regularization for GANs needs more careful design to make the resulting image not too far away from real data distribution. We will revise the section to make it more clear. \\n\\n[2] Cubuk, Ekin D., et al. 
\\\"Autoaugment: Learning augmentation policies from data.\\\" arXiv preprint arXiv:1805.09501 (2018).\\n[3] Lim, Sungbin, et al. \\\"Fast autoaugment.\\\" arXiv preprint arXiv:1905.00397 (2019).\"}",
"{\"title\": \"Thanks for your review! (Response to Q1-Q6 )\", \"comment\": \"Thank you for all the valuable comments.\", \"q1\": \"Related work [1]\\n\\nThank you for pointing out the related work. We cited this paper in our revision.\", \"q2\": \"Regularizing with features vs output\\n\\nIn our method, we penalize sensitivity of the last layer (which is one dimensional) of the discriminator. It is actually the output for both hinge loss and Wasserstein loss. We add the consistency regularization before sigmoid activation for NS loss to be consistent with the other two losses. Since sigmoid function will squash the range of the output, we would need large regularization coefficient to mitigate this. We also verified this experimentally. On CIFAR-10 with DCGAN structure and NS loss:\\n\\nSetting FID \\nCR before sigmoid (\\\\lambda=10) 19.71\\u00b10.28\\nCR after sigmoid(\\\\lambda=10) 22.23\\u00b10.85\\nCR after sigmoid(\\\\lambda=100) 19.75\\u00b10.24\\n\\nThe reason we did not classify the transformed images as real is that we reasoned that consistency cost is more informative than pure 0 or 1 labels. In other words, classifying with 0/1 loss will treat all real images and their transformed images with the same label as \\u201creal\\u201d without considering semantics, whereas our consistency cost further enforces learning implicit manifold structure that pulls semantically similar images (original real image and the transformed image) to be closer. We will clarify this further in the revision. \\n\\nWe also added the ablation study for the sensitivity of different layers. Details can be seen in our reply to Q2 of Reviewer1 and Appendix G.\", \"q3\": \"Measure of standard deviation in experiments\\n\\nWe agree with the reviewer and will update the paper to report the result more systematically . It's also worth mentioning that the box plots in our paper help to show the variance of the experiments.\", \"q4\": \"Effect of CR regularization on the discriminator output\\n\\nYes, we have verified it experimentally. With consistency regularization, the average output distance between real and augmented sample is 0.00449\\u00b10.00149. However, without consistency regularization, the average output distance keeps increasing during training.The final average distance is 4.50\\u00b11.54, which is roughly around 1000 times larger than the one with consistency regularization.\", \"q5\": \"Consistency regularization on the images sampled from the generator\\n\\nYes, we have tried to add the consistency on the generator outputs as well. In such case, \\nthe computational cost is doubled but the performance gains vary according to different experiment settings. It improves FID from 20.21\\u00b10.28 to 15.51\\u00b10.25 for SNDCGAN, but it also gives slightly worse results for ResNet from 14.93\\u00b10.40 to 15.07\\u00b10.34 and for CR-BigGAN* from 11.48\\u00b10.21 to 12.51\\u00b10.21.\\nWe have added the discussion and more results in Appendix H.\", \"q6\": \"Train/test accuracy of discriminator\\n\\nWe have added the training accuracy in Figure 5. For the vanilla GAN, training accuracy of the discriminator is over 0.7, but the test accuracy is around 0.2. It indicates the discriminator is overfitting in such cases.\"}",
"{\"title\": \"Thanks for your review!\", \"comment\": \"Thank you for your valuable comments.\", \"q1\": \"Effect of different types of transformations\\n\\nWe added experiments to examine different augmentations including random flip, shift, zoom, rotation, brightness and cutout on both CIFAR-10 and CelebA dataset with SNDCGAN and NS loss.\", \"the_results_are_listed_below\": \"Dataset shift and flip brightness zoom rotate cutout gaussian SBZR*\\nCIFAR-10 20.50\\u00b10.12 25.83\\u00b10.18 30.44\\u00b10.36 28.58\\u00b10.14 22.52\\u00b10.42 36.72\\u00b11.77 27.82\\u00b10.83\\nCelebA 18.84\\u00b10.25 26.81\\u00b10.61 24.51\\u00b10.42 45.06\\u00b16.62 24.86\\u00b10.33 44.47\\u00b10.45 23.80\\u00b10.36\\n*SBZR means the combination of shift&flip, brightness, zoom and rotate. \\n\\nFrom these two datasets, shifting and flipping achieves the best result and adding gaussian noise usually achieves the worst result, which is consistent with our findings in Sec. 4.2. For CIFAR-10 dataset, changing brightness is better than zooming and rotation, whereas for CelebA, zooming is better compared to rotation and changes in brightness. We think the performance for different augmentations depends on data distribution of different datasets. For example, in the CelebA dataset, zooming effect is more natural than rotation, since the face images are quite well aligned.\", \"q2\": \"Effect of the number of intermediate layers on FID\\n\\nWe added one more experiment to study the effect of different numbers of intermediate layers in CR-GAN. We add consistency regularization to the last k intermediate layers. We use two weighting variations to combine the consistency loss across different layers.\\nIn the first setting, the weight of each layer is the inverse of feature dimension in that layer. In the second setting, we give equal weight to each layer. The FID scores for both settings are shown below. \\n\\nnumber of intermediate layers weight setting1 weight setting2\\n k=0 19.41\\u00b10.57 19.48\\u00b10.41\\n k=1 19.76\\u00b10.88 20.53\\u00b11.01\\n k=2 19.66\\u00b10.36 19.65\\u00b10.57\\n k=3 19.31\\u00b10.53 21.57\\u00b10.51 \\n k=4 19.57\\u00b10.09 25.03\\u00b10.95\\n k=5 20.61\\u00b10.26 111.08\\u00b1107.43\\n k=6 20.99\\u00b10.52 411.31\\u00b113.88\\n k=7 21.45\\u00b10.46 460.83\\u00b126.21\\n k=8 23.32\\u00b10.30 386.38\\u00b172.37\\n\\nIn both settings, we observe that consistency regularization on the final layer (k=0) achieves reasonably good results. In addition, adding the consistency to the first few layers in the discriminator harms the performance. For simplicity, we only add consistency regularization in the final layer of the discriminator for the rest of our experiments. We also add more details in Appendix G.\", \"q3\": \"Experiments on unconditional setting\\n\\nWe have done experiments for both conditional and unconditional settings. We have updated the paper to make this more clear. In the updated version, Sec 3.2 is for the unconditional setting and Sec 3.3 is for the conditional setting.\", \"regarding_the_minor_comments\": \"Thank you for all these valuable suggestions. We have edited our paper accordingly. For example, \\n(1) We labeled the subplot in Figure 2.\\n(2) We added the best FID for each method in Table 1.\\n(3) We showed the results to cover all the regularization coefficient in Figure 3. \\n(4) We added the implementation details in Appendix A\\n(5) We added the illustrations of generated samples in Appendix D and E. \\n(6) We fixed the typo in the reference.\"}",
"{\"title\": \"Thanks for your comments!\", \"comment\": \"Thank you for the comments.\\n\\nWe would like to point out that consistency regularization is not a new concept. As mentioned in our paper, it has been widely used in semi-supervised learning [1-5] and other domains like discrete representation learning (as in your work, which we will add a citation for). Eq (3) in your paper uses KL divergence for the consistency loss, while we use mean squared error. However, (as we mention in Sec. 2.2 of our paper), both are the common forms of consistency regularization. \\n\\nTo the best of our knowledge, our work is the first to incorporate consistency regularization into the GAN framework and demonstrate significant improvement over prior state-of-the-art GAN results.\\n\\n[1] Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In NeurIPS, 2016\\n[2] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.\\n[3] Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. In NeurIPS, 2018.\\n[3] Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, Lucas Beyer. S4L: Self-Supervised Semi-Supervised Learning. arXiv preprint arXiv:1905.03670, 2019.\\n[4] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848, 2019\\n[5]David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. MixMatch: A holistic approach to semi-supervised learning. arXiv:1905.02249, 2019.\"}",
"{\"title\": \"Missing reference on the use of data augmentation for consistency training\", \"comment\": \"I would like to kindly point out that our work [1] also uses consistency training with data augmentation in the context of unsupervised learning. In fact, your Eq. (3) is quite similar to our consistency loss in Eq. (3) of our paper. I think it would be great for you to discuss relation to our work in your paper. Thanks!\\n\\nWeihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama. \\nLearning Discrete Representations via Information Maximizing Self Augmented Training. \\nInternational Conference on Machine Learning (ICML2017), 2017.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #6\", \"review\": \"Summary:\\nThe paper presents a new regularization technique termed consistency regularization for training GANs. The idea is the following: the authors propose to penalize the sensitivity of the last layer of the discriminator to augmented images. This idea is simple yet efficient: it is easy to implement, a regularization term is gradient-free, and its computation is up to 1.8 times faster than standard gradient-based regularization techniques. The authors tested different augmentation techniques and concluded that simple ones behave better (e.g., shifting and flipping). The experimental results show an impressive gain in FID measure, renewing the current state-of-the-art score for class conditional image generation on CIFAR-10 dataset.\", \"pros\": \"The proposed technique is very simple and intuitive; it easy to implement, and it is computationally cheap. The experiments were held for three runs with different random seeds, supporting its consistency. The paper is overall clearly written and easy to understand.\", \"cons\": \"The reported experimental results are held only for BigGAN architecture while not considering different networks to ensure the stability of the proposed regularization. Also, the paper would benefit from a clear experiment description on CelebA dataset (e.g., adding the results to Table 1).\", \"questions\": \"-Have you tried other transforms, which potentially keep images on the manifold, including zoom, resize, rotation, brightness adjustment, etc.?\\n-How the number of layers in $L_{cs}$ (formulas 2-3) affects FID? \\n-Have you considered an unconditional setting?\", \"minor_comments\": \"-It would be more convenient if the authors explicitly numerate subplots; e.g., in Figure 2, it is confusing to refer the subplots labeled by (a)-(f) as written in caption. \\n-Additionally, it would be nice to include say best FID scores over different loss functions (from Figure 2) to Table 1.\\n- In section 4.3, you wrote that you tried different $\\\\lambda$ values: {0,1,10, 100}, but Figure 4 does not cover all of them. \\n- It would be nice to add implementation details (e.g., optimizer, learning rate parameters, steps per discriminator, etc.) for better reproducibility.\\n-The paper would benefit from illustrations of generated samples.\\n-Please check the spelling of the penultimate article name in references (Zhai et al., 2019).\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper proposes to use Consistency Regularization for training GANs, a technique known to work well in unsupervised learning. The technique consists in applying a transformation to real images and enforcing that the features of the discriminator between the transformed inputs and the original inputs are similar. The author show that using this technique enables them to improve the performance of a standard GAN significantly on CIFAR10. They also carry an ablation study studying the influence of the different part of the proposed technique.\\n\\nOverall I'm in favor of accepting this paper. The paper is well written, with convincing experiments and an interesting ablation study. However I have several minor issues that I think could greatly improve the paper if addressed.\", \"minor_comments\": [\"I think an idea which is somewhat related but hasn't been mentioned in the paper, is the idea of adding noise to the input when training GANs [1]. I think this is worth mentioning in the related work.\", \"Related to the previous point, why penalizing features and not directly output ? What about also trying to classify the transformed images as real ? Also you say that penalizing the last layer, I think including the influence of m (eq 2) in the ablation study would be interesting.\", \"The authors provide some measure of standard deviation on some experiments but not on all of them. It would be nice to systematically report the standard deviation for every experiments.\", \"In figure 1 the author make the hypothesis that the discriminator will output very different score to images semantically close together. Did the author verify this hypothesis experimentally ?\", \"Also why penalizing only the samples from the real distribution and not from the generator ? have you tried both ?\", \"When the test accuracy of the discriminator is low, it could also be that the discriminator is under-fitting, it would be nice to also report the train accuracy for the discriminator.\", \"I think the conclusion about the effect of consistency regularization vs data augmentation is a bit vague since consistency regularization has no sense without data-augmentation.\", \"It's quite interesting but also disappointing that combining transformations doesn't give that much of an improvement. Do the author have any intuition why this is the case ? and why learning them one after the other would work ?\"], \"references\": \"[1] Arjovsky and Bottou. \\\"Towards Principled Methods for Training Generative Adversarial Networks.\\\" (ICLR 2017)\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The topic of this paper is out of the reviewer's domain (Bayesian optimization, RL, and neuroscience). The reviewer has been reviewing ICLR for several years. Such mismatches had not happened in the past.\\n\\nThe reviewer doesn't think this paper reached the bar of a good ICLR paper but hesitates to reject.\\n\\n\\nThis work proposed a training stabilizer for GANs based on the notion of Consistency Regularization. Experimentally, the authors had augmented data passed into the GAN discriminator and penalize the sensitivity of the ultimate layer of the discriminator to these augmentations.\\n\\nThe authors claimed \\\"We conduct a series of ablation studies to demonstrate that the\\nconsistency regularization is compatible with various GAN architectures and loss\\nfunctions. Moreover, the proposed simple regularization can consistently improve\\nthese different GANs variants significantly. \\\"\"}"
]
} |
SkxxtgHKPS | On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning | [
"Jian Li",
"Xuanyuan Luo",
"Mingda Qiao"
] | Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data. Proving tight generalization error bounds is a central question in statistical learning theory. In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD). Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation to the intriguing phenomena observed in Zhang et al. (2017a). We also study the setting where the total loss is the sum of a bounded loss and an additional l2 regularization term. We obtain new generalization bounds for the continuous Langevin dynamic in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time. Our new bounds are more desirable when the noise level of the process is not very small, and do not become vacuous even when T tends to infinity. | [
"learning theory",
"generalization",
"nonconvex learning",
"stochastic gradient descent",
"Langevin dynamics"
] | Accept (Poster) | https://openreview.net/pdf?id=SkxxtgHKPS | https://openreview.net/forum?id=SkxxtgHKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"5V6YVsoFs",
"25AEPAUMhU",
"Hkxq3xWniB",
"rklasCehjr",
"SklyGyPqir",
"Ske3vo89or",
"H1gewIr9iS",
"H1gq7BHqoS",
"SJeMhEH5jr",
"BkeVXPeqsr",
"Bkl0toHSoB",
"SkeOi8jAYB",
"ryeCjnJAYS",
"HyxQ4HuatB",
"SkleD9Q3tH",
"Bye49BiYYr"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1581417564458,
1576798748685,
1573814450286,
1573813924968,
1573707526547,
1573706595852,
1573701207988,
1573700897973,
1573700777616,
1573680923747,
1573374853968,
1571890847756,
1571843238511,
1571812650650,
1571727959988,
1571562892231
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2421/Area_Chair1"
],
[
"ICLR.cc/2020/Conference/Paper2421/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2421/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2421/AnonReviewer1"
],
[
"~kento_nozawa1"
],
[
"ICLR.cc/2020/Conference/Paper2421/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2421/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2421/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Update\", \"comment\": \"Thanks for your comments! We have addressed your two concerns in our new version (see Appendix C.2).\"}",
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The authors provide bounds on the expected generalization error for noisy gradient methods (such as SGLD). They do so using the information theoretic framework initiated by Russo and Zou, where the expected generalization error is controlled by the mutual information between the weights and the training data. The work builds on the approach pioneered by Pensia, Jog, and Loh, who proposed to bound the mutual information for noisy gradient methods in a step wise fashion.\\n\\nThe main innovation of this work is that they do not implicitly condition on the minibatch sequence when bounding the mutual information. Instead, this uncertainty manifests as a mixture of gaussians. Essentially they avoid the looseness implied by an application of Jensen's inequality that they have shown was unnecessary.\\n\\nI think this is an interesting contribution and worth publishing. It contributes to a rapidly progressing literature on generalization bounds for SGLD that are becoming increasingly tight.\\n\\nI have one strong request that I will make of the authors, and I'll be quite disappointed if it is not executed faithfully.\\n\\n1. The stepsize constraint and its violation in the experimental work is currently buried in the appendix. This fact must be brought into the main paper and made transparent to readers, otherwise it will pervert empirical comparisons and mask progress.\\n\\n2. In fact, I would like the authors to re-run their experiments in a way that guarantees that the bounds are applicable. One approach is outline by the authors: the Lipschitz constant can be replaced by a max_i bound on the running squared gradient norms, and then gradient clipping can be used to guarantee that the step-size constraint is met. The authors might compare step sizes, allowing them to use less severe gradient clipping. The point of this exercise is to verify that the learning dynamics don't change when the bound conditions are met. If they change, it may upset the empirical phenomena they are trying to study. If this change does upset the empirical findings, then the authors should present both, and clearly explain that the bound is not strictly speaking known to be valid in one of the cases. It will be a good open problem.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your careful review and insightful comments!\\n\\nRegarding high-probability bounds, we note that our proof of Theorem 11 can be adapted to recover the previous bound of $O(LC \\\\sqrt{T}/n)$ in (Mou et al., 2018, Theorem 1), with the expected squared gradient norm term relaxed to $L^2$, using the uniform stability framework. Then, applying the recent results of [Feldman and Vondrak, 2019, Theorem 1] gives a generalization error bound of $\\\\tilde O(LC \\\\sqrt{T}/n + 1/\\\\sqrt{n})$ that holds with high probability. (Here $\\\\tilde O$ hides some polylog factors.) Since $T$ is typically at least linear in n, this means that the additional $1/\\\\sqrt{n}$ term will not be dominating.\\n\\nOn the other hand, for our new bound, which is derived from Bayes-Stability instead of uniform stability, it remains unknown whether it can be translated into a high-probability bound following a similar approach. We believe that it is an interesting open problem to prove a similar high-probability bound (with a small overhead) for the Bayes-Stability framework.\\n\\nWe will include the above discussion into the next version.\\n\\nIndeed, [Rivasplata et al.] also combines ideas from PAC-Bayes and stability. While their stability is actually the hypothesis stability measured by the distance on the hypothesis space. And their work stduies the case when the returned hypothesis (model parameter) is randomized by a Gaussian perturbation (i.e., the posterior $Q = \\\\mathcal{N}(A(S),\\\\sigma I)$), which is different from our work. Thanks for pointing this out. In the next version, we will discuss their work and modify our descriptions in the introduction and abstract.\\n\\n-------------------------------------\\nReference\\n\\nPAC-Bayes bounds for stable algorithms with instance-dependent priors. Rivasplata et al.\\n\\nHigh probability generalization bounds for uniformly stable algorithms with nearly optimal rate. Vitaly Feldman and Jan Vondrak.\\n\\nSharper bounds for uniformly stable algorithms. Bousquet et al.\\n\\nGeneralization bounds for uniformly stable algorithms. Feldman et al.\"}",
"{\"title\": \"Response to Area Chair #1\", \"comment\": \"Thanks for the comments.\\n\\nWe will answer your questions about the 2nd condition (small step size) stated in Theorem 11 as follows.\\n\\n(1) First, we remark the bound (Theorem 9) for GLD (a special case of SGLD) has NO constraint on the step size. \\n\\n(2) We need the step size constraint in the analysis of SGLD to bound the KL-divergence between two Gaussian mixtures for technical reasons. \\nSince there is no closed-form formula of KL-divergence between Gaussian mixtures, we need Lemma 21 to provide an approximation for it. And the step size assumption ($\\\\gamma_t \\\\leq \\\\sigma_t/(20L)$) in Theorem 11 is made for satisfying the 2nd condition of Lemma 21. We think it is possible to remove the constraint under certain other reasonable assumptions. \\n\\n(3) By the above discussion, we can in fact relax this assumption to $\\\\gamma_t \\\\leq \\\\sigma_t/(20 \\\\max_{i}\\\\left\\\\|\\\\nabla F(W_{t-1},z_i)\\\\right\\\\|_2)$ and the 2nd condition of Lemma 21 still holds. Note that the training gradient norm $\\\\max_{i}\\\\left\\\\|\\\\nabla F(W_{t-1},z_i)\\\\right\\\\|_2$ is usually much smaller than the global Lipschitz constant $L$. Moreover, in the later stage of training, the gradient norm becomes very small (see Figure 1(d) in our paper), which enables a larger step size. Thus, if one relaxes the constraint to $\\\\gamma_t \\\\leq \\\\sigma_t/(20 \\\\max_{i}\\\\left\\\\|\\\\nabla F(W_{t-1},z_i)\\\\right\\\\|_2)$, our bound still holds without change. \\n\\n(4) There are several other ways to further relax the step size constraint, by slightly adjusting the algorithm or analysis. For example, if the gradient clipping trick is used (i.e., multiplying $\\\\frac{\\\\min(C_L,\\\\left\\\\|\\\\nabla F(W_{t-1},z_i)\\\\right\\\\|_2)}{\\\\left\\\\|\\\\nabla F(W_{t-1},z_i)\\\\right\\\\|_2}$ to each $\\\\nabla F(W_{t-1},z_i)$, where $C_L$ is not very large), the constraint can be further relaxed to $\\\\gamma_t \\\\leq \\\\sigma_t/(20 C_{L})$ without affecting the bound. Replacing the constant $20$ with $2$ in this constraint will only increase the constant of our bound from $8.12$ to $84.4$. \\n\\nWe don't actually set learning rate according to the above constraints precisely in our experiments. Hence, the experiment setting showed in Figure 2 does not match perfectly with the conditions in Theorem 11. In our submission, we have explicitly admitted this point in Appendix C.2. Nevertheless, we suspect that the conclusion of our lemmas should still hold (or hold for most steps) even without step size constraints, under certain other reasonable assumptions. It is an intriguing future direction.\\n\\nFinally, we want to mention that many works that study the trajectories of SGD or SGLD require some assumption on small step sizes (e.g., [1][2][3][4] and many works in the optimization literature), since most such analysis are inherently local. For example, in [1], they make the same assumption on the step size ($\\\\gamma_t = O(\\\\sigma_t / L)$) in their stability bound for SGLD (see Theorem 1 in [1]). Nevertheless, it would be an interesting and important future work to try to relax or remove these constraints that do not perfectly match the practice. 
We will discuss the step size constraint in more details in the next version.\\n\\n[1] Mou et al., Generalization bounds of SGLD for non-convex learning: Two theoretical viewpoints\\n[2] Hardt et al., Train faster, generalize better: stability of stochastic gradient descent\\n[3] Kuzborskij et al., Data-Dependent Stability of Stochastic Gradient Descent\\n[4] Arora et al., Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks\"}",
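A minimal sketch of the clipped update described in point (4), assuming a per-example gradient oracle `grad(w, z)`; the names, the full-batch aggregation, and the exact noise parameterization (we simply use scale `sigma_t`, which may differ from eq. (1) of the paper) are illustrative assumptions on our part.

```python
# Sketch of one noisy gradient step with per-example gradient clipping at C_L,
# with the step size chosen to satisfy gamma_t <= sigma_t / (20 * C_L).
import numpy as np

def clipped_noisy_step(w, S, grad, sigma_t, C_L, rng):
    gamma_t = sigma_t / (20.0 * C_L)  # step-size constraint from point (4)
    g = np.zeros_like(w)
    for z in S:
        g_i = grad(w, z)
        norm = np.linalg.norm(g_i)
        if norm > C_L:                # scale by min(C_L, ||g_i||) / ||g_i||
            g_i = g_i * (C_L / norm)
        g += g_i
    g /= len(S)
    return w - gamma_t * g + sigma_t * rng.standard_normal(w.shape)
```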
"{\"title\": \"Question about bound\", \"comment\": \"I have a question regarding the bound obtained for SGLD and stated in Theorem 11. The bound relies on two assumptions. The first assumption does not seem to be very restrictive; however, the second assumption regarding the step size in terms of the Lipchitz constant is very restrictive.\\n\\nIn this respect, I have two questions:\\n1)\\tIf one wants to relax the constraint on the step size, what will be the generalization bound for SGLD? It seems that one requires this strong assumption for the current proof.\\n\\n2)\\tIn Figure 2 on Page 30, you have plots that show the value of your bound along optimization trajectories for different models and data sets. In deep learning models that you have considered (like AlexNet), how do you estimate the Lipchitz constant so that you can make sure that the step size satisfies the conditions of Theorem 11? If I understand correctly, in fact, the step size used to produce these optimization trajectories DOES NOT satisfying the step size constraint. As such, the bound is not even applicable to these trajectories and the bound cannot be corrected simply by multiplying it by some constant. It seems also possible that, were you to have simulated trajectories under the very restrictive step-size constraint, the distinction between random and true labels may have looked different. \\n\\nPerhaps I'm missing something critical, but I'd like to understand this aspect.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for the detailed comments. It will be good to add a brief summary of the second response about the upper bound, when this paper is published (in this ICLR or a future conference).\"}",
"{\"title\": \"Re: Response\", \"comment\": \"Thanks for your reply! It is indeed the third assumption. Sorry for this typo.\"}",
"{\"title\": \"Response to Review #2 (Part 1)\", \"comment\": \"Thanks for your careful review and insightful comments.\\n\\n1.\\\"What is the function $f$ of $f(w,\\\\mathbf{0})=0$\\\": Sorry for the typo. $f$ should be capital $F$. The zero data point $\\\\mathbf{0}$ is a synthetic data point (just a symbol). It is defined as a zero constant function (i.e., $F(w,\\\\mathbf{0}):= 0$ for all $w\\\\in\\\\mathbb{R}^d$). Note that we don't need to care about what the zero data point $\\\\mathbf{0}$ look like, because the only thing we use is its objective function's gradient $\\\\nabla F(w,\\\\mathbf{0})$ (which is also zero) in our analysis. The zero data point $\\\\mathbf{0}$ is constructed for the convenience of defining the prior distribution $P$.\"}",
"{\"title\": \"Response to Review #2 (Part 2)\", \"comment\": \"2.\\\"What made the generalization bound so loose?\\\": \\nThanks for pointing this out. We list some possible reasons that may explain why our bound is larger than the real generalization error.\\na) Note that our bound (Theorem 9 and 11) hold for any trajectory-based output (i.e., the output could be any function of the training path $(W_{0},W_{1},...,W_{T})$) such as exponential moving average, average of the suffix of certain length or any other functions (see Remark 12). This is a stronger statement than an upper bound of the generalization error. In our experiment, we use $W_T$ (the last step parameter) as the output, while our Theorem 9 and 11 are upper bounds for the worst case trajectory-based output which may be larger.\\nb) KL-divergence and non-optimal constants:\\nIn our Theorem 7, we use the KL-divergence to bound the total variational distance (Pinsker's inequality). This step may not be very tight.\\nSome of the constants such as $2\\\\sqrt{2}$ in Theorem 9 may not be very tight. \\nc) not large enough $\\\\sigma_t$ in our experiments: \\nNote the variance of Gaussian noise of SGLD (eq(1)) is very crucial to the actual bound. Our bound is much smaller if we use a not so small $\\\\sigma_t$. \\nHowever, we found in our experiment that if we choose a somewhat large variance, fitting the random lablelled training data can be extremely slow and we were not able to draw such a curve (the normal data can still be fit perfectly). \\nThus, we use a smaller noise level $\\\\sigma_t \\\\approx 0.3\\\\eta_t$ instead.\\nd) not large enough $n$:\\nIn fact, the data size we used is also very small ($n=10000$, it is not even a full mnist dataset) for the reason above (the convergence of training curve of the random labelled data is very slow when we use a somewhat larger variance, hence we choose a smaller subset of data).\\nHere we provide an extra experiment on the full Mnist dataset ($n = 60000$) without label corruption:\\n\\nstep | tra acc | gen_err | our bound\\n58 | 0.3000 | 0.0012 | 0.00845\\n116 | 0.6098 | 0.0014 | 0.01332\\n425 | 0.9006 | 0.0072 | 0.05081\\n\\nIn this case, our bound is not vacuous. \\nWe also want to mention that it might be very difficult to obtain nonvacuous theorectical bound for randomly labelled data (no matter how large the $n$ is). For instance, consider a 10-classification task contains $100\\\\%$ random labels. Since for any integer $n > 0$, one can find a deep neural network that can overfit the dataset. Thus, the training error is zero, and the generalization error is exactly the testing error, which must be larger than $90\\\\%$. Therefore, any non-vacuous generalization bound should be in the range $[0.9,1]$. If the proven constant is, say, only 10 times larger, then the bound is already much larger than 1 and it can't be reduced by increasing $n$. \\n\\nFinally, we remark that our primary goal in this paper is not to make our bound non-vacuous numerically. Nevertheless, we believe by further optimizing some constants and chosing experimental setting more carefully, we can achieve a much tighter numerical bound, and this is left as an interesting furture work.\\n\\n3.\\\"For deep neural networks,...\\\": Thanks for the insightful question. Indeed, our bound is for general nonconvex learning and connects the generalization error and the sum of empirical gradient norms along the training path. 
\\nUsing this connection, we plan to investigate some concrete deep learning models, such as MLP or ResNet. In this case, we may be able to bound the gradient norm by some architecture-dependent factors. \\nIn fact, some recent papers study the landscape of the loss function and the training trajectory (e.g., [1][2][3][4]). It might be possible to use the insight of these results, combined with our gradient-norm-based generalization bound, to derive generalization bounds that depend on factors of the specific neural network such as the width, the depth or the least eigenvalue of certain Gram matrix ([4]). This is an interesting future direction.\\n\\n[1] Zhu et al., A convergence theory for deep learning via over-parameterization\\n[2] Wu et al., Towards understanding generalization of deep learning: Perspective of loss landscapes\\n[3] Tian., An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis\\n[4] Arora et al., Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks\"}",
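For reference, the Pinsker step mentioned in point (b) is the standard inequality bounding the total variation distance by the KL-divergence:

```latex
% Pinsker's inequality, the step referred to in point (b) above.
\[
  \mathrm{TV}(P, Q) \;\le\; \sqrt{\tfrac{1}{2}\,\mathrm{KL}(P \,\|\, Q)}
\]
```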
"{\"title\": \"Response\", \"comment\": \"Thanks for your reply. Perhaps you mean the third assumption of Assumption 35. I now see this assumption also holds with $m=\\\\lambda/2,b=L^2/(2\\\\lambda)$ and the regularization parameter can be relaxed. I am also happy to see Theorem 15 also holds for stochastic gradient.\"}",
"{\"title\": \"Response to Review #3\", \"comment\": \"Thanks for your careful review and insightful comments.\\nIndeed, the analysis can be extended to stochastic gradient (at the cost of an extra additive term) and the constant $1/2$ in the condition $\\\\lambda>1/2$ can be relaxed to any small constant $c>0$ by slightly changing the proof (for ease of calculation and convenience, we chose the constant $1/2$ in the original submission). Now, we explain the details.\\n\\n1.Extending Theorem 15 to stochastic gradient: The key step is to apply Lemma 36 in Appendix B.3. By Lemma 36, we have $KL(\\\\mu_{S,K}, \\\\nu_{S,\\\\eta K}) <= (C_0 \\\\beta \\\\delta + C_1 \\\\eta) K \\\\eta$, where $\\\\delta$ is the constant in the 4th condition of Assumption 35. In the full gradient case, the 4th condition of Assumption 35 holds with $\\\\delta = 0$. In the stochastic gradient case, it holds with $\\\\delta=1/(2 * \\\\textrm{batch size})$. Hence, we need an extra additive $2C\\\\sqrt{C_0K\\\\eta / \\\\textrm{batch size}}$ term in the generalization bound (eq(8)), when applying to stochastic gradient settings. Here a batch of data are i.i.d. drawn from full dataset $S$. Note that when batch size becomes larger, the extra term vanishes, and it matches the full gradient case bound (eq(8)).\\n\\n2.Relax the condition $\\\\lambda > 1/2$: Thanks for you to point it out! Here we only need to slightly modify our original proof of Theorem 15. In particular, we show that $\\\\lambda$ could be an arbitrary small positive number. Note that in the original proof of Theorem 15, $\\\\lambda > 1/2$ is only used for satisfying the 2nd condition of Assumption 35 (hold with $m = (2\\\\lambda-1)/(2)$, $b = L^2/2$, and m is required to be greater than 0 in this assumption, thus we set $\\\\lambda > 1/2$ in our original proof). Note that the 2nd condition also holds with $m = \\\\lambda /2$, $b = L^2/(2\\\\lambda)$, where $\\\\lambda$ could be any positive real number. Thus we only need to replace the statement \\\"Assumption 35 holds with...\\\" in page 26 with \\\"Assumption 35 holds with $A = C, B = L, m = \\\\lambda /2, b = L^2/(2\\\\lambda)$\\\" and change the upper bound of learning rate $\\\\eta$ in Theorem 15 from $(2\\\\lambda-1)/(8M^2)$ to $\\\\lambda/(8M^2)$.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper aims at developing a better understanding of generalization error for increasingly prevalent non-convex learning problems. For many such problems, the existing generalization bounds in the statistical learning theory literature are not very informative. To address these issues, the paper explores algorithm-specific generalization bounds, especially focusing on various types of noisy gradient methods.\\n\\nThe paper employs a framework that combines uniform stability and PAC-Bayesian theory to obtain generalization bound for the noisy gradient methods. For gradient Langevin dynamic (GLD) and stochastic gradient Langevin dynamics (SGLD), using this Bayes-Stability framework, the paper obtains a generalization bound on the expected generalization error that scales with the expected empirical squared gradient norm. As argued in the paper, this provides an improvement over the existing bounds in the literature. Furthermore, this bound enables the treatment of the setting with noisy labels. For this setting the expected empirical squared gradient norm along the optimization path is higher, leading to worse generalization bound. \\n\\nThe paper then extends their results to the setting where an $\\\\ell_2$ regularization is added to the non-convex objective. By using a new Log-Sobolev inequality for the parameter distribution at time t, the paper obtains new generalization bounds for continuous Langevin dynamic (CLD). These bounds subsequently provide bounds for GLD as well.\\n\\nThe paper demonstrates the utility of their generalization bound via empirical evaluation on MNIST and CIFAR dataset. The obtained generalization bounds are informative as they appear to capture the trend in the generalization error. \\n\\nOverall, the paper is very well written with a clear comparison with the existing generalization bounds. The results in the paper are interesting and novel. That said, the discussion in the introduction and abstract appears a bit misleading as it gives the impression that this is the first paper that combines the ideas from stability and PAC-Bayesian theory to obtain generalization bounds. This is not the case, e.g. see [1].\\n\\nAs noted by the authors, some of the bounds obtained in this paper share similarities with one of the bounds in Mou et al. as all these bounds contain the expected empirical squared gradient norm. The bound in Mou et al. holds with high probability and decays as $O(1/\\\\sqrt{n})$, whereas the bounds in this paper are on expected generalization error and decay as $O(1/n)$. Could authors comment on extending their results to hold with high probability and how it would affect their bounds?\\n\\n[1] Rivasplata et al., PAC-Bayes bounds for stable algorithms with instance-dependent priors.\\n\\n\\n----------------------- Post author response -------------\\n\\nThank you for addressing my comments. I have decided to keep my original score unchanged.\"}",
"{\"comment\": \"Dear Authors, I really sorry for confusing you by putting the wrong comments. I removed the first comment.\", \"title\": \"I really sorry for my comments\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors provide new generalization analysis of (stochastic) gradient langevin dynamics in a nonconvex learning setting. The results are largely based on and improves the analysis in Mou et al. (2018). In more details, Theorem 11 improves the corresponding generalization bound in Mou et al. (2018) by replacing the uniform Lipschitz constant by the expected empirical gradient norm, which can be smaller than the Lipschitz constant. The authors also argue this can distinguish normal data from randomly labelled data with experiments. The authors further studied the setting with an l_2 regularizer and derived improved result applicable to the case with infinite number of iterations, in which case the results in Mou et al. (2018) can diverge. These results are derived by a new bayes-stability method.\\n\\nA drawback is that the results are only applicable to gradient methods in Section 4, i.e., using all examples in the gradient calculation. It would be interesting to see how the generalization bound would be for the stochastic counterparts.\\n\\nThe authors assume \\\\lambda>1/2 in deriving (8). In practice, the regularization parameter should be set to be small enough to achieve a small test error. Therefore, eq (8) may not be quite interesting.\\n\\n----------------------\", \"after_rebuttal\": \"I have read the authors' response. I would like to keep my original score.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the generalization error bounds of stochastic gradient Langevin dynamics. The convexity of the loss function is not assumed. The author proposed \\\"Bayes-stability\\\" to derive generalization bound while taking the randomness of the algorithm into account. The generalization bound proposed in this paper applies to some existing problem setups. Also, the authors proposed the generalization bound of the continuous Langevin dynamics.\\n\\nThis is an interesting paper. Overall, the readability is high. The Bayes-stability is a significant contribution of this paper, and the theoretical analysis of the SGLD with non-Gaussian noise distribution will have a practical impact.\", \"some_comments_below\": [\"What is the function f of f(w,0)=0 above the equation (5)? Besides, the role of zero data point, i.e., f(w,0)=0, was not very clear.\", \"In the numerical results (b) and (c) of Figure 1, the scale in the y-axis was very different. What made the generalization bound so loose?\", \"In this paper, the developed theory was a general-purpose methodology. For deep neural networks, however, is there a meaningful insight obtained from the method developed in this paper?\"]}",
"{\"comment\": \"Thanks for your comments! However, we believe that our paper is not the one you want to comment on for the following reasons:\\n1. Our paper does not have Theorem 3.\\n2. The words \\\"sufficiency\\\" and \\\"minimality\\\" are not included in our submission.\", \"title\": \"Re:Maybe you commented on the wrong article\"}"
]
} |
SylkYeHtwr | SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models | [
"Yucen Luo",
"Alex Beatson",
"Mohammad Norouzi",
"Jun Zhu",
"David Duvenaud",
"Ryan P. Adams",
"Ricky T. Q. Chen"
] | Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows the use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions. | [
"latent variable models",
"estimator",
"unbiased estimation",
"log marginal probability",
"sumo",
"variational lower bounds",
"estimates",
"quantities",
"interest"
] | Accept (Spotlight) | https://openreview.net/pdf?id=SylkYeHtwr | https://openreview.net/forum?id=SylkYeHtwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"lpJWcFOdJr",
"rJea5tnioS",
"H1ldrdnssS",
"HJeJxFiijB",
"Bke1TwjiiB",
"SkxKwPjosS",
"HklgDk-gcB",
"rJgxdFWhFS",
"SJlZ3eOBYB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748655,
1573796245083,
1573795904074,
1573791975155,
1573791671213,
1573791585490,
1571979096109,
1571719528027,
1571287209200
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2419/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2419/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2419/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2419/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2419/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2419/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2419/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2419/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper proposes a new way to train latent variable models. The standard way of training using the ELBO produces biased estimates for many quantities of interest. The authors introduce an unbiased estimate for the log marginal probability and its derivative to address this. The new estimator is based on the importance weighted autoencoder, correcting the remaining bias using russian roulette sampling. The model is empirically shown to give better test set likelihood, and can be used in tasks where unbiased estimates are needed.\\n\\nAll reviewers are positive about the paper. Support for the main claims is provided through empirical and theoretical results. The reviewers had some minor comments, especially about the theory, which the authors have addressed with additional clarification, which was appreciated by the reviewers. \\n\\nThe paper was deemed to be well organized. There were some unclarities about variance issues and bias from gradient clipping, which have been addressed by the authors in additional explanation as well as an additional plot.\", \"the_approach_is_novel_and_addresses_a_very_relevant_problem_for_the_iclr_community\": \"optimizing latent variable models, especially in situations where unbiased estimates are required. The method results in marginally better optimization compared to IWAE with much smaller average number of samples. The method was deemed by the reviewers to open up new possibilities such as entropy minimization.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"PDF now updated\", \"comment\": \"Apologies, we did some last-minute editing after submitting the response. It is Appendix A.7 now. Thanks again for your valuable feedback.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your replies; they all sound good. It looks like you haven't updated the pdf yet, though.\\n\\nThe importance of gradient clipping in the different situations should be mentioned in the paper, as it seems to be practically important. I don't know exactly what's in your new Appendix A.6 but it sounds like that's a good step.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your careful review and valuable comments.\\n\\n1) Yes, we apply this to the expected terms with expectation taken over q(z;x). We\\u2019ve added the derivation in Appendix A.1.\\n\\n2) The expected computation cost is k=m + E[K] which was set to 5, 15, 50 for our experiments. For k=15, 50, we used the sampling distribution in Eq.(10), which has E[K] = 5, and we set m = k - 5. For the k=5 setting, we set K to be a geometric distribution with an expectation of 2 and set m=3.\\n\\n3) Yes, you\\u2019re right. For now, we\\u2019ve placed a sentence to clarify this. A clearer notation is also provided in Appendix A.1.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for going through the paper carefully and providing a positive feedback.\\n\\nYes, there are some existing works on discussing the appearance of \\u201cpseudo-bias\\u201d due to never actually sampling a very large $K$. There is some discussion relating to the harmonic mean estimator, which we have now cited in the paper. While this is definitely a potential problem, we slightly argue that even if such bias is empirically present, it should be small enough to be ignored because SUMO is capable of minimizing log marginal probability in practice.\\n\\nGradient clipping was not used in the other experiments except the density modeling ones, where we used it as a tool to obtain a better bias-variance tradeoff. In comparison, JVI has lower bias than IWAE but its high variance results in worse models, as reported by both the original paper and reflected in our experiments. For experiments involving entropy maximization, we found that any clipping results in unstable models. To better understand the bias-variance tradeoff, we have added a plot of the percentage of gradients clipped against the test NLL in Appendix A.6 (edit: A.7). We did not use this for grid-search as it was constructed after the submission. For our existing reported results, the percentage of clipped gradients is no more than 10%. \\n\\nThank you for your small notes. We will update the paper accordingly. Below are some select responses:\\n\\n-$\\\\Delta^g_k$ are *nearly* independent (or at least nearly uncorrelated)\\nYes, the discussion on independence is only because existing discussions on guaranteed finite variance and compute usually involve this assumption (e.g. [1,2]). We show in Appendices A.2 and A.3 that $\\\\mathbb{E}[\\\\Delta_i \\\\Delta_j]$ for $i\\\\neq j$ converges to zero much faster than $\\\\mathbb{E}[\\\\Delta_k^2]$.\\n\\n-$\\\\nabla \\\\mathbb{E}[\\\\text{SUMO}(x)]$ and $\\\\mathbb{E}[\\\\nabla\\\\text{SUMO}(x)]$:\\nYes, thank you for pointing this out. One can directly use the Dominated Convergence Theorem to show that $\\\\nabla \\\\mathbb{E}[\\\\text{SUMO}(x)] = \\\\mathbb{E}[\\\\nabla\\\\text{SUMO}(x)]$ so long as the function $\\\\text{SUMO}(x)$ is differentiable with finite derivative for all $x$ at the current model parameters. The proof will be less direct when non-everywhere-differentiable activations such as ReLUs are used. However, theorem 5 from the paper you reference can be directly used to prove this works for ReLUs in our setting, given the mild assumptions used for that theorem (which are necessary for other estimators such as IWAE to work). We have added a discussion on this in the paper.\\n\\n[1] \\u201cUnbiased Estimation with Square Root Convergence for SDE Models\\u201d. Rhee and Glynn.\\n[2] \\u201cEfficient optimization of loops and limits with randomized telescoping sums\\u201d Beatson and Adams.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your thoughtful review and comments. We\\u2019ve added comparisons to previous work on bias reduced estimators (e.g. jackknife variational inference (JVI) (Nowozin, 2018)) in the related work and experiments. We have cited multiple bias compensation works with RRE, across multiple areas of application, as well as some key works on marginal likelihood estimation. As the main problem we\\u2019re tackling is the optimization of objectives involving the _log_ marginal likelihood, we felt our existing related work section correctly positions our contributions within the literature; however, we are open to suggestions.\\n\\nThe regularization used during optimization (ie. gradient clipping) was only used for the density estimation tasks. We found this gave SUMO a good bias-variance tradeoff in terms of test performance, compared to existing biased estimators. On the other hand, JVI has theoretically lower bias than IWAE but a higher variance, and results in worse performance than IWAE. Experiments with SUMO on posterior inference and combinatorial optimization all used completely unbiased (gradient) estimators, because the bias in these situations can result in a \\u201crun-away\\u201d model which simply optimizes for the bias instead of the true objective. This effect happens for IWAE but not for SUMO (e.g. Figure 2).\\n\\nIt was shown in Beatson & Adams, 2019 that bounded variance and compute is guaranteed if $|\\\\Delta_k|^2$ vanishes faster than $O(k^{-2})$ and the sampling distribution is properly chosen. Empirically, we found that the variance is better than this theoretical bound required for finite variance. This empirical analysis is discussed in Appendix A.6. For now, we leave the theoretical proof of finite variance as an open problem and will look into it more in the future, as a thorough analysis can lead to improving the designs of SUMO or similar estimators.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors consider the unbiased estimation of log marginal likelihood (evidence) after integration of latent variable. On top of the importance-weighted autoencoder (IWAE), which is only guaranteed to be asymptotically unbiased, the authors propose to use Russian Roulette estimator (RRE) to compensate the bias caused by the finite summation.\\n\\nThe proposed method is interesting and can be applied in many other estimators with similar properties as Eq. (6). Bias compensation using RRE is interesting, but it seems there must be many literatures that took advantage of using RRE to improve estimators. The authors have to be thorough in presenting previous research and explaining the authors\\u2019 contribution that is distinguished from those.\\n\\nThe authors showed synthetic and real application of the estimator, but one concern is the variance. Unbiasedness with finite samples often fails because of the variance, and regularization is often useful rather than correcting bias---If unbiasedness in important, regularization definitely breaks the unbiasedness---. The only discussion about variance is a few lines in page 3 after Eq. (7), but it is unclear how the variance problem is mitigated or why the problem does not suffer high variance.\\n\\nIn general, this paper is well-written and dealing with important problem with interesting method. Several analysis for understanding the advantages of using the proposed method is insufficient.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an unbiased estimator of $\\\\log p_\\\\theta(x)$. Many unbiased estimators of $p_\\\\theta$ exist, but $\\\\log p_\\\\theta$ is needed in many other settings, some of which are not well-served by standard estimators of $p_\\\\theta$. The SUMO estimator is essentially a Russian roulette-based extension of IWAE; it is exactly unbiased, but takes a random and unbounded number of samples.\\n\\nThis allows marginally better optimization of certain models than IWAE with a much smaller average number of samples, and (more importantly) opens new possibilities such as entropy maximization which are not well-served by lower bounds like IWAE. This is a very nice advantage of this estimator.\", \"one_complaint_about_this_class_of_estimators_in_general_is\": \"yes, the exact SUMO procedure is technically unbiased. But in practice, if SUMO takes more than, say, a day of compute time \\u2013\\u00a0something that will happen with extremely small but nonzero probability \\u2013 then the user will kill it. And the SUMO estimator conditioned on taking less than a day of compute time is actually biased. This also likely means that, though unbiased, these estimators can be potentially skewed or otherwise \\\"unpleasant.\\\" For SUMO in particular, the estimator with $K$ truncated probably has bias bounded based on the bias of IWAE with batch size equal to the truncation point, which is likely quite small. But it would be nice to understand this a little more. (Perhaps it's been studied by some of the recent cited work on these types of estimators.)\\n\\nRelatedly, you don't prove that this estimator has a finite variance, and in fact it seems plausible theoretically that it might be infinite. Like the \\\"notorious\\\" harmonic mean estimator, this is troubling. It seems that things are okay in practice, but how can we tell whether the variance is really finite or not? I don't know if there's a good answer, but one diagnostic might be something like the one suggested at https://stats.stackexchange.com/a/9143 . (Your comments about occasional \\\"bounded but very large\\\" gradient estimates are troubling in this respect, depending on what exactly you mean by \\\"bounded\\\".) When you do gradient clipping, the estimator of course then has a finite variance, but can we get some sense of how much bias that introduces?\\n\\nOverall, though, I think this is a very nice new estimator that is both well-founded \\u2013 despite leaving some questions open \\u2013 and likely to be practically useful. Given that it is also extremely on-topic for ICLR and novel, I'm rating the paper as \\\"accept.\\\"\\n\\n(For slightly more detail on the \\\"thoroughness assessment\\\": I did not really check the proofs in the appendix, but did pay attention to the derivations in the main body.)\", \"smaller_notes\": [\"Top of page 3: you comment that SGD \\\"requires unbiased estimates of\\\" gradients of the log-density. In fact, SGD can be shown to work with biased gradient estimators, with suboptimality in the results depending on the bias; see e.g. 
Chen and Luss, http://arxiv.org/abs/1807.11880 .\", \"In the definition of $\\\\tilde{Y}$, above (7): it might make more sense to define $\\\\tilde{Y}$ with some recursive scheme, rather than as an estimator that either computes one of the $\\\\Delta$ values or infinitely many of them.\", \"Start of 3.1: presumably $\\\\Delta_k$ is what converges absolutely, not IWAE?\", \"Start of 3.2: as you note, it is clearly not true that the $\\\\Delta_k^g$ are independent. But you don't really \\\"assume independence\\\" \\u2013 the Russian roulette estimator is still a valid estimator, just perhaps not the optimal among that class. It would be better to say something like that since it seems that the $\\\\Delta_k^g$ are *nearly* independent (or at least nearly uncorrelated), the Russian roulette estimator is probably at least a reasonable choice.\", \"Re: the discussion after (9) and in (12), as well as a few other places: I think you show in Appendix A.3 that $\\\\mathbb E[ \\\\nabla \\\\operatorname{SUMO}(x) ]$ exists, but you don't show that it equals $\\\\nabla \\\\mathbb E[ \\\\operatorname{SUMO}(x) ]$. This is likely true, particularly if $q$ and $p$ are each everywhere-differentiable, and it's totally fine if you don't want to prove it out formally, but it would be worth at least a footnote that this is a thing that requires proof. (See e.g. https://arxiv.org/abs/1801.01401 Theorem 5, for a formal result of this type supporting ReLU activations, which you may be able to just use directly.)\"]}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an unbiased estimator of marginal log likelihood given a latent variable model.\\nThe method extends the importance-weighted log marginal using the Russian roulette estimator. \\nThe marginal log probability estimator is motivated for entropy maximized density estimation and use of REINFORCE (log-derivative) gradient for learning a policy with a latent variable. \\n\\nThe paper is well-organized and provides a contribution for optimizing latent variable models in certain scenarios. \\nThus, I vote for its acceptance. \\n\\nSome questions follow.\\n1) Is it trivial to show the absolute convergence of \\\\Delta_k(x) series?\\nThe absolute convergence is mentioned above equation (8), I am not convinced of this point. \\nPerhaps, if its expectation with respect to q(z;x) is applied, this can be shown from equation (6). \\nOtherwise we need some assumption on q(z;x) like q(z;x) is reasonably close to p(z) or p(z|x).\\n\\n2) How was parameter m set for the experiments?\\n\\n3) I assume the expectation operator is taken over z and K in equations (9, 12). \\nIs this correct? An explicit notation should be informative.\"}"
]
} |
rkg0_eHtDr | Benefits of Overparameterization in Single-Layer Latent Variable Generative Models | [
"Rares-Darius Buhai",
"Andrej Risteski",
"Yoni Halpern",
"David Sontag"
] | One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e. training a very large model) in improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e. generalization). In contrast, unsupervised settings have been under-explored, despite the fact that overparameterization has been observed to be helpful as early as Dasgupta & Schulman (2007). In this paper, we perform an exhaustive study of different aspects of overparameterization in unsupervised learning via synthetic and semi-synthetic experiments. We discuss benefits to different metrics of success (recovering the parameters of the ground-truth model, held-out log-likelihood), sensitivity to variations of the training algorithm, and behavior as the amount of overparameterization increases. We find that, when learning using methods such as variational inference, larger models can significantly increase the number of ground truth latent variables recovered. | [
"overparameterization",
"unsupervised",
"parameter recovery",
"rigorous experiments"
] | Reject | https://openreview.net/pdf?id=rkg0_eHtDr | https://openreview.net/forum?id=rkg0_eHtDr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"beqc2bOBH",
"B1gZcbv3sS",
"HygbeZDhsS",
"B1e0qeD2sS",
"r1g54dkzqH",
"HJeivC_0YH",
"HJledmXCtr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748627,
1573839241449,
1573839080918,
1573838998113,
1572104241965,
1571880547443,
1571857256155
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2417/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2417/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2417/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2417/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2417/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2417/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"This paper studies over-parameterization for unsupervised learning. The paper does a series of empirical studies on this topic. Among other things the authors observe that larger models can increase the number latent variables recovered when fitting larger variational inference models. The reviewers raised some concern about the simplicity of the models studied and also lack of some theoretical justification. One reviewer also suggests that more experiments and ablation studies on more general models will further help clarify the role over-parameterized model for latent generative models. I agree with the reviewers that this paper is \\\"compelling reason for theoretical research on the interplay between overparameterization and parameter recovery in latent variable neural networks trained with gradient descent methods\\\". I disagree with the reviewers that theoretical study is required as I think a good empirical paper with clear conjectures is as important. I do agree with the reviewers however that for empirical paper I think the empirical studies would have to be a bit more thorough with more clear conjectures. In summary, I think the paper is nice and raises a lot of interesting questions but can be improved with more through studies and conjectures. I would have liked to have the paper accepted but based on the reviewer scores and other papers in my batch I can not recommend acceptance at this time. I strongly recommend the authors to revise and resubmit. I really think this is a nice paper and has a lot of potential and can have impact with appropriate revision.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for your comments!\\n\\nRegarding (1), the rationale of the reviewer applies to the *likelihood* of the model we fit: the training-set likelihood should improve, and there is a potential that the test likelihood will drop (i.e. memorization happens). Our paper *does not* focus on this -- it focuses on *parameter recovery* -- here, it is entirely unclear why overparametrization should help, as there can be potentially many overparametrized models with equally good likelihood, but no relation to the ground truth parameters of the model whatsoever. This isn\\u2019t an issue of memorization -- rather in the presence of overparametrization, the model could in principle be un-identifiable (i.e. multiple sets of parameters give rise to the same distribution), so there is no a-priori reason why the optimization should prefer the ground truth parameters. \\n\\nMoreover, in our noisy-OR experiments, 128 latent variables are already 16 times more than the true number of latent variables, and at this level of overparameterization the performance is still much better than without overparameterization. Hence, we show that if there is a \\u201ccritical\\u201d amount of overparametrization at which performance starts to suffer, it may be quite large. \\n\\nRegarding (2): while it would be great to have theory accompanying our empirical observations, we note that theoretical analysis for our settings is likely to be very non-trivial given our current understanding of the optimization for these latent-variable models. For noisy-OR networks, the currently known algorithms with provable guarantees use tensor-based techniques and are very different from the gradient-descent algorithm used by us (e.g. see [1] and [2]). For sparse coding, the currently known results about gradient-descent like algorithms assume incoherence of the ground truth matrix, as well as the *iterates* of the algorithm (e.g. see [3] and [4]). It is clear that the iterates will not be incoherent in our setup due to the existence of near-duplicates -- so such techniques seem difficult to generalize.\\n\\nWe agree that a study of deep generative models would be very interesting, but we see this work as a necessary prerequisite. By focusing on the simplest linear (sparse coding) and non-linear (noisy-OR) models in which the beneficial effect of overparameterization manifests, it allowed us to determine precisely how variations in the parameters of the ground-truth model and algorithms affect it.\", \"there_are_several_key_challenges_in_moving_to_deep_generative_models\": \"1. Even basic questions of identifiability are not well understood. Specifically, for parameter recovery to even make sense, the underlying generative process (i.e., the parameters for the p(z,x) distribution) has to be identifiable from the marginal distribution p(x). Results in this vein are known for sparse coding and noisy-OR networks, but not for deep generative models; note that the recent papers on learning disentangled representations do not include synthetic experiments where data is drawn from a deep generative model and the resulting model is shown to be \\u201crecovered\\u201d.\\n2. Depending on the architecture, there are many different ways to overparametrize a deeper model. (One could overparametrize in terms of depth, width, in some structurally constrained way, etc.) \\n3. Designing filtering/variable extraction steps to recover the ground truth variables is entirely unclear. 
It\\u2019s likely that the outcome of the experiment will vary significantly depending on the implementation of this step, and the set of potential choices is vast.\\n\\n[1] Jernite, Yacine, Yonatan Halpern, and David Sontag. \\\"Discovering hidden variables in noisy-or networks using quartet tests.\\\" In Advances in Neural Information Processing Systems, pp. 2355-2363. 2013.\\n\\n[2] Arora, Sanjeev, Rong Ge, Tengyu Ma, and Andrej Risteski. \\\"Provable learning of noisy-or networks.\\\" In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1057-1066. ACM, 2017.\\n\\n[3] Arora, Sanjeev, Rong Ge, Tengyu Ma, and Ankur Moitra. \\\"Simple, efficient, and neural algorithms for sparse coding.\\\" (2015).\\n\\n[4] Chatterji, Niladri, and Peter L. Bartlett. \\\"Alternating minimization for dictionary learning: Local Convergence Guarantees.\\\" arXiv:1711.03634\"}",
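Since the discussion above leans on the noisy-OR generative model, here is a minimal sketch of the single-layer version under its standard parameterization (Bernoulli latent causes z_j, failure probabilities f_ij = 1 - W_ij, and a per-observable leak); all names and parameter values are illustrative, not those of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_or(n, prior, fail, leak):
    """Draw n samples (z, x) from a single-layer noisy-OR network.

    prior: (K,) latent activation probabilities
    fail:  (D, K) failure probabilities, fail[i, j] = 1 - W[i, j]
    leak:  (D,) probability that x_i turns on with no active parent
    """
    z = (rng.random((n, len(prior))) < prior).astype(float)
    # P(x_i = 0 | z) = (1 - leak_i) * prod_j fail[i, j] ** z_j
    p_off = (1.0 - leak) * np.exp(z @ np.log(fail).T)
    x = (rng.random(p_off.shape) > p_off).astype(int)
    return z.astype(int), x

# e.g. 8 ground-truth latent variables and 20 observables
z, x = sample_noisy_or(1000, prior=np.full(8, 0.2),
                       fail=np.full((20, 8), 0.1), leak=np.full(20, 0.01))
```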
"{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your comments. We reworded the \\u201cmaking precise\\u201d sentence to now read, \\u201ca controlled empirical study that measures and disentangles the benefits of overparameterization in unsupervised learning settings.\\u201d\", \"about_the_lack_of_precise_mathematical_statements\": \"while it would be great to have theory accompanying our empirical observations, we note that theoretical analysis for our settings is likely to be very non-trivial given our current understanding of the optimization for these latent-variable models. For noisy-OR networks, the currently known algorithms with provable guarantees use tensor-based techniques and are very different from the gradient-descent algorithm used by us (e.g. see [1] and [2]). For sparse coding, the currently known results about gradient-descent like algorithms assume incoherence of the ground truth matrix, as well as the *iterates* of the algorithm (e.g. see [3] and [4]). It is clear that the iterates will not be incoherent in our setup due to the existence of near-duplicates -- so such techniques seem difficult to generalize.\\n\\n[1] Jernite, Yacine, Yonatan Halpern, and David Sontag. \\\"Discovering hidden variables in noisy-or networks using quartet tests.\\\" In Advances in Neural Information Processing Systems, pp. 2355-2363. 2013.\\n\\n[2] Arora, Sanjeev, Rong Ge, Tengyu Ma, and Andrej Risteski. \\\"Provable learning of noisy-or networks.\\\" In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1057-1066. ACM, 2017.\\n\\n[3] Arora, Sanjeev, Rong Ge, Tengyu Ma, and Ankur Moitra. \\\"Simple, efficient, and neural algorithms for sparse coding.\\\" (2015).\\n\\n[4] Chatterji, Niladri, and Peter L. Bartlett. \\\"Alternating minimization for dictionary learning: Local Convergence Guarantees.\\\" arXiv:1711.03634\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"Thank you for your comments!\\n\\nWe agree that a study of deep generative models would be very interesting, but we see this work as a necessary prerequisite. By focusing on the simplest linear (sparse coding) and non-linear (noisy-OR) models in which the beneficial effect of overparameterization manifests, it allowed us to determine precisely how variations in the parameters of the ground-truth model and algorithms affect it.\", \"there_are_several_key_challenges_in_moving_to_deep_generative_models\": \"1. Even basic questions of identifiability are not well understood. Specifically, for parameter recovery to even make sense, the underlying generative process (i.e., the parameters for the p(z,x) distribution) has to be identifiable from the marginal distribution p(x). Results in this vein are known for sparse coding and noisy-OR networks, but not for deep generative models; note that the recent papers on learning disentangled representations do not include synthetic experiments where data is drawn from a deep generative model and the resulting model is shown to be \\u201crecovered\\u201d.\\n2. Depending on the architecture, there are many different ways to overparametrize a deeper model. (One could overparametrize in terms of depth, width, in some structurally constrained way, etc.) \\n3. Designing filtering/variable extraction steps to recover the ground truth variables is entirely unclear. It\\u2019s likely that the outcome of the experiment will vary significantly depending on the implementation of this step, and the set of potential choices is vast.\", \"on_theory\": \"while it would be great to have theory accompanying our empirical observations, we note that theoretical analysis for our settings is likely to be very non-trivial given our current understanding of the optimization for these latent-variable models. For noisy-OR networks, the currently known algorithms with provable guarantees use tensor-based techniques and are very different from the gradient-descent algorithm used by us (e.g. see [1] and [2]). For sparse coding, the currently known results about gradient-descent like algorithms assume incoherence of the ground truth matrix, as well as the *iterates* of the algorithm (e.g. see [3] and [4]). It is clear that the iterates will not be incoherent in our setup due to the existence of near-duplicates -- so such techniques seem difficult to generalize.\\n\\n[1] Jernite, Yacine, Yonatan Halpern, and David Sontag. \\\"Discovering hidden variables in noisy-or networks using quartet tests.\\\" In Advances in Neural Information Processing Systems, pp. 2355-2363. 2013.\\n\\n[2] Arora, Sanjeev, Rong Ge, Tengyu Ma, and Andrej Risteski. \\\"Provable learning of noisy-or networks.\\\" In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1057-1066. ACM, 2017.\\n\\n[3] Arora, Sanjeev, Rong Ge, Tengyu Ma, and Ankur Moitra. \\\"Simple, efficient, and neural algorithms for sparse coding.\\\" (2015).\\n\\n[4] Chatterji, Niladri, and Peter L. Bartlett. \\\"Alternating minimization for dictionary learning: Local Convergence Guarantees.\\\" arXiv:1711.03634\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper performs empirical study on the influence of overparameterization to generalization performance of noisy-or networks and sparse coding, and points out overparameterization is indeed beneficial. I find the paper has some drawbacks.\\n\\n1. Overparameterization is better than underparamterization and exact parameterization is not surprising. The question is how much do we need to overparameterize. As the number of parameters goes to infinity, the model can eventually remember all the training data, and has poor generalization. The real interesting question to ask is how to use an excessive amount of parameters, yet still avoid overfitting.\\n\\n2. The discussed models are too simple. I am expecting some theoretical analysis for tasks simple as noisy-or and sparse coding, or some experiments for more complicated (deep) models need to be done, to make the paper more solid.\\n\\nUpdate\\n=====\\n\\nThank the authors for the response. The authors do address my comment #1. I agree that overparameterization improves recovery is a new finding. However, I still think the \\\"information gain\\\" of this paper is somewhat thin. There could be at least some intuitions on why overparameterization helps noisy-or models. I think the analysis can be more in-depth to make this paper more interesting. \\n\\nI would like to raise my score a bit to a \\\"neutral\\\" score, but given the current scoring system I'll just keep my score.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper \\u201caims to be a controlled empirical study making precise the benefits of overparameterization in unsupervised learning settings. \\u201d The author\\u2019s empirical study is comprehensive, and to my knowledge the most detailed published work on this to date. Specifically, the authors empirically study\\n- the ability of networks to recover latent variables\\n- the effects of extreme overparameterization\\n- the effects the training method (e.g. batch size)\\n- latent variable stability over the course of training\\n\\nIn line with the findings for supervised settings, the authors find that overparameterization is often beneficial, and that overfitting is a surprisingly small issue. This is an interesting and useful observation, particularly since it at first sight appears to be in disagreement with some earlier work (the authors suggest explanations for the differing observations). \\n\\nAs the authors point out (and I agree), the paper constitutes a compelling reason for theoretical research on the interplay between overparameterization and parameter recovery in latent variable neural networks trained with gradient descent methods. \\n\\nThe authors perform studies on a range of different real-world and synthetic datasets. \\n\\nThe paper is well-written, well-structured, and easy to follow. Relevant literature has been cited. The appendices contain a wealth of details that will make this work reproducible.\", \"decision\": \"weak accept. The paper contains some new insights, but its contributions are not quite as substantial (e.g. lack of precise mathematical statements) or surprising as those in stronger ICLR papers.\", \"a_small_gripe\": \"the authors promise \\u201c a controlled empirical study making precise the benefits of overparameterization in unsupervised learning settings\\u201d. I would argue that \\u201cmaking precise\\u201d is too strong for what the paper actually delivers. I suggest rewording this.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper investigates benefit of over-parameterization for latent variable generative model while existing researches typically focus on supervised learning settings. It is experimentally shown that the over-parameterization helps to obtain better optimization, but too much over-parameterization gives performance deterioration. In the numerical experiments, the effect of over-parameterization is investigated from several aspects.\\n\\nThe motivation of this paper is interesting. The writing of the paper is clear, and I could follow the contents easily.\\n\\nOn the other hand, I have the following concerns on the significance of the paper.\\n- All datasets investigated in this paper are rather small. If there were thorough investigations on more modern deep generative models, then the paper would be stronger. For example, the latent variable model is recently well discussed in the context of disentanglement representation. The generative models to obtain disentanglement representation could be investigated in the frame-work of this paper.\\n- This is an empirical study, but if there was theory to support the empirical observations, then the paper was more convincing. The problem itself is just a sparse coding problem. Hence, I think what investigated in this paper can be discussed by relating sparse coding theories. However, there is no theoretical justification on the experimental results.\\n- Summarizing the above arguments, the insight obtained in this paper is a bit weak. More ablation study and more experiments on general models will clarify what is going on in the over-parameterized model for latent generative models.\"}"
]
} |
SkxaueHFPB | Implicit competitive regularization in GANs | [
"Florian Schaefer",
"Hongkai Zheng",
"Anima Anandkumar"
] | Generative adversarial networks (GANs) are capable of producing high quality samples, but they suffer from numerous issues such as instability and mode collapse during training. To combat this, we propose to model the generator and discriminator as agents acting under local information, uncertainty, and awareness of their opponent. By doing so we achieve stable convergence, even when the underlying game has no Nash equilibria. We call this mechanism \emph{implicit competitive regularization} (ICR) and show that it is present in the recently proposed \emph{competitive gradient descent} (CGD).
When comparing CGD to Adam using a variety of loss functions and regularizers on CIFAR10, CGD shows a much more consistent performance, which we attribute to ICR.
In our experiments, we achieve the highest inception score when using the WGAN loss (without gradient penalty or weight clipping) together with CGD. This can be interpreted as minimizing a form of integral probability metric based on ICR. | [
"GAN",
"competitive optimization",
"game theory"
] | Reject | https://openreview.net/pdf?id=SkxaueHFPB | https://openreview.net/forum?id=SkxaueHFPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"ugWQ9NTxBn",
"SJx5UCR_iS",
"ryeDKp0_ir",
"H1gYHaCdjH",
"SkeEkTA_iH",
"rJxILrkIir",
"SkxA5ukeoB",
"H1xrzuygiS",
"HyeCPFQo9r",
"BkeoDK4qqH",
"B1e_HcwAFr",
"B1xuDl80YB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748597,
1573609041656,
1573608830870,
1573608769496,
1573608667699,
1573414221746,
1573021845969,
1573021709274,
1572710758081,
1572649315390,
1571875391704,
1571868768036
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2416/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2416/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2416/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2416/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes to study \\\"implicit competitive regularization\\\", a phenomenon borne of taking a more nuanced game theoretic perspective on GAN training, wherein the two competing networks are \\\"model[ed] ... as agents acting with limited information and in awareness of their opponent\\\". The meaning of this is developed through a series of examples using simpler games and didactic experiments on actual GANs. An adversary-aware variant employing a Taylor approximation to the loss.\\n\\nReviewer assessment amounted to 3 relatively light reviews, two of which reported little background in the area, and one more in-depth review, which happened to also be the most critical. R1, R2, R3 all felt the contribution was interesting and valuable. R1 felt the contribution of the paper may be on the light side given the original competitive gradient descent paper, on which this manuscript leans heavily, included GAN training (the authors disagreed); they also felt the paper would be stronger with additional datasets in the empirical evaluation (this was not addressed). R2 felt the work suffered for lack of evidence of consistency via repeated experiments, which the authors explained was due to the resource-intensity of the experiments. \\n\\nR5 raised that Inception scores for both the method and being noticeably worse than those reported in the literature, a concern that was resolved in an update and seemed to center on the software implementation of the metric. R5 had several technical concerns, but was generally unhappy with the presentation and finishedness of the manuscript, in particular the degree to which details are deferred to the CGD paper. (The authors maintain that CGD is but one instantiation of a more general framework, but given that the empirical section of the paper relies on this instantiation I would concur that it is under-treated.)\\n\\nMinor updates were made to the paper, but R5 remains unconvinced (other reviewers did not revisit their reviews at all). In particular: experiments seem promising but not final (repeatability is a concern), the single paragraph \\\"intuitive explanation\\\" and cartoon offered in Figure 3 were viewed as insufficiently rigorous. A great deal of the paper is spent on simple cases, but not much is said about ICR specifically in those cases. \\n\\nThis appears to have the makings of an important contribution, but I concur with R5 that it is not quite ready for mass consumption. As is, the narrative is locally consistent but quite difficult to follow section after section. It should also be noted that ICLR as a venue has a community that is not as steeped in the game theory literature as the authors clearly are, and the assumed technical background is quite substantial here. For a game theory novice, it is difficult to tell which turns of phrase refer to concepts from game theory and which may be more informally introduced herein. I believe the paper requires redrafting for greater clarity with a more rigorous theoretical and/or empirical characterization of ICR, perhaps involving small scale experiments which clearly demonstrates the effect. 
I also believe the authors have done themselves a disservice by not availing themselves of 10 pages rather than 8.\\n\\nI recommend rejection at this time, but hope that the authors view this feedback as valuable and continue to improve their manuscript, as I (and the reviewers) believe this line of work has the potential to be quite impactful.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We would like to thank all reviewers for their assessment and feedback. The main purpose of this work was to illustrate a novel mechanism that could stabilize GAN training without the need for Lipschitz-regularization (for instance, through gradient penalties) and we were happy to see that this idea was appreciated by most reviewers.\", \"it_seemed_to_us_that_there_were_two_main_concerns_from_the_side_of_the_reviewers\": \"(1:) Reviewer #5 pointed out that the inception score for WGAN-GP in our publication is significantly below the one reported in the original WGAN-GP publication. As detailed below this was due to the use of a pytorch-- as opposed to tensorflow implementation of inception using a slightly different model. We computed tensorflow inception scores for WGAN-GP and CGD and observe that they match the results reported in the literature with CGD still outperforming WGAN-GP trained with Adam (for more details, see our response to reviewer #5).\\n\\n(2:) Reviewers #1 and #2 were concerned about the extent to which our contribution goes beyond the existing work on CGD. We emphasize that the main contribution of this work is the presentation of a novel mechanism for the stabilization of GAN training, of which CGD is just one manifestation. In particular, most of section 4 describes a general framework for the utilization of implicit competitive regularization. By modelling the information accessible to the agents, their way of handling uncertainty, and their anticipation of the other players action, multiple existing algorithms can be recovered. We furthermore hope that this framework will be useful in guiding the development of novel algorithms. \\n\\nWe are thankful for any further comments.\"}",
"{\"title\": \"Response to review #1\", \"comment\": \"We thank the reviewer for their positive assessment and helpful feedback.\\n\\nWe believe that the contribution of the present work is complementary to that of the initial work on CGD. While CGD proposes an algorithm for general competitive optimization problems, the present work investigates how the modeling of generator and discriminator as agents acting under uncertainty and opponent awareness can stabilize GANs, even for seemingly unstable loss functions.\\nCGD is a particularly clean instance of this phaenomenon, which is why we study its implicit competitive regularization in more detail. \\nHowever, the insights into the stabilization of GAN training, which we consider the main contribution of the present work, are not restricted to CGD. Instead, we hope that they encourage a new approach to devising optimization algorithms for GANs. See also our response to Reviewer #2.\"}",
"{\"title\": \"Response to review #2\", \"comment\": \"We thank the reviewer for their positive assessment and helpful feedback.\\n\\nRegarding\\n==============================================================\\n\\u201dThe paper gives background and intuition to solidifying the CGD update. Are there additional algorithmic approaches that are possible and potentially more efficient with this understanding in mind? This I believe would help solidify the paper and build beyond CGD.\\u201d\\n==============================================================\\n\\nIndeed, CGD is just one instance of implicit regularization that maps particularly well onto the discussions in section 2 and 3, \\nOur hope is that this work spurs the development of new algorithms that emphasize the game-theoretic modeling of generator and discriminator acting under uncertainty and awareness of their opponent.\\nThe framework we give in section 4 is meant to serve as a starting point to this end, and also to relate our work to existing methods in the literature, such as LOLA. Since a discussion of all the different algorithms that could be built on these ideas is beyond the scope of the paper, we restrain ourselves to treating CGD, as an example.\", \"regarding_the_remaining_comments\": [\"Repeating the entire experiments over many runs is very computationally expensive, which is why we singled out one setting (OGAN-DROPOUT) to repeat over seven runs (first panel of figure 5). If there are other configurations that you would find particularly enlightening to see over multiple runs, we will be happy to add them to the paper.\", \"Under optimal implementation (mixed mode automatic differentiation), the number of backward passes provided in the second panel of figure 4 should provide a good proxy for the time complexity. As we can see in this plot, for different time budgets, different methods will yield better results. In the limit of large time budgets however, ACGD-WGAN seemed to perform best.\", \"Our experiments in figures 4 and 5 suggest that in the limit of many training iterations, models trained with ACGD tend to saturate at higher inception score.\", \"This is a nice idea and a great example of the kind of thinking we wanted to encourage with this work!\"]}",
"{\"title\": \"Response to review #3\", \"comment\": \"We thank the reviewer for the positive assessment. In particular, we are very pleased to hear that the reviewer found Sections 2 and 3 compelling, since we consider them central to our work.\\nWe also thank the reviewer for the suggestions for improvement.\\nWe are doing additional proofreading and added a sentence describing the vertical axis in Figure 1. While we agree that human scores would be ideal this is unfortunately not practical, which is why we settled for the inception score, which is a popular measure of sample quality in the field.\"}",
"{\"title\": \"Update: Tensorflow inception score\", \"comment\": \"We have rerun ACGD-WGAN-NOREG and ADAM-WGAN-GP and computed the official tensorflow inception scores, as opposed to the pytorch one. ADAM-WGAN-GP now reaches inception score of approximately 6.5, consistent with the scores reported in Figure 3 of the WGAN-GP paper. (See our reply below for a more detailed discussion).\\n\\nACGD-WGAN-NOREG reaches tensorflow inception score of approximately 7.1-7.3 and thus still improves upon ADAM-WGAN-GP, just as with the Pytorch inception score. We expect the remaining conclusions of our experiments to stay the same when changing from pytorch to tensorflow inception score, although rerunning all experiments may take some time.\\nPlots, code, and model can be accessed under https://drive.google.com/drive/folders/10M0keSN47PfWi4-L6bdQfXWx4BSTs2c3\"}",
"{\"title\": \"Discussion of why CGD acts as a regularizer\", \"comment\": \"Please see below for our response to the concerns regarding the discussion why CGD acts as a regularizer:\", \"the_review_states\": \"===========================================================\\n\\u201cThe authors state:\\n\\u201cIf some of the singular values of Dxy are very large, this amounts to approximately restricting the update to the orthogonal complement of the corresponding singular vectors\\u201d \\n\\nI don\\u2019t see how this is the case. The terms Dxy/Dyx aren\\u2019t really introduced or defined anywhere in this work. Assuming is the transpose of the other (?) then the update direction is:\\nA + B where A=inv(S) grad_x and B = inv(S)Dxy grad_y (and S = I + Dxy Dyx). So we have a term which is being affected by the smallest singular values of S and a term which is the orthogonal projection of grad_y onto Dxy, alternatively the ridge-regression fit of grad_y on Dxy which would attenuate directions corresponding to the *small* singular values (as is well known from the theory of ridge regularizers). I feel like there is much more to say here than what is discussed in the paper in very vague terms.\\u201d\\n===========================================================\\n\\nWe apologize for omitting the definition of the $D_{xy}^2f$, which refers to the matrix containing the partial derivatives $\\\\frac{\\\\partial^2f}{\\\\partial x_{i} \\\\partial y_{j}}$ of the objective function. Under mild regularity assumptions, $D_{xy}^2$ and $D_{yx}^2$ are indeed transposes of each other (Schwarz\\u2019s theorem).\\nBoth A and B attenuate directions corresponding to **large** singular values of $D_{xy}^2$. The intuitive explanation is that they both have a square of $D_{xy}^2$ appearing in the denominator (i.e as a matrix inverse), but at most a single factor of $D_{xy}$ appearing in the numerator.\\nBy developing $x$ and $y$ in the basis given by the left resp. right singular vectors of $D_{xy}^2$ this argument is made rigorous by observing that the component of $\\\\nabla_x$ that corresponds to the singular value $\\\\sigma$ is attenuated by a factor $1/(1+\\\\sigma^2)$ while the corresponding component of $\\\\nabla_y$ is attenuated by a factor $\\\\sigma/(1 + \\\\sigma^2)$.\\nWe note that $D_{yx} B$ and not B is the (regularized) projection of grad_y onto the row space of Dxy.\", \"the_review_further_states\": \"===========================================================\\n\\u201cOf course the effective rank of S, or the rate of decay of its singular values is crucially important. In practise I would assume the smaller SVs of Dxy to be difficult to estimate or the matrix to be rank deficient in which case they would simply be unity in the inverse whereas the directions corresponding to large singular values would be attenuated. So in this case it is the regularized orthogonal complement but its not clear (if the matrix is not full rank) that it is a meaningful direction (and again this is all highly dependent on the effective rank, too).\", \"further_on_it_is_mentioned\": \"\\u201cFor smoothly varying singular vectors, this can be thought of as approximately constraining the trajectories to a manifold of robust play\\u201d.\\n\\nFirst it is not at all clear to me what \\u201csmoothly varying singular vectors\\u201d are. Varying with respect to what? 
Secondly, the \\u201cmanifold of robust play\\u201d has not been defined anywhere.\\u201d\\n===========================================================\\n\\nThe projection onto the orthogonal complement of $D_{xy}f$ is meaningful since it corresponds to strategies of one player, the effect of which (to leading order) does not depend on the move of the other player.\\nIn an attempt to provide additional intuition, we propose to think of these directions as constituting the tangent space of a \\u201cmanifold of robust play\\u201d, that is a manifold of strategies on which the players can move around by only playing strategies that are robust in the sense that their payoff is (to leading order) unaffected by arbitrary simultaneous moves of the other player.\\nWe apologize if this intuition was not helpful and will try to add additional clarifications. \\n\\nLastly, the review states:\\n===========================================================\\n\\u201cFinally, figure 3 is quite bizarre to me. None of the quantities have been rigorously defined and so it seems like the relative effect of each of the arrows and the manifold have been drawn arbitrarily in order to fit the story, rather than to actually illuminate the true behaviour in an intuitive manner. \\nB (defined above) has a very clear interpretation as a least-squares fit so I figure that any geometric interpretation of the CGD update direction could start from there.\\u201d\\n===========================================================\\n\\nThe first panel of figure 3 is indeed not a plot, but an illustration of how the two regularization terms in CGD could prevent overtraining, proposing an explanation for the empirical observation that CGD does not \\u201covertrain\\u201d the way that Adam does. We apologize that this was not clear and will add some additional clarification.\"}",
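For readers who want to check the attenuation argument in the reply above, here is a minimal numpy sketch (ours, not the authors' code; all variable names are illustrative) verifying the claimed factors $1/(1+\sigma^2)$ and $\sigma/(1+\sigma^2)$ for a random mixed Hessian:

```python
import numpy as np

# Verify the attenuation of the CGD update terms A = S^{-1} grad_x and
# B = S^{-1} Dxy grad_y, with S = I + Dxy Dyx, in the singular basis of Dxy.
rng = np.random.default_rng(0)
n = 5
Dxy = rng.standard_normal((n, n))  # stands in for the mixed Hessian D^2_{xy} f
Dyx = Dxy.T                        # Schwarz's theorem: the transpose of Dxy
U, sigma, Vt = np.linalg.svd(Dxy)  # Dxy = U @ diag(sigma) @ Vt

S = np.eye(n) + Dxy @ Dyx
grad_x = rng.standard_normal(n)
grad_y = rng.standard_normal(n)
A = np.linalg.solve(S, grad_x)
B = np.linalg.solve(S, Dxy @ grad_y)

# The u_i-component of A is <u_i, grad_x> / (1 + sigma_i^2), and the
# u_i-component of B is sigma_i * <v_i, grad_y> / (1 + sigma_i^2), so both
# terms damp directions associated with *large* singular values.
assert np.allclose(U.T @ A, (U.T @ grad_x) / (1 + sigma**2))
assert np.allclose(U.T @ B, sigma * (Vt @ grad_y) / (1 + sigma**2))
```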
"{\"title\": \"Experiments on inception scores\", \"comment\": \"Thank you for your review. We disagree with your assessment but believe that it can be explained by misunderstandings as outlined below. We will add additional clarifying remarks along the detailed response below that will hopefully prevent this from happening in the future.\", \"we_will_first_address_the_concerns_about_the_experiments_on_inception_scores\": \"\", \"the_review_reads\": \"===========================================================\\n\\u201cI find the reporting of Inception Score highly suspect. The authors choose WGAN-GP as a baseline and report scores of ~4.5 vs ~5.5 with their modification. However the WGAN-GP paper reports an IS of 7.86 on CIFAR. Furthermore, current GAN SOTA on CIFAR is approaching IS=9. I am not making the argument that the authors ought to demonstrate SOTA results, however they should at least present results which are consistent with the published results of their chosen baseline.\\u201d\\n===========================================================\\n\\nThe inception score of 7.86 in the WGAN-GP paper was achieved with a larger ResNet architecture and measured using the tensorflow inception score (IS) (Table 3 in the WGAN-GP paper).\\nFor our experiment, we used an existing pytorch port ( https://github.com/EmilienDupont/wgan-gp/blob/master/models.py ) of the DCGAN structure in the WGAN-GP repository ( https://github.com/igul222/improved_wgan_training/blob/master/gan_cifar.py ) that was used to produce Figure 3 in the WGAN-GP paper and which is only reported to achieve IS of about 6, and reported the pytorch IS. The pytorch IS can be ~5-10% lower than the tensorflow IS (this is reported, for instance, by https://github.com/ajbrock/BigGAN-PyTorch/blob/master/inception_utils.py ), which puts us in the ballpark of the results reported in the WGAN-GP paper. We are in the process of computing tensorflow IS and FIDs for select runs.\", \"the_review_furthermore_reads\": \"===========================================================\\n\\u201cThe authors then make this statement:\\n\\u201cThus, its superior performance supports the claim that ICR is the appropriate form of regularization for GANs. We emphasize that in our experiments we did not perform any architecture or hyperparameter tuning, and instead use a model intended to be used with WGAN gradient penalty\\u201d\\nThis does not hold, since the numbers reported are far below the actual baseline.\\u201d\\n===========================================================\\n\\nAs outlined above the numbers that we report for the baseline methods are similar to those in the literature. We furthermore emphasize that the scientific purpose of these experiments is to test whether implicit competitive regularization (ICR) is present in GANs and whether it provides a more appropriate way of regularizing GAN training. To this end we show WGAN-loss without regularization is not only stable when using CGD (which runs against the arguments proposed in the original WGAN papers), but that CGD also leads to improved performance and robustness compared to other optimizers and regularization methods.\\nFor this argument, it is more important that the architecture and parameters are reasonable and not cherrypicked (which is why we choose them from the literature on WGAN-GP), rather than whether they achieve \\u201cSOTA\\u201d inception score.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"https://openreview.net/pdf?id=SkxaueHFPB\\n\\nThe paper has some interesting ideas but I don\\u2019t think any of them are fully fleshed out.\\n\\nI find the reporting of Inception Score highly suspect. The authors choose WGAN-GP as a baseline and report scores of ~4.5 vs ~5.5 with their modification. However the WGAN-GP paper reports an IS of 7.86 on CIFAR. Furthermore, current GAN SOTA on CIFAR is approaching IS=9. I am not making the argument that the authors ought to demonstrate SOTA results, however they should at least present results which are consistent with the published results of their chosen baseline.\", \"the_authors_then_make_this_statement\": \"\\u201cThus, its superior performance supports the claim that ICR is the appropriate form of regularization for GANs. We emphasize that in our experiments we did not perform any architecture or hyperparameter tuning, and instead use a model intended to be used with WGAN gradient penalty\\u201d\\nThis does not hold, since the numbers reported are far below the actual baseline.\\n\\nBesides this major point, I am unconvinced by some of the mathematical statements in the paper. Much of the mathematical details are deferred to the original CGD paper. It is not really particularly reader-friendly to defer that to the CGD paper since they are seemingly crucial to the discussion here. Relative to the CGD paper some signs have been flipped and some definitions appear to be used in subtly different ways which makes for a very difficult read. I feel that far too much has been left as an exercise to the reader.\", \"concretely_my_concerns_refer_to_the_main_discussion_of_the_effect_of_the_cgd_as_a_regularizer\": \"\", \"the_authors_state\": \"\\u201cIf some of the singular values of Dxy are very large, this amounts to approximately restricting the update to the orthogonal complement of the corresponding singular vectors\\u201d \\n\\nI don\\u2019t see how this is the case. The terms Dxy/Dyx aren\\u2019t really introduced or defined anywhere in this work. Assuming is the transpose of the other (?) then the update direction is:\\nA + B where A=inv(S) grad_x and B = inv(S)Dxy grad_y (and S = I + Dxy Dyx). So we have a term which is being affected by the smallest singular values of S and a term which is the orthogonal projection of grad_y onto Dxy, alternatively the ridge-regression fit of grad_y on Dxy which would attenuate directions corresponding to the *small* singular values (as is well known from the theory of ridge regularizers). I feel like there is much more to say here than what is discussed in the paper in very vague terms. \\n\\nOf course the effective rank of S, or the rate of decay of its singular values is crucially important. In practise I would assume the smaller SVs of Dxy to be difficult to estimate or the matrix to be rank deficient in which case they would simply be unity in the inverse whereas the directions corresponding to large singular values would be attenuated. 
So in this case it is the regularized orthogonal complement but its not clear (if the matrix is not full rank) that it is a meaningful direction (and again this is all highly dependent on the effective rank, too).\", \"further_on_it_is_mentioned\": \"\\u201cFor smoothly varying singular vectors, this can be though of as approximately constraining the trajectories to a manifold of robust play\\u201d.\\n\\nFirst it is not at all clear to me what \\u201csmoothly varying singular vectors\\u201d are. Varying with respect to what? Secondly, the \\u201cmanifold of robust play\\u201d has not been defined anywhere. \\n\\nFinally, figure 3 is quite bizarre to me. None of the quantities have been rigorously defined and so it seems like the relative effect of each of the arrows and the manifold have been drawn arbitrarily in order to fit the story, rather than to actually illuminate the true behaviour in an intuitive manner. \\nB (defined above) has a very clear interpretation as a least-squares fit so I figure that any geometric interpretation of the CGD update direction could start from there.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper analyzes instability in training GANs, relates it to Nash equilibria, and proposes a novel training set-up based on competitive nashless games. The solution is related to other recently proposed work, but the paper brings additional insights into understanding it.\\n\\nThe analysis on conditions that lead to divergence or convergence, and of the proposed solution, are interesting. I recommend accepting.\\n\\nI have some basic knowledge of GANs but am not deeply familiar with the field. The paper was accessible to me on a high level. Especially compelling to me were sections 2 and 3. The empirical study also seemed to yield positive results.\", \"suggestions_for_improvement\": [\"Additional proofreading would be beneficial.\", \"The scale of the axes in figure 1 is not clear, making it a little less compelling.\", \"Inception score is used as the only evaluation metric for the generators. Perhaps this is standard in the field, although human ratings would seem more reliable to me.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a novel training methodology for GANs to improve stability. The resulting regularization, termed implicit competitive regularization, updates the parameters of both the generator and the discriminator to be robust to one another. A framework for practical application of this approach is described -- this is done by a local Taylor approximation of the loss and updating each model\\u2019s parameters to this approximate model\\u2019s nash equilibrium. The method is shown to prevent overfitting and produce high-performing models with consistent training.\\n\\nThe approach and insights are reasonable and the problem is worthwhile to approach. The method is clear and the associated code is appreciated. The results are interesting in terms of describing the ICR property and demonstrating its performance. \\n\\nThe paper gives background and intuition to solidifying the CGD update. Are there additional algorithmic approaches that are possible and potentially more efficient with this understanding in mind? This I believe would help solidify the paper and build beyond CGD.\", \"some_additional_results_that_could_clarify_the_benefits\": [\"A primary contribution of the training approach is training consistency. The distribution over many training runs should be provided in figures.\", \"Clearly due to the additional gradient calls the approach is computationally slower, as shown in Figure 4. If each approach is trained for the same amount of time, how does the performance compare?\", \"One may expect that the update may result in more conservative updates and thus potentially lower-performing policies in the limit. If there iterations were instead log-scale to show performance in high training iterations, is there any loss of performance in top-performing runs?\", \"Could a similar approach be used to allow safe gradient updates according to a risk over the opponent\\u2019s possible updates, e.g., via CVar? This may also be a stable training procedure as well with less conservatism.\", \"The paper should be proofread, there are several minor typos throughout, e.g.:\", \"\\u201cgenerators producing that produce good\\u201d -> generators that produce good\", \"\\u201cThis game is very similar similar\\u201d -> repeated word\", \"\\u201cGAN trainin.\\u201d\"]}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper presents a new way of regularization in Generative Adversarial Network (GAN). It is well known that a naive training of GAN can fail to converge. Although GAN is relatively a new concept, many papers tried to introduce a good way of stabilizing GAN training. I believe that this paper is addressing the stability issues in the most fundamental and effective ways. The paper utilizes Competitive Gradient Descent proposed by Sch\\u00e4fer and Anandkumar in 2019 in training GAN. The intuition is that both players should predict what their opponent is going to do. This results in a convergence point where each agent becomes robust against changes of the other agent. The performance of the new method was demonstrated on CIFAR10.\\n\\nThe paper is definitely interesting. If this method works as well as the authors claim, it can significantly improve the practicality of GAN. The paper is very readable and understandable but many small typos and grammar errors can be found in the text. This can be easily corrected by the authors.\\n\\nHowever, the contribution of this paper is questionable. The original CGD paper already applies it to train a GAN.\\n\\nI would also appreciate if the method is tested on multiple other data sets. \\n\\nOverall, the paper is well-written, technically correct and interesting enough for the venue. However, as I pointed out above, the contribution should be more clearly stated.\"}"
]
} |
HJgpugrKPS | Scale-Equivariant Steerable Networks | [
"Ivan Sosnovik",
"Michał Szmaja",
"Arnold Smeulders"
] | The effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on the MNIST-scale dataset and on the STL-10 dataset in the supervised learning setting. | [
"Scale Equivariance",
"Steerable Filters"
] | Accept (Poster) | https://openreview.net/pdf?id=HJgpugrKPS | https://openreview.net/forum?id=HJgpugrKPS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"h1l5jl__g",
"HJeiPFvjjS",
"HylDlFPosB",
"H1xLPrvooS",
"B1xNrEPsjH",
"SyeNvGwsoS",
"r1g06sdZir",
"SkgxHUDycS",
"H1g-eN-3KS",
"BJgeyM2iYH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748567,
1573775714659,
1573775598850,
1573774686088,
1573774396092,
1573773916288,
1573125061711,
1571939895903,
1571718121188,
1571697111660
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2415/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2415/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2415/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2415/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2415/Authors"
],
[
"~K_V_Subrahmanyam1"
],
[
"ICLR.cc/2020/Conference/Paper2415/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2415/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2415/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"This work presents a theory for building scale-equivariant CNNs with steerable filters. The proposed method is compared with some of the related techniques . SOTA is achieved on MNIST-scale dataset and gains on STL-10 is demonstrated. The reviewers had some concern related to the method, clarity, and comparison with related works. The authors have successfully addressed most of these concerns. Overall, the reviewers are positive about this work and appreciate the generality of the presented theory and its good empirical performance. All the reviewers recommend accept.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reply to Review #1. Part 2\", \"comment\": \"Q: (3.1) In Figure 2, how many layers does the network have which was used to construct the middle plot?\", \"a\": \"The scales and therefore $\\\\sigma$ are the hyperparameters of the proposed method. The set of values we chose from is tailored by the requirement of the completeness of the obtained basis on the smallest scale when it is projected to the pixel grid.\\n\\nFor MNIST-scale experiment we used 4 scales with a step of $q=(10/3)^{1/3} \\\\approx 1.49$. We generate filters for $\\\\sigma=1.5, 1.5 q, 1.5 q^2, 1.5 q^3$ and store them in an array of spatial extent of $V=13$. We choose $q$ by relying on prior knowledge about the dataset. And the value of 1.5 is chosen from a set of $[1.1 - 1.7]$ with a step of 0.1 by using cross-validation. We choose the value which gives the best accuracy on the validation set. The variation of the accuracy during validation is of 0.1% on the scale of about 2%.\\n\\nFor STL-10 experiment, we sample 3 bases for $\\\\sigma = 0.9, 0.9 \\\\sqrt{2}, 1.8$ and store them in an array of spatial extent of $V= 7$. We chose the maximum number of scales we are able to use on our hardware. Here we use value of $0.9$ as it generated the complete basis on the smallest scale. And the value of $\\\\sqrt{2}$ is motivated by the assumption that in natural images of cats, cars, horses, etc. the scale variations are usually of factor 2. We did not run cross validation on this dataset.\", \"q\": \"(3.2) It would have been useful to include a study of the effect of the range and resolution of the scale space.\"}",
"{\"title\": \"Reply to Review #1. Part 1\", \"comment\": \"Thank you for your review.\", \"q\": \"(2.5) It was not immediately obvious how $\\\\psi(s, t)$ was related to $\\\\psi(x)$.\", \"a\": \"We modified it in the text to make it easier for understanding. $\\\\psi(x) = \\\\psi(s=1, t=x)$\"}",
"{\"title\": \"On the related paper\", \"comment\": \"Thank you for this useful reference. We added it to our paper.\"}",
"{\"title\": \"Reply to Review #3\", \"comment\": \"Thank you for your review.\\n\\nWe compared our results on STL-10 to the current state-of-the-art model in the supervised learning setting known as Harm-WRN. Our model SESN-B ourperformes it by more than 1% and achieves new state-of-the-art result on this dataset in the supervised learning setting. We include Harm-WRN in Table 3 for comparison.\", \"q\": \"Which scales were chosen for the fixed basis? How large in spatial extent are the kernels in the basis elements, at each scale? In the implementation, what is the value of V?\", \"a\": \"The scales and therefore $\\\\sigma$ are the hyperparameters of the proposed method. The set of values we choose from is tailored by the requirement of the completeness of the obtained basis on the smallest scale when it is projected to the pixel grid.\\n\\nFor MNIST-scale experiment we used 4 scales with a step of $q=(10/3)^{1/3} \\\\approx 1.49$. We generate filters for $\\\\sigma=1.5, 1.5 q, 1.5 q^2, 1.5 q^3$ and store them in an array of spatial extent of $V=13$. We choose $q$ by relying on prior knowledge about the dataset. And the value of 1.5 is chosen from a set of $[1.1 - 1.7]$ with a step of 0.1 by using cross-validation. We choose the value which gives the best accuracy on the validation set. The variation of the accuracy during cross-validation is of 0.1% on the scale of about 2%.\\n\\nFor STL-10 experiment, we sample 3 bases for $\\\\sigma = 0.9, 0.9 \\\\sqrt{2}, 1.8$ and store them in an array of spatial extent of $V= 7$. We chose the maximum number of scales we are able to use on our hardware. Here we use value of $0.9$ as it generated the complete basis on the smallest scale. And the value of $\\\\sqrt{2}$ is motivated by the assumption that in natural images of cats, cars, horses, etc. the scale variations are usually of factor 2. We did not run cross validation on this dataset.\"}",
"{\"title\": \"Reply to Review #2\", \"comment\": \"Thank you for your review.\", \"q\": \"Would you care to make a comparison between these two manuscripts?\", \"a\": \"In \\u201cScale-Equivariant Neural Networks with Decomposed Convolutional Filters\\u201d the authors propose scale-translation equivariant convolutional layers which are similar to what we propose. In addition, we propose the maximum scale projection which transforms the functions on scale-translations to the functions on just translations. In ScDCFNet paper this step is left unspecified. This difference has consequences for the experimental outcomes.\"}",
"{\"title\": \"A recent reference which also deals with scale.\", \"comment\": \"The following reference also deals with scale invariance using coupled-autoencoders. The approach is entirely different, but the idea is to deal with scales under rotations.\\n\\nSO(2)-equivariance in Neural networks using tensor nonlinearity, Muthuvel Murugan, K V Subrahmanyam, BMVC 2019.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a framework (SESN) for learning deep networks that possess scale equivariance in addition to translation invariance. The formulation is based on group convolution on the scale-translation group. Filters are represented as the coefficients of a set of continuous basis functions, which are sampled (once) at a discrete set of scales. The theoretical formulatioin is clear and interesting. The approach is evaluated in terms of image classification accuracy. The set of baselines is quite exhaustive, including recent papers and papers that are not widely-known.\\n\\nThe most significant improvement for the STL-10 dataset was obtained by the SESN-B variant. This is interesting, because it applies the same operation independently at multiple scales and periodically performs global pooling over scale.\\n\\nThe effectiveness of the approach was demonstrated in the low-data regime, where the inductive bias of scale equivariance is more likely to help.\\n\\nOverall I found the paper to be thought-provoking and well-executed. There are a number of questions that I would still like to see investigated, but nevertheless I feel that this paper already represents a worthwhile contribution.\", \"most_important_issues_and_questions\": \"(1.1) The SESN-B architecture resembles quite closely the SI-ConvNet architecture of Kanazawa et al. (except that that paper resized the images instead of the filters). While your approach may be more computationally efficient, it's not clear what leads to the improvement in accuracy here? Can you explain the difference?\\n\\n(1.2) I would have preferred to see the approach demonstrated on a task which possesses scale equivariance, such as semantic segmentation.\\n\\n(1.3) To argue in favour of the continuous basis, it would have been more convincing to compare against directly learning the filters at the highest resolution and obtaining the other filters by downsampling. This would not represent a runtime cost during inference.\\n\\n(1.4a) It seems that SESN-C should contain SESN-A as a special case. However, SESN-C is worse than SESN-A. Do you have any idea whether this is due to optimization difficulty or over-fitting? Could you compare the training objectives?\\n(1.4b) It is stated that the scale equivariance of SESN-C is worse than SESN-A and -B. However, it should still be scale equivariant, except for boundary effects in scale? What is the parameter N_S in this experiment compared to the number of scales S? And the same question for the plot on the right in Figure 2.\\n(1.4c) How does SESN-C have the same number of parameters as SESN-A and SESN-B? I thought that more parameters would be required to compute interscale interaction. Was the number of channels reduced?\", \"issues_with_clarity\": \"(2.1) The explanation of equation 10 is not clear. In particular, the diagonal structure in Figure 1 is not stated anywhere in the text, it is simply explained as an expansion from [C_out, C_in, S, V, V] to [C_out S, C_in S, V, V].\\n\\n(2.2) It's not immediately apparent how multiple applications of convHH are used to provide interscale interaction. 
I assume it is achieved by shifting f or psi in the scale dimension for each application of convHH, or equivalently by modifying the base-scale in the basis?\\n\\n(2.3) The explanation of \\\"scalar\\\" and \\\"vector\\\" variants in the experimental section was not perfectly clear. It is stated that \\\"all the layers have now scalar input instead of vector input.\\\" However, I understood that the max-reduction was only over the scale dimension, not the channel dimension, so that the inputs are still vectors? This is confusing as a reader.\\n\\n(2.4) The expansion of the filters to a diagonal structure is described in the \\\"implementation\\\" section. However, it seems that this would entail wasteful multiplications by zero. Nevertheless, SESN is shown to be highly efficient in the appendix. Do you avoid these pointless operations in the actual implementation?\\n\\n(2.5) It was not immediately obvious how $\\\\psi_{\\\\sigma}(s, t)$ was related to $\\\\psi_{\\\\sigma}(x)$.\", \"other_details\": \"(3.1) In Figure 2, how many layers does the network have which was used to construct the middle plot?\\n\\n(3.2) It would have been useful to include a study of the effect of the range and resolution of the scale space.\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper describes a method for integrating scale equivariance into convolutional networks using steerable filters. After developing the theory using continuous scale and translation space, a discretized implementation using a fixed set of steerable basis elements is described. Experiments are performed measuring the error from true equivariance, varying number of layers, image scale and scales in scale interactions. The method is evaluated using MNIST-scale and STL-10, with convincing results on MNIST-scale and bit less convincing but still good results on STL-10.\\n\\nOverall, I think this is a nice paper with generally good explanations and experiments probing the behavior. I would have liked to see more probing into the effects of number and distance between scales. Table 1 and corresponding text say that a significant advantage of the approach is that it can handle arbitrary scale values, but there was no explicit exploration of the effects of using this beyond one set of scales per experiment/dataset. What scale values can be sampled, which work best, and why?\\n\\nAlso, while the MNIST-scale experiment seems convincing, I think the STL-10 is a bit less (but still OK): Although the method outperforms other methods and appropriate baseline models, it's a little disappointing that pooling over scales (which I would would convert the equivariance to invariance) is best, and inter-scale interactions increase error. (Perhaps this is not too surprising in retrospect, as images may have limited scale variation from camera position in this dataset, but significant within-class viewpoint variation.)\\n\\nEven so, I still find the method concise and of interest, with the basics evaluated, even if some of its unique advantages may have been better explored.\", \"additional_questions\": [\"Inter-scale interaction could be elaborated a bit more. End of sec 4 says, \\\"use convHH for each scale sequentially and .. sum\\\". I believe this is sequencing over scales in the kernel; explaining a bit better how this is implemented, including the shape of w in this case, would be helpful.\", \"Which scales were chosen for the fixed basis? How large in spatial extent are the kernels in the basis elements, at each scale?\", \"In the implementation, what is the value of V (sampled 2d conv kernel size)?\"]}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed scale-equivariant steerable convolutional neural networks that is able to preserve both the translation and scaling symmetry of the data in the representation. To achieve this, the authors developed the scale-convolution blocks in the network, and generalized other common blocks, such as pooling and nonlinearity, to remain scale-equivariant. Extensive experiments have been conducted to show that the proposed scale-equivariant network\\n(a) is indeed scale-equivariant even with numerical discretization\\n(b) achieves better classification performance when compared to non-scale-equivariant networks as well as previously proposed locally-scale invariant networks.\\n\\nOverall, this is a very good paper. The paper is well-written and well-organized. The newly proposed scale-convolution is the most general way of achieving scale-equivariant representations. Experiments are convincing and justifies the usage of the proposed architecture in dealing with multi-scale inputs.\", \"one_question_to_ask_that_does_not_effect_my_rating\": \"\", \"there_is_a_very_similar_paper_submitted_to_this_conference\": \"\", \"https\": \"//openreview.net/forum?id=rkgCJ64tDB\\nWould you care to make a comparison between these two manuscript?\"}"
]
} |
B1e3OlStPB | DeepSphere: a graph-based spherical CNN | [
"Michaël Defferrard",
"Martino Milani",
"Frédérick Gusset",
"Nathanaël Perraudin"
] | Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance. DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata. This contribution is twofold. First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors. Second, we evaluate DeepSphere on relevant problems. Experiments show state-of-the-art performance and demonstrate the efficiency and flexibility of this formulation. Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay. Our code is available at https://github.com/deepsphere. | [
"spherical cnns",
"graph neural networks",
"geometric deep learning"
] | Accept (Spotlight) | https://openreview.net/pdf?id=B1e3OlStPB | https://openreview.net/forum?id=B1e3OlStPB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"KWM6uzvTUb",
"HJxzpfinsr",
"r1xyJ3DDjS",
"Sye2UiPDjS",
"SJxXi9wPoS",
"BkeilpU-sB",
"Bkx5kg6RKH",
"BkgixEgAKr",
"r1xdde8nKH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748537,
1573855929579,
1573514198993,
1573514067949,
1573513882763,
1573117170957,
1571897314416,
1571845107258,
1571737712063
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2413/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2413/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2413/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2413/Authors"
],
[
"~Jialin_Liu3"
],
[
"ICLR.cc/2020/Conference/Paper2413/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2413/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2413/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a novel methodology for applying convolutional networks to spherical data through a graph-based discretization. The reviewers all found the methodology sensible and the experiments convincing. A common concern of the reviewers was the amount of novelty in the approach, as in it involves the combination of established methods, but ultimately they found that the empirical performance compared to baselines outweighed this.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"New revision\", \"comment\": \"We uploaded an improved manuscript thanks to the reviewers' comments.\\n\\nThe main update is the addition of theorem 3.2 that formalizes the relation between theorem 3.1 and rotation equivariance. Small changes across the text have been made to clarify the exposition further.\\n\\nA link to a public git repository containing all the code will be added after the blind-review process.\"}",
"{\"title\": \"Answer to Official Blind Review #1\", \"comment\": \"We thank the reviewer for their time assessing our work and their constructive feedback.\\n\\nWe prepared a revised manuscript, to be uploaded shortly, containing a deeper theoretical discussion closing the gap between Theorem 3.1 and rotation equivariance. In a small proposition, to be added after theorem 3.1, we precise mathematically the relationship between these two concepts. In short, we proved that, if theorem 3.1 holds, our graph Laplacian L commutes with any rotation operator R in the limit of infinite sampling (pointwise), i.e., |LRf(x) - RLf(x)| -> 0, thus answering the reviewer's concerns about this subject.\"}",
"{\"title\": \"Answer to Official Blind Review #3\", \"comment\": \"We thank the reviewer for their time assessing our work and their constructive feedback.\\n\\nWhile novelty might be limited (although we'd argue that designing a good graph is non-trivial, if only by checking how many papers have been written on the convergence / consistency of discrete Laplacians), potential impact is certainly not. Researchers working with large spherical maps, in multiple fields, will benefit from the possibility to tackle their problems with a neural network.\\n\\nWhich other baselines would you like to see? We compared with previous works that tackled the same tasks. It is difficult (and probably unfair) to adapt baselines not designed to solve those tasks.\"}",
"{\"title\": \"Answer to Official Blind Review #2\", \"comment\": \"We thank the reviewer for their time assessing our work and their constructive feedback.\\n\\nWe deliberately excluded experiments on omnidirectional imagery. In our opinion, those don't possess full spherical symmetries as gravity is orienting the objects. We encourage the reviewer to check the work of Khasanova and Frossard, who explicitly designed graph-based spherical CNNs for omnidirectional imaging. In [1], they designed a graph that yields an equivariant convolution to a subgroup of SO(3). Longitudinal rotations are equivariant by construction of the equiangular sampling, and they optimized the graph for latitudinal equivariance. Their scheme is presented in section 3.2 of our paper. While their convolution is not equivariant to the whole of SO(3), that is not an issue for this application as gravity prevents objects from rotating around the third axis. It may even be beneficial. Moreover, the orientation given by gravity allows to factorize the spherical graph and design anisotropic filters [2].\\n\\nRadius or kNN graphs are means to get a sparse graph for O(n) matrix multiplication, instead of O(n\\u00b2) for the full distance-based similarity graph. We believe that the choice of one or the other doesn't really matter. Sparsification can be seen as a numerical approximation that replaces small values by zeros. The kNN scheme is often preferred in practice as the choice of k is directly linked to the computational cost, while the choice of a radius large enough to avoid disconnected vertices might include many more edges than necessary on denser areas.\\n\\nThanks for pointing out an unclear statement about the dispersion of the sampling sequence. d_i should be understood as the largest distance between the center x_i and any point on the surface sigma_i. Hence, we define d_i to be the radius of the smallest ball centered at x_i containing sigma_i. We'll clarify.\\n\\nFrom the following two sentences, we don't understand what could be improved.\\n* \\\"The theoretical analysis and discussion of sampling is interesting, though should be more clearly stated throughout and potentially visualized in figures.\\\"\\n* \\\"A figure detailing the parameters and setup for theorem 3.1 and figure 2 would be useful.\\\"\\nWe would be glad if the reviewer could elaborate.\\n\\nA revised manuscript will be uploaded shortly.\\n\\n[1] Renata Khasanova and Pascal Frossard. Graph-based classification of omnidirectional images. In Proceedings of the IEEE International Conference on Computer Vision, 2017.\\n[2] Renata Khasanova and Pascal Frossard. Geometry Aware Convolutional Filters for Omnidirectional Images Representation. In International Conference on Machine Learning. 2019.\"}",
"{\"title\": \"Two Questions about Rotation Equivariance\", \"comment\": \"Really interesting work.\\nI've got 2 questions:\\n1, As for efficiency, why not use GCN proposed by T.K. Kipf, the successor of ChebNet?\\n2. As for rotation equivariance, CCN states that \\\"It is, however, possible to construct a CNN in which the activations transform in a predictable and reversible way,\\\" I understand what is reversible(invertible) in this work is what CCN calls activation, what is reversible in CCN is the rotation operator in this work, are they same?\\n\\nThanks.\\n\\nRef.\\n1. Semi-Supervised Classification with Graph Convolutional Networks. ICLR'17.\\n2. Covariant Compositional Networks For Learning Graphs. ICLR'18\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents DeepSphere, a method for learning over spherical data via a graphical representation and graph-convolutions. The primary goal is to develop a method that encodes equivariance to rotations, cheaply. The graph is formed by sampling the surface of the sphere and connecting neighbors according to a distance-based similarity measure. The equivariance of the representation is demonstrated empirically and theoretical background on its convergence properties are shown. DeepSphere is then demonstrated on several problems as well as shown how it applies to non-uniform data.\\n\\nThe paper is interesting and clear. The projection of structured data to graphical representations is both efficient in utilizing existing algorithmic techniques for graph convolutions and useful for approaching the spherical structure of the data. The theoretical analysis and discussion of sampling is interesting, though should be more clearly stated throughout and potentially visualized in figures.\\n\\nThe experiments performed are thorough and interesting. The approach both outperforms baselines in inference time and accuracy. However, one wonders the performance on the well-researched tasks such as the performance on 3D imagery, e.g., Su & Grauman, 2017; Coors et al., 2018. \\n\\nThe unevenly sampled data is a nice extension showing the generality of the approach. How does the approach work for data connected within a radius rather than a k-nearest approach?\", \"minor\": [\"A figure detailing the parameters and setup for theorem 3.1 and figure 2 would be useful.\", \"The statement on the dispersion of the sampling sequence states \\u201cthe smallest ball in \\\\R^3 containing \\\\sigma_i\\u201d, but I believe it should be \\u201ccontaining only \\\\sigma_i\\u201d.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies the problem of designing a convolution for a spherical neural network. The authors use the existing graph CNN formulation and a pooling strategy that exploits hierarchical pixelations of the sphere to learn from the discretized sphere. The main idea is to model the discretized sphere as a graph of connected pixels: the length of the shortest path between two pixels is an approximation of the geodesic distance between them. To show the computational efficiency, sampling flexibility and rotation equivariance, extensive experiments are conducted, including 3D object recognition, cosmological mode classification, climate event segmentation and uneven sampling.\", \"pros\": \"1.\\u00a0The application and combination of different techniques in this paper are smart.\\n2. The experiments show that the proposed method outperforms other baseline methods.\\n3. The paper is well organized and written.\", \"cons\": \"1. It is a good application of known techniques, but the novelty is limited.\\n2. It is suggested to add more baselines in the experiments.\\n\\n[1]\\u00a0Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information ProcessingSystems, 2016\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, CNNs specialized for spherical data are studied. The proposed architecture is a combination of existing frameworks based on the discretization of a sphere as a graph. As a main result, the paper shows a convergence result, which is related to the rotation equivalence on a sphere. The experiments show the proposed model achieves a good tradeoff between the prediction performance and the computational cost.\\n\\nAlthough the theoretical result is not strong enough, the empirical results show the proposed approach is promising. Therefore I vote for acceptance. \\n\\nThe paper is overall clearly written. It is nice that the authors try to mitigate from overclaiming of the analysis. \\n\\nAs a non-expert of spherical CNN, I don't understand clearly the gap between the result Theorem 3.1 and showing the rotation equivalence. It would be nice to add some counterexample (i.e., in what situation the proposed approach does not have rotational equivalence while Theorem 3.1 holds).\"}"
]
} |
rke3OxSKwr | Improved Training Techniques for Online Neural Machine Translation | [
"Maha Elbayad",
"Laurent Besacier",
"Jakob Verbeek"
] | Neural sequence-to-sequence models are at the basis of state-of-the-art solutions for sequential prediction problems such as machine translation and speech recognition. The models typically assume that the entire input is available when starting target generation. In some applications, however, it is desirable to start the decoding process before the entire input is available, e.g. to reduce the latency in automatic speech recognition. We consider state-of-the-art wait-k decoders, that first read k tokens from the source and then alternate between reading tokens from the input and writing to the output. We investigate the sensitivity of such models to the value of k that is used during training and when deploying the model, and the effect of updating the hidden states in transformer models as new source tokens are read. We experiment with German-English translation on the IWSLT14 dataset and the larger WMT15 dataset. Our results significantly improve over earlier state-of-the-art results for German-English translation on the WMT15 dataset across different latency levels. | [
"Deep learning",
"natural language processing",
"Machine translation"
] | Reject | https://openreview.net/pdf?id=rke3OxSKwr | https://openreview.net/forum?id=rke3OxSKwr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"BTg6x3DRJE",
"BylTPwWioS",
"rJxSt6UWsr",
"r1xU1p8bsB",
"SyxQenqMcH",
"B1xFB-O0FB",
"HJgHncMCtB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748508,
1573750629287,
1573117308986,
1573117150466,
1572150251423,
1571877184774,
1571855020734
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2412/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2412/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2412/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2412/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2412/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2412/AnonReviewer2"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper proposes a method of training latency-limited (wait-k) decoders for online machine translation. The authors investigate the impact of the value of k, and of recalculating the transformer's decoder hidden states when a new source token arrives. They significantly improve over state-of-the-art results for German-English translation on the WMT15 dataset, however there is limited novelty wrt previous approaches. The authors responded in depth to reviews and updated the paper with improvements, for which there was no reviewer response. The paper presents interesting results but IMO the approach is not novel enough to justify acceptance at ICLR.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Paper updates\", \"comment\": \"Based on the suggestions of the reviewers, we made a few updates to the paper:\\nMade the comparison to the original wait-k paper [1] clearer, highlighting the differences in the encoder side.\\nAdded training time details of our models as compared to the baselines and our implementation of STACL [1].\\nAdded decoding speeds on GPU and CPU with and without decoder states update as well as the decoding speed of our implementation of [1].\\n\\n[1] Ma et al. \\u201cSTACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework.\\\" ACL 2019\"}",
"{\"title\": \"Thank you for the review and comments\", \"comment\": \"Regarding \\\"The masking and using causal attention for the transformer has been proposed in previous works.\\\"\\n\\nUni-directional encoders for online machine translation were previously used with RNN-based architectures, we are not familiar with existing work that uses causal attention for transformer NMT models. In [1] the encoder was not causal which means that for every time-step the encoder states have to be updated. It would be great if you could share the references you were thinking about.\\n\\n\\nRegarding \\\"The hidden state updates provide some gains for the model but also makes the decoder more expensive.\\\"\\n\\nCompared to our model with caching, we agree that the update of the decoder states is expensive. However if we\\u2019re comparing to [1] we basically re-allocated the cost from updating the encoder states to updating the decoder states instead and we get better performances with this new allocation.\\n\\nRegarding \\\"The training with multiple k provides similar gain as training with one k larger than the value used at the inference time. Overall the contributions are limited.\\\"\\n\\nTraining with a single large k does not improve the performance on smaller values of k.\\nIf we look for example at figure 4.c, for an average lagging of 2, there is a difference of almost 3 BLEU points between the model trained with k=9 and the one trained with k in [1,...,5].\\n\\nRegarding \\\"There is quite some room for this paper to improve its clarify, especially in terms of annotations and explaining the proposed ideas.\\\"\\n\\nPlease let us know if there are any specific annotations or concepts you think need rewriting, we would gladly make it clearer in the updated paper.\\n\\n[1] Ma et al. \\u201cSTACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework.\\\" ACL 2019\"}",
"{\"title\": \"Thank you for the review and comments!\", \"comment\": \"Regarding \\\"1) in section 3.1.2, the authors mentioned two adaptations. Is the proposed AR encoder uni-directional? If the AR encoder is uni-directional, then I would be surprised that the uni-directional encoder outperforms the bi-directional encoder in the original wait-k model. \\\"\\n\\nThe original wait-k paper [2] and the subsequent paper [1,3] use an \\u2018incremental encoder\\u2019. It is bidirectional for all tokens before a cursor g(t). Therefore at every decoding time-step t, with increased g(t) the encoder states have to be updated. It is interesting, indeed perhaps surprising, to see that in the context of online machine translation the unidirectional encoder outperforms the bidirectional one. All of our transformer model for online translation use uni-directional encoders (MT) and when evaluated without the \\u2018update\\u2019, the decoders are equivalent to the ones in [2] which suggests that the uni-directional encoders are better suited for online translation.\\n\\nRegarding \\\"For the second bullet, I think the original wait-k also did the same thing(they mentioned this in the paper clearly). So there is nothing new about bullet 2. \\\"\\n\\nIndeed, there is nothing new about the masking in the encoder-decoder interaction but we wanted to explicitly define all the masks used in the architecture, and list the differences with respect to an offline transformer model.\\n\\nRegarding \\\"2) the idea mentioned in fig 1 is very similar to [1]. I suggest the authors compare with the aforementioned methods. \\\"\\n\\nWe will compare with the method in [1]. However In [1] the authors suggest optimizing the decoding along a set of paths sampled within an area of interest (similar to the gray area in our figure 2.) but ended up optimizing along the two boundary paths. What we suggest here is to optimize the decoding in \\u2018all\\u2019 the cells of the gray area. This is possible with the pervasive attention architecture where the cell state is independent from the path we followed to arrive there. With the transformer model, optimizing the full area is not evident, so we ended up selecting a few wait-k paths and thus using training strategy similar to the one in [1]. There is also the fact that [1] aims to learn dynamic read/write decision with a special token added to the vocabulary to represent the \\u2018Read\\u2019 action. Unfortunately, [1] only reports results for Chine-English (and reverse) translation, preventing direct comparison to our results and those of [2,3].\\n\\nRegarding \\\"3) updating the hidden state of the decoder introduces more complexity during the inference time. I recommend the authors to perform some analysis about decoding time with CPU and GPU.\\\"\\n\\nWe will include decoding times on CPU and GPU for our models and compare them to the approach in [1].\\n\\nRegarding \\\"4) it is also interesting to show more comparison between different models' training time with the original STACL. \\\"\\n\\nDuring the training of our models on a given wait-k path we do not update the encoder states nor the previous decoder states. This makes our training time comparable to an offline transformer. 
With [1] however, given a target sequence y of length |y| there are |y| encoder forward passes to evaluate the states associated with each context size g(t).\\n\\nBetween our single k training and multiple k, the training time is higher if for each sentence pair we run a separate forwards pass for each value of k. Alternatively, for each batch of sentence pairs we can sample a value of k and only use the loss for the wait-k path for that value of k for that batch. This way we end up with a comparable training times.\\n\\n[1] Zheng et al. \\\"Simultaneous Translation with Flexible Policy via Restricted Imitation Learning\\\" ACL 2019\\n[2] Ma et al. \\u201cSTACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework.\\\" ACL 2019\\n[3] Zheng et al. \\u201cSimpler and faster learning of adaptive policies for simultaneous translation.\\\" EMNLP 2019\"}",
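For reference, the wait-k paths discussed throughout these replies follow the schedule g(t) = min(k + t - 1, |x|) from the STACL paper; a small sketch (ours) of the resulting read/write sequence:

```python
def wait_k_actions(k, src_len, tgt_len):
    """Read ("R") / write ("W") sequence of a wait-k decoder: before
    emitting target token t (1-indexed), g(t) = min(k + t - 1, src_len)
    source tokens have been read."""
    actions, read = [], 0
    for t in range(1, tgt_len + 1):
        g_t = min(k + t - 1, src_len)
        actions += ["R"] * (g_t - read)  # catch up on source reads
        read = g_t
        actions.append("W")              # emit one target token
    return "".join(actions)

print(wait_k_actions(k=3, src_len=6, tgt_len=6))  # RRRWRWRWRWWW
```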
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Sorry, this is a very quick review.\\n\\nThe paper is about an improved method of training latency-limited (wait-k) decoders for transformer-based machine translation, in which the right context is limited to various numbers. So it's a kind of augmentation method that's well matched to the test scenario. At least, that is my understanding.\\n\\nI am not really an MT expert so cannot comment with much authority. On the plus side the paper says it sets a new state of the art for latency-limited decoding for a German-English MT task, and it involves transformers, which are quite hot right now so the attendees might find it interesting because of that connection.\\nOn the minus side, it is all really quite task-specific.\\nI am putting weak accept.. regular-strength accept might be my other choice.\\nIt's all with low confidence.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper extends the idea of prefix-to-prefix in STACL and proposes two different variations. The authors did some interesting experiments between caching and updating decoder.\", \"my_questions_are_as_follows\": \"1) in section 3.1.2, the authors mentioned two adaptations. Is the proposed AR encoder uni-directional? If the AR encoder is uni-directional, then I would be surprised that the uni-directional encoder outperforms the bi-directional encoder in the original wait-k model. For the second bullet, I think the original wait-k also did the same thing(they mentioned this in the paper clearly). So there is nothing new about bullet 2. \\n\\n2) the idea mentioned in fig 1 is very similar to [1]. I suggest the authors compare with the aforementioned methods. \\n\\n3) updating the hidden state of the decoder introduces more complexity during the inference time. I recommend the authors to perform some analysis about decoding time with CPU and GPU.\\n\\n4) it is also interesting to show more comparison between different models' training time with the original STACL. \\n\\n[1] Zheng et al. \\\"Simultaneous Translation with Flexible Policy via Restricted Imitation Learning\\\" ACL 2019\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work apply the wait-k decoding policy on the 2D CNN-based architecture and transformer. In the transformer-based model the author proposed to recalculate the decoder hidden states when a new source token arrives. The author also suggested to train with multiple k at the decoder level with shared encoder output. The experiments showed that the transformer model provide the best quality on IWSLT14 En-De, De-En, and WMT15 De-EN.\\n\\nThe masking and using causal attention for the transformer has been proposed in previous works. The hidden state updates provide some gains for the model but also makes the decoder more expensive. The training with multiple k provides similar gain as training with one k larger than the value used at the inference time. Overall the contributions are limited.\\n\\nThere is quite some room for this paper to improve its clarify, especially in terms of annotations and explaining the proposed ideas.\"}"
]
} |
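For reference on the wait-k policy discussed throughout the thread above: at target step t the decoder may only attend to a growing source prefix g(t). Below is a minimal sketch of the resulting cross-attention mask; this is our illustration (the function name and the 0-indexed g(t) = k + t convention are assumptions), not code from any of the papers under review.

```python
# Illustrative sketch: cross-attention mask for a wait-k prefix-to-prefix
# policy. At target step t (0-indexed), the decoder may attend to the first
# g(t) = min(k + t, src_len) source tokens.
import numpy as np

def wait_k_mask(src_len: int, tgt_len: int, k: int) -> np.ndarray:
    """Boolean mask of shape (tgt_len, src_len); True = attention allowed."""
    mask = np.zeros((tgt_len, src_len), dtype=bool)
    for t in range(tgt_len):
        g_t = min(k + t, src_len)  # source context available at step t
        mask[t, :g_t] = True
    return mask

if __name__ == "__main__":
    print(wait_k_mask(src_len=5, tgt_len=4, k=2).astype(int))
    # [[1 1 0 0 0]
    #  [1 1 1 0 0]
    #  [1 1 1 1 0]
    #  [1 1 1 1 1]]
```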
BJes_xStwS | GRASPEL: GRAPH SPECTRAL LEARNING AT SCALE | [
"Yongyu Wang",
"Zhiqiang Zhao",
"Zhuo Feng"
] | Learning meaningful graphs from data plays an important role in many data mining and machine learning tasks, such as data representation and analysis, dimension reduction, data clustering, and visualization. In this work, we present a scalable spectral approach to graph learning from data. By limiting the precision matrix to be a graph Laplacian, our approach aims to estimate ultra-sparse weighted graphs and has a clear connection with the prior graphical Lasso method. By interleaving nearly-linear time spectral graph sparsification, coarsening and embedding procedures, ultra-sparse yet spectrally-stable graphs can be iteratively constructed in a highly-scalable manner. Compared with prior graph learning approaches that do not scale to large problems, our approach is highly-scalable for constructing graphs that can immediately lead to substantially improved computing efficiency and solution quality for a variety of data mining and machine learning applications, such as spectral clustering (SC) and t-Distributed Stochastic Neighbor Embedding (t-SNE). | [
"Spectral graph theory",
"graph learning",
"data clustering",
"t-SNE visualization"
] | Reject | https://openreview.net/pdf?id=BJes_xStwS | https://openreview.net/forum?id=BJes_xStwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"3ItgKii3ng",
"63MySLlyvP",
"Bkx5ok9sjr",
"ryxCxkcjiB",
"Skg4aRtssH",
"r1l8rAKooH",
"HJx9csHQ5H",
"rkeVRxoCFB",
"S1e5fAq6tS"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1580507445419,
1576798748478,
1573785505681,
1573785333678,
1573785275595,
1573785150290,
1572195217764,
1571889356054,
1571823121605
],
"note_signatures": [
[
"~Stacy_X_Pu1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2411/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2411/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2411/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2411/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2411/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2411/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2411/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"To Reviewer2: What are the state-of-the-art methods? Could you cite the paper here?\", \"comment\": \"Hi Reviewer2,\\n\\nI would like to ask what research papers, in your opinion, addressing the same problem (learning graphs at scale)? Could you name some papers with the state-of-the-art methods here? \\n\\nMany Thanks\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper proposes a scalable approach for graph learning from data. The reviewers think the approach appears heuristic and it is not clear the algorithm is optimizing the proposed sparse graph recovery objective.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Q1. The authors demonstrate that their algorithm is scalable and faster than Laplacian-based methods requiring O(N^2). However, the proposed method also requires to compute eigenvectors of Laplacian thus it seems not to be faster compared to the previous algorithm. It would be better to provide the time complexity of each step in section 3.2 and that of the overall algorithm.\", \"a1\": \"Thanks for pointing this out. Our spectral graph embedding leverages a recent spectral graph coarsening approach to achieve nearly-linear time complexity for computing the first few graph Laplacian eigenvalues and eigenvectors. We note that this is the first work that introduces a spectral method for scalable graph learning from data by leveraging the latest results in spectral graph theory. Since in the proposed work all the kernel functions, such as spectral graph sparsification, spectral graph coarsening and spectral graph embedding methods are nearly-linear time algorithms, the entire spectral graph learning approach is also highly scalable. We have included more results to show the scalability of our method and stressed the above fact in the revised draft.\", \"q2\": \"It is unclear that the proposed algorithm (section 3.2) is optimized for the objective function in equation (9). And it is possible to theoretically guarantee that the algorithm finds a spectrally optimized graph?\", \"a2\": \"This is a very good suggestion. In the revised paper, we have included a description of the connection between our algorithm and the optimization objective in (2). The original optimization objective function (9) includes three components: (a) log (det L) that corresponds to the sum of the Laplacian eigenvalues, (b) - \\\\alpha* X^T L X that corresponds to the smoothness of signals across the graph, and (c) - \\\\beta* |L|_0 that corresponds to graph sparsity. Our algorithm flow aims to iteratively identify and include the most spectrally-critical edges into the latest graph so that the first few Laplacian eigenvalues & eigenvectors can be most significantly perturbed with the minimum amount of edges. Since the inclusion of spectrally-critical edges will immediately improve distortion in the embedding space, the overall smoothness of graph signals will thus be significantly improved. In other words, the spectrally-critical edges will only impact the first few Laplacian eigenvalues and eigenvectors key to graph spectral properties, but not the largest few eigenvalues and eigenvectors-which will require adding much more edges to influence. It can be easily shown that including any additional edge into the graph will monotonically increase (a), but monotonically decrease (b) and (c). Specifically, when the spectra of the learned graph is not stable, adding spectrally-critical edges will dramatically increase (a), while decreasing (b) and (c) at a much lower rate since the improved graph signal smoothness will only result in a slight change (increase) to Tr(X^T L x). Consequently, the objective function in (2) will be effectively maximized by including only a small amount of spectrally-critical edges until the first few eigenvalues become sufficiently stable; when adding extra edges can no longer significantly perturb the first few eigenvalues, (b) and (c) will start to dominate the objective function value, indicating that the iterations should be terminated. 
The stopping condition can be controlled by properly setting an embedding distortion threshold for $\\\\eta $ or parameters $\\\\alpha$ and $\\\\beta$. We have included the above convergence analysis in the revised draft.\", \"q3\": \"For experiments, although the authors argue that the proposed algorithm is scalable, datasets that they used are not large-scale. And it is needed to provide runtimes of other algorithms for graph recovery tasks (section 4.2).\", \"a3\": \"Thanks for the suggestion. We have compared our methods with state-of-the-art graph learning methods published in ICLR\\u201919 paper \\\"Large scale graph learning from smooth signals.\\\" by Kalofolias, Vassilis, and Nathana\\u00ebl Perraudin. As shown, our approach is over 400X faster for graph construction while achieving consistently much better accuracy in spectral clustering tasks. We also added a figure (Figure 3) showing the scalabilities of comparisons with state-of-the-art methods for graph recovery tasks.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Q1. My major concern is about the experiments. The authors claim that the proposed graph learning approach is highly scalable. It would be more convincing if the authors can evaluate the proposed method on larger datasets.\", \"a1\": \"Thanks for the suggestion. We have compared our methods with state-of-the-art graph learning methods published in ICLR\\u201919 paper \\\"Large scale graph learning from smooth signals.\\\" by Kalofolias, Vassilis, and Nathana\\u00ebl Perraudin. As shown, our approach is over 400X faster for graph learning while achieving consistently much better accuracy in spectral clustering tasks. We also added Figure 3 to show the runtime comparisons with state-of-the-art methods for graph recovery tasks.\\n \\nQ2. One of the tasks in experiments is t-SNE visualization. There are also some faster versions of t-SNE with a complexity of O(NlogN), such as [a]. For t-SNE, the authors may justify what's the advantage of using the proposed method over other fast t-SNE algorithms.\\n[a] Accelerating t-SNE using Tree-Based Algorithms, JMLR 2014.\", \"a2\": \"Thanks very much for the suggestion. Our results (standard t-SNE) reported in the paper are obtained by using the tree-based t-SNE algorithm that is a default option in Matlab. We have clarified this in the revised paper.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Q1. The studied problem has been widely investigated in the literature. Many methods have been proposed within the same objective, including taking care of the scalability issue. The authors fail to provide the state of the art, as well as describe the contributions with respect to previous work. As a consequence, the contributions are not clear. Maybe the proposed framework is original, but there has been plenty of methods that have considered the same problem.\", \"a1\": \"Thanks very much for pointing this out. We have added descriptions regarding our contribution to the abstract and introduction. Our contribution: this is the first work that introduces a spectral method for learning ultra-sparse (tree-like) graphs from data by leveraging the latest results in spectral graph theory, such as the nearly-linear-time spectral graph sparsification, spectral coarsening and spectral embedding techniques. Our framework is similar to the original graphical Lasso framework with the precision matrix replaced by a graph Laplacian matrix. This approach iteratively identifies and includes the most spectrally-critical edges into the latest graph, so that the first few Laplacian eigenvalues and eigenvectors can be most significantly perturbed by including the minimum amount of edges. The iterations will be terminated when the graph spectra become sufficiently stable (or graph signals become sufficiently smooth across the graph and lead to rather small Laplacian quadratic forms). High-quality estimation of attractive Gaussian Markov Random Fields (GMRFs) can be achieved for much larger datasets compared with state-of-the-art methods. The graphs learned from our approach allow obtaining much more accurate results more efficiently in spectral clustering tasks (due to the ultra-sparse tree-like structure) and faster performance for t-SNE visualization of large data sets. We have also included more details about the connection between our algorithm with the original optimization objective in (2) in the revised draft.\\n\\nQ2. Experiments are poor and not convincing. The authors compare the proposed method to only two spectral clustering methods, which as the standard kNN and the Consensus kNN from 2013. These two methods are pretty old and many more recent methods have been introduced in the literature. Moreover, the results in Table 1 are somehow misleading, as the standard kNN is faster that the proposed method on 3 out of 4 datasets. Experiments in graph recovery are not clear, starting from the fact that the datasets are not defined (what are the Gaussian graph and ER graph?), neither the experimental setting (what is the problem at hand?). The same goes to the application of t-SNE which is also very weak.\", \"a2\": \"Thanks very much for the kind suggestion. GRASPEL indeed runs slightly slower than the standard kNN for very small datasets but much faster for larger ones. More importantly, the graphs learned by our approach have ultra-sparse tree-like structures (the edge to node ratio is between 1.1 to 1.3) and will result in significantly improved accuracy and efficiency in spectral clustering. As shown in Table 1 that includes substantially updated results, the spectral clustering time for the MNIST data set with standard kNN is over 6000 seconds but will be dramatically brought down to less than three seconds (over 2000X speedup) using the graph learned by our method (GRASPEL). 
We also have compared our methods with state-of-the-art graph learning methods published in ICLR\\u201919, \\\"Large scale graph learning from smooth signals.\\\" by Kalofolias, Vassilis, and Nathana\\u00ebl Perraudin. As shown, our approach is over 400X faster for graph construction while achieving consistently much better accuracy in spectral clustering tasks. We also added Figure 3 to show the runtime comparisons with state-of-the-art methods for graph recovery tasks.\"}",
"{\"title\": \"Summary of our update\", \"comment\": \"Thanks for the comments from all three reviewers. We added additional experiments and clarification into our modified paper (marked in blue). Specifically, (1) we completely rewrote Sections 2 and 3, as well as modified Sections 1 and 4 to more clearly highlight our contribution and results; (2) convergence and complexity analysis has also been included into Section 3; (3) we added additional experimental results comparing with the ICLR\\u201919 paper (\\\"Large scale graph learning from smooth signals.\\\") and included additional runtime results for spectral clustering tasks using the graphs learned (constructed) by different methods; (4) we also demonstrated GRASPEL\\u2019s runtime scalability for graph recovery tasks in Figure 3 by comparing it with state-of-the-art methods on data sets of different sizes.\"}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors present a method that transforms data into graph. They emphasize on the fact that the proposed method is scalable, using a spectral embedding to construct the graph.\\n\\nWe think that the paper is not of enough quality to be accepted in ICLR. Without going in detail in the derivations, we give below some major issues in this submitted paper.\\n\\nThe studied problem has been widely investigated in the literature. Many methods have been proposed within the same objective, including taking care of the scalability issue. The authors fail to provide the state of the art, as well as describe the contributions with respect to previous work. As a consequence, the contributions are not clear. Maybe the proposed framework is original, but the there has been plenty of methods that have considered the same problem.\\n\\nExperiments are poor and not convincing. The authors compare the proposed method to only two spectral clustering methods, which as the standard kNN and the Consensus kNN from 2013. These two methods are pretty old and many more recent methods have been introduced in the literature. Moreover, the results in Table 1 are somehow misleading, as the standard kNN is faster that the proposed method on 3 out of 4 datasets. Experiments in graph recovery are not clear, starting from the fact that the datasets are not defined (what are the Gaussian graph and ER graph?), neither the experimental setting (what is the problem at hand?). The same goes to the application of t-SNE which is also very weak.\\n\\n--------------\\nReply to Rebuttal \\n\\nThe authors have modified the paper to take into consideration our previous comments and suggestions. However, we think that it is still of not sufficient quality. We give below some elements, without providing a thorough review.\\n\\nIt is pretty pretentious to say that \\\"this is the first work that introduces a spectral method for learning ultra-sparse (tree-like) graphs from data\\\", while not comparing to the state of the art. There have been many spectral methods in graph learning for large-scale datasets.\\n\\nIn experiments, the only added method is the one of Kalofolias and Perraudin (submitted in 2017 to ArXiv). However, results show that this method is the worst of all methods. It is even the worst compared to the simple standard knn. It is not clear how the authors get such results; It looks like something is wrong in experiments, or they are cherrypicking.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a scalable spectral approach for graph learning. In particular, the authors use graph Laplacian as precision matrix, and show the connection between the proposed method and graphical Lasso. Three tasks, including spectral clustering, graph recovery and t-SNE visualization, are considered in experiments.\\n\\nPros.\\n1. Scalable graph learning is an important research topic. This paper presents a practical solution to large-scale graph learning.\\n2. The connection between the proposed method and graphical Lasso is discussed. Also, theoretical analysis on spectral criticality is provided.\\n3. Overall the paper is well organized and clearly written. \\n\\nCons.\\n1. My major concern is about the experiments. The authors claim that the proposed graph learning approach is highly scalable. It would be more convincing if the authors can evaluate the proposed method on larger datasets.\\n2. One of the tasks in experiments is t-SNE visualization. There are also some faster versions of t-SNE with a complexity of O(NlogN), such as [a]. For t-SNE, the authors may justify what's the advantage of using the proposed method over other fast t-SNE algorithms.\\n[a] Accelerating t-SNE using Tree-Based Algorithms, JMLR 2014.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a scalable approach for graph learning from data. At a high-level, it begins with a k-NN graph construction, then node features are embedded to spectral space (embedding to space spanned by eigenvectors of Laplacian). Next, edges that have a large distortion are additionally added to the latest graph. And these steps are repeated until the output graph is stable (i.e., the embedding distortion becomes small). Experimental result for spectral clustering shows that the proposed method can achieve the best accuracy compared to kNN-based methods. For graph recovery, the algorithm also performs better than other Laplacian-based graph learning methods. In addition, the proposed approach runs up to 5 times faster for t-SNE.\\n\\nThe authors demonstrate that their algorithm is scalable and faster than Laplacian-based methods requiring O(N^2). However, the proposed method also requires to compute eigenvectors of Laplacian thus it seems not to be faster compared to the previous algorithm. It would be better to provide the time complexity of each step in section 3.2 and that of the overall algorithm.\\n\\nIt is unclear that the proposed algorithm (section 3.2) is optimized the objective function in equation (9). And it is possible to theoretically guarantee that the algorithm finds a spectrally optimized graph?\\n\\nFor experiments, although the authors argue that the proposed algorithm is scalable, datasets that they used are not large-scale. And it is needed to provide runtimes of other algorithms for graph recovery tasks (section 4.2).\\n\\nOverall, this paper develops a new approach, but its novelty and intuition are unclear. Moreover, it does not seem to be scalable under the bar of acceptance.\", \"minor_concerns\": \"There is no content in section 3.2.5.\"}"
]
} |
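The GRASPEL rebuttal above decomposes the objective into (a) log det L, (b) -alpha * Tr(X^T L X), and (c) -beta * |L|_0. As a minimal numerical sketch of that decomposition, under our own assumptions (not the authors' code): we use the pseudo-determinant since a graph Laplacian is singular, and alpha/beta and the eigenvalue threshold are placeholder parameters.

```python
# Illustrative sketch: evaluating F(L) = log pdet(L) - alpha*Tr(X^T L X)
# - beta*nnz(L) for a candidate graph with Laplacian L and data matrix X.
import numpy as np

def laplacian(n, edges, weights):
    """Dense graph Laplacian from a weighted edge list."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def objective(L, X, alpha, beta):
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]                   # L is singular: pseudo-det
    log_pdet = np.sum(np.log(nonzero))          # term (a)
    smoothness = np.trace(X.T @ L @ X)          # term (b): small if X smooth
    n_edges = np.count_nonzero(np.triu(L, 1))   # term (c): graph sparsity
    return log_pdet - alpha * smoothness - beta * n_edges
```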
H1ls_eSKPH | Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates | [
"Leonid Butyrev",
"Georgios Kontes",
"Christoffer Löffler",
"Christopher Mutschler"
] | Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks, which both limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM), changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work. | [
"catastrophic forgetting",
"multi-task learning",
"continual learning"
] | Reject | https://openreview.net/pdf?id=H1ls_eSKPH | https://openreview.net/forum?id=H1ls_eSKPH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"U4ysJBqtAS",
"BJeuYr72jH",
"BklI585CtH",
"rkelpoAaKS",
"BJlV6dostB"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748450,
1573823872085,
1571886734494,
1571838904134,
1571694780509
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2410/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2410/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2410/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2410/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The reviewers have provided thorough reviews of your work. I encourage you to read them carefully should you decide to resubmit it to a later conference.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Rebuttal Comment\", \"comment\": \"We would like to thank the reviewers for their feedback. Unfortunately, we are not able to address all comments to the extent and depth we would like to within the rebuttal period, but we will use the feedback as a guideline for improving on our paper and results and resubmit in the future.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method for tackling catastrophic forgetting. Similar to previous methods such as EWC (Kirkpatrick et al., 2017), they penalize parameter updates that align with the Fisher information matrix of the previous tasks. This will prevent the model from changing the previously useful parameters. They try to match the result of previous fisher-based methods but at a lower computational cost. They propose using a low-rank approximation to the Hessian using Hessian-vector-product with two types of vectors: the momentum velocity vector and the largest eigen-vector of the hessian. Then they build a diagonal approximation to the Hessian.\", \"cons\": [\"Eq 11, there is no justification for forming a curvature matrix by putting the absolute value of the hessian-vector-product with the proposed vectors on the diagonal. Particularly considering the largest eigen-value, Hv will be a vector of zeros with exactly one 1. This does not seem to be a good estimate of the hessian.\", \"Fig 1, the proposed method seem to perform poorly compared to the kfac-based method on permuted mnist.\", \"Figure 2 mainly compares to EWC as a baseline. In Farquhar & Gal (2019), other methods such as VGR perform significantly better. The proposed method is not competitive with state-of-the-art.\"]}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper focuses on alleviating the problem of \\\"catastrophic forgetting\\\", exhibited by neural networks learned with gradient-based algorithms over long sequence of tasks. In such learning scenarios, tuning of parameters over the new tasks lead to degradation of performance over the old tasks as the parameters important for the latter are overwritten. The gradient-based algorithms are unable to distinguish between the important and the not-so-important parameters of the old tasks. Hence, one direction of works, including the proposed one, aim at identifying the most important parameters for all the old tasks and discourage modifications on those parameters during the training of the new tasks.\\n\\nExisting works like Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) have proposed a Bayesian framework to lessen such forgetfulness by condensing the information of the previous tasks and supplying it as a prior for the new task. In such a framework, Ritter et al. (2018) propose a quadratic approximation of the prior which requires computing (an approximate block-diagonal Kronecker-factored) Hessian. \\n\\nThe paper employs a recent result (Ghorbani et al., 2019) to argue that most regions of the loss surface are flat. Hence, computing the Hessian in only a few regions (which exhibit high curvature) should suffice. However, computing the exact Hessian for large networks is infeasible in practice. The paper, therefore, uses Hessian-vector-product (Schraudolph, 2002; Pearlmutter, 1994), which is similar to sampling the curvature in the direction of a given vector. The key advantage of the proposed approach is the low storage requirements. Regarding how to chose a suitable direction/vector, the paper suggests two choices: the momentum vector or the eigenvector corresponding to the largest eigenvalue (of the Hessian). The motivation behind the above choices, especially the former option, is unsatisfactory. Empirically, we observe that the momentum vector is a better option than the eigenvector. However, a (theoretical/empirical) deep-dive into why momentum vector is a good candidate should be done. \\n\\nEmpirically, the proposed approach with momentum vector performs better than EWC but worse than Ritter et al. (2018). More discussion into the results (esp. Hv-momentum vs Hv-eigenvector) would have shed more light on the proposed approach.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"1. Summary:\\nThe paper considers neural network training in the continual learning setting -- data arrive sequentially and we can not revisit past data. The paper proposes an approximate Laplace\\u2019s method, in which the Hessian the log likelihood of the data is approximated by some form of Hessian-vector project (? - I will get to this question mark below). The paper considers some benchmark continual learning datasets and compares the proposed approach to EWC and Kronecker-factored online Laplace. The performance of the proposed approach is similar to that of EWC and worse than Kronecker-factored Laplace in most cases. Another sales pitch that the paper brings up a lot is the low space complexity, however this benefit has not been fully demonstrated, given the small-scale network/experiments.\\n\\n2. Opinion and rationales\\n\\nI\\u2019m leaning towards \\u201cstrong reject\\u201d as I think the presentation needs another round of polishing and that the technical contributions need to be clarified / unpacked. I explain my thinking below.\\n\\na. The presentation/explanation/flow are not clear.\\nThe abstract does not read well. For example: \\u201cThis requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian.\\u201d This sentence makes it sound like current approaches are tractable, so what this paper is trying to address? The technical summary is also not precise, the Hessian-free methods used in the paper is to compute Hessian-vector products, not the actual Hessian.\\n\\nThe introduction motivates the continual learning problem using generalisation of neural networks leading to the need for multi-task learning; however multi-task learning is not scalable given the large number of tasks and thus we need to learn sequentially. However, I find this motivation not clear: if multi-task learning and its scalability issue are the reasons why we need continual learning, with the scale of the experiments considered in the paper, wouldn\\u2019t it always more beneficial to use multi-task learning instead of continual learning?\\n\\nThe prior work section is also not clear, in my opinion. The paper starts out by describing EWC as Bayesian updates and cites MacKay (1992), then talks about the Kronecker-factored Laplace approximation as \\u201caddress this shortcoming by adopting the Bayesian online learning approach\\u201d, as if these methods are very different while in fact, these methods are some variants of the Laplace approximation, with different ways to approximate the Hessian. The issues described in section 2.2 \\u201ctwo problems that stem from eq 1\\u201d are not very clear, for example, \\u201cwithout storing the information from all previous tasks there is no easy solution to update the posterior\\u201d (?). I would follow the presentation/explanation in Ritter et al (2018), Huszar (2018) [a note on the quadratic penalty of EWC] and section 5 of the variational continual learning paper (Nguyen et al 2018) to provide a more succinct connection between these methods.\\nThe connections between this work and MAML in section 3 is not clear to me. 
The continual learning and meta learning settings are also quite different.\\n\\nb. The technical contribution is not clear and if correct, if of limited novelty.\\n\\nWhat is not clear from reading section 3 is what quantity is being approximated, at what point a Hessian-vector product appears and thus we can use Hessian-free methods to approximate it. The paper talks about flat loss surface and sampling a small subset of the Hessian -- I\\u2019m not sure I understand these connections. In eq 11, the paper replaces the Hessian values with results of the Hessian-vector-product approximations -- this seems very odd to me, especially in terms of semantics and units, Hessian and hessian-vector-products are two very different things. Again, it is perhaps just me not understanding what is being approximated in the first place. The technical contribution of this paper is thus limited: using Hessian-free methods to approximate Hessian-vector products in the continual learning context.\\n\\nc. The performance of the proposed method is not super exciting. Pragmatically speaking, it is not clear why practitioners should be using this in the near future given Kronecker-factored Laplace works and scales well in practice and there are a plethora of other recent methods (e.g. VCL) that are also developed from the Bayesian principle and work much better than EWC.\\n\\n\\n3. Minor details:\\n\\na. In eq 1, the denominator should be p(D_{t+1} | D_{1:t}).\\n\\nb. Figs 1 and 2, I would use the same colour scheme throughout to be consistent.\"}"
]
} |
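The review thread above centers on Hessian-vector products as a storage-friendly alternative to materializing the Hessian. For reference, here is a minimal sketch of the standard Pearlmutter double-backpropagation trick in PyTorch; this is our illustration (the toy quadratic and names are assumptions), not the paper's code.

```python
# Illustrative sketch: computing a Hessian-vector product Hv by
# differentiating g^T v, so the full Hessian is never materialized --
# the storage cost stays O(#params).
import torch

def hessian_vector_product(loss, params, vec):
    """Hv for the Hessian of `loss` w.r.t. `params`, in the direction `vec`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)  # d(g^T v)/dtheta = H v

# Minimal usage on a toy quadratic: H = 2*I, so Hv should equal 2*v.
theta = torch.randn(3, requires_grad=True)
loss = (theta ** 2).sum()
v = [torch.ones(3)]
print(hessian_vector_product(loss, [theta], v))  # (tensor([2., 2., 2.]),)
```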
HygcdeBFvr | Score and Lyrics-Free Singing Voice Generation | [
"Jen-Yu Liu",
"Yu-Hua Chen",
"Yin-Cheng Yeh",
"Yi-Hsuan Yang"
] | Generative models for singing voice have been mostly concerned with the task of "singing voice synthesis," i.e., to produce singing voice waveforms given musical scores and text lyrics. In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time. In particular, we experiment with three different schemes: 1) free singer, where the model generates singing voices without taking any conditions; 2) accompanied singer, where the model generates singing voices over a waveform of instrumental music; and 3) solo singer, where the model improvises a chord sequence first and then uses that to generate voices. We outline the associated challenges and propose a pipeline to tackle these new tasks. This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation. | [
"singing voice generation",
"GAN",
"generative adversarial network"
] | Reject | https://openreview.net/pdf?id=HygcdeBFvr | https://openreview.net/forum?id=HygcdeBFvr | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"JQDOodtjrq",
"Hkgfl5cior",
"r1loQVQojB",
"r1xSbxL5sH",
"rJeJyJhYjr",
"r1xO5msFir",
"BJltegoFsS",
"rkekOh5Kir",
"rkgg0l5eiB",
"B1x9SOtgjH",
"r1xCpXLgjr",
"HklxPZIMcS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748422,
1573788138196,
1573757987176,
1573703677402,
1573662423357,
1573659536512,
1573658609033,
1573657703170,
1573064903533,
1573062721825,
1573049286174,
1572131159953
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2409/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2409/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2409/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2409/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2409/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2409/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2409/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2409/AnonReviewer6"
],
[
"ICLR.cc/2020/Conference/Paper2409/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2409/AnonReviewer5"
],
[
"ICLR.cc/2020/Conference/Paper2409/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"Main content:\\n\\nBlind review #1 summarizes it well:\\n\\nhis paper claims to be the first to tackle unconditional singing voice generation. It is noted that previous singing voice generation approaches leverage explicit pitch information (either of an accompaniment via a score or for the voice itself), and/or specified lyrics the voice should sing. The authors first create their own dataset of singing voice data with accompaniments, then use a GAN to generate singing voice waveforms in three different settings:\\n1) Free singer - only noise as input, completely unconditional singing sampling\\n2) Accompanied singer - Providing the accompaniment *waveform* (not symbolic data like a score - the model needs to learn how to transcribe to use this information) as a condition for the singing voice\\n3) Solo singer - The same setting as 1 but the model first generates an accompaniment then, from that, generates singing voice\\n\\n--\", \"discussion\": \"The reviews generally point out that while a lot of new work has been done, this paper bites off too much at once: it tackles many different open problems, in a generative art domain where evaluation is subjective.\\n\\n--\", \"recommendation_and_justification\": \"This paper is a weak reject, not because it is uninteresting or bad work, but because the ambitious scope is really too large for a single conference paper. In a more specialized conference like ISMIR, it would still have a good chance. The authors should break it down into conference sized chunks, and address more of the reviewer comments in each chunk.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Regarding the evaluation metrics\", \"comment\": \"Thank you very much for your comments. As you stated in the two review comments, the evaluation of this task is difficult because it is generative art and it is a new task. Furthermore, we would like to emphasize that it is difficult also because the evaluation of singing as a type of music is fairly subjective.\\n\\nFor such a subjective task, the objective metrics could at best evaluate those that can be more objectively measured. With Average Pitch, Vocalness, and Matchness, we attempt to evaluate pitch, timbre, and harmonization, respectively, which are three important aspects of singing/music. \\n\\nAdmittedly, these objective metrics cannot fully evaluate the generated singing voices. We attempt to alleviate this situation by conducting the subjective evaluation through the two MOS user studies. These user studies include questions regarding Sound quality and Expression that could complement the objective metrics.\\n\\nRegarding the audio files used in the user study, each folder in the zip file contains a set of audios corresponding to one 20-second accompaniment. The set of audios include:\\n_accompaniment.mp3: the accompaniment audio\\nmy_singer.mp3: the audio generated by our accompanied singer\\nsinsy.mp3: the audio synthesized with Sinsy\\nsynthesizerV.mp3: the audio synthesized with Synthesizer V\"}",
"{\"title\": \"Thanks for the improvement\", \"comment\": \"Hi,\\nI appreciate the authors have tried to address the issues raised by me and other reviewers. It's not an easy decision but I will keep my original decision.\\n\\nBut maybe what's more important is the feedback reviews can provide. At least I want to believe so :) I think the added comments on evaluation metrics are helpful. However, that does not change the question of if those are approximately good ways to measure the quality of the generated samples. This is partly due to the fact that it's a new problem, but, at the end, I'd expect the authors who are suggesting a new problem would also bring up a nice initial approach to evaluate a solution. Unfortunately, I don't think this was the case. The vocalness and the average pitch does not seem to be able to assess any scenario. This work is not about simply generating a plausible voice or vocal track - it is about *singing*. Those metrics hardly consider any aspect of singing. \\n\\nI downloaded the Google Drive file but it was not very clear to figure out quickly which files are generated in which way and what aspects are expected to be evaluated precisely.\"}",
"{\"title\": \"Summary of the paper update\", \"comment\": [\"Thank you very much for all of your valuable comments. We have taken your constructive suggestions seriously and revise part of the paper accordingly. Below is a summary of the major changes:\", \"Two new baseline methods by using well-known singing voice synthesis systems, Sinsy and Synthesizer V (Described in Section 4.2)\", \"A new user study that compares our model with the two new baseline methods (Table 3 and Section 4.4)\", \"Expanding Table 1 and Section 4.3 to include the objective metrics of Sinsy baseline as well as the objective metrics computed from the training data\", \"Expanding the Introduction to further discuss the motivations and possible usages of our methods\"]}",
"{\"title\": \"Response to the Review #6\", \"comment\": \"Thank you for the valuable comments. We address the issues and questions raised by the reviewer in the following comments.\\n\\n\\n$\\\\textbf{1.}$ The motivation is slightly lacking ..., and there is a lack of discussion about which setting makes for better singing voice generation. Also, there is no comparison with other methods ...\\n\\n$\\\\textbf{Ans:}$\", \"we_have_revised_the_paper_to_address_these_issues\": \"1. Including clear motivations in Introduction\\n2. Including in Appendix some experimental results of using different GAN losses.\\n3. Expanding Section 4 by adding two baseline synthesis methods with the approach you suggested.\\n\\n\\n$\\\\textbf{2.}$ Regarding using MIR-1k\\n\\n$\\\\textbf{Ans:}$\\nMIR-1k indeed contains cleaner vocal-accompaniment pairs. However, only a few of the accompaniment tracks in MIR-1k contain piano playing. In the current setting of accompanied singer, the accompaniment has to contain piano in order to use them as the condition. Therefore, MIR-1k is not able to be used as the training data currently. \\n\\n\\n$\\\\textbf{3.}$ ... training data ... is everything Jazz? Are the accompaniments exclusively piano ...? Is there any difference between the training and test domain.\\n\\n$\\\\textbf{Ans:}$\", \"singer_models___training_data\": \"Jazz music tracks from Youtube containing both singing and piano\\nEverything is Jazz. Usually the audios in this set also contain instruments other than piano, such as drums, bass, and guitar.\", \"singer_models___testing_data\": \"Jazz music tracks from Jamendo. We collect those Jazz music tracks containing piano, but some of them also contain other instruments and vocals. Therefore, we still have to apply source separation to them to separate the piano tracks and transcribe them. The training and test data are both Jazz with piano.\", \"source_separation_model\": \"MUSDB (four sources: bass, drums, vocals, other) + 4.5 hours of solo piano from Youtube as the fifth \\u201cpiano\\u201d source. Furthermore, the \\u201cother\\u201d tracks in MUSDB that contain piano playing are removed to avoid confusion between \\u201cother\\u201d and \\u201cpiano.\\u201d\\n\\nTo make it clearer, we have added Table 5 in appendix to list the datasets and their usage. \\n\\n\\n$\\\\textbf{4.}$ Regarding the split of the validation set\\n\\n$\\\\textbf{Ans:}$\\nThe validation split contains clips randomly sampled from all the clips in the training set, so the same track can be in both the training and validation splits. We did this for the following reason. The female jazz tracks collected from Youtube are all long, ranging from 35 minutes to two hours. If we split training and validation without track overlapping, the validation split would either include a large portion of the training set or contain clips only from very few (1 or 2) tracks. Therefore, we decide to split them with track overlapping.\\n\\n\\n$\\\\textbf{5.}$ Regarding the MelNet\\n\\n$\\\\textbf{Ans:}$\\nAfter reading the paper, we found that MelNet is indeed related to our work. \\n\\nFirst, MelNet and this paper both work on time-frequency representations, so the techniques to capture structural information in the TF representations might be used in our models too. We have referred to it in the conclusion as a direction of the future work.\\n\\nSecond, MelNet and our models both accommodate the unconditional and conditional generation. 
The unconditional generated speeches of MelNet are especially impressive to us as the phonemes are intelligible. It shows that MelNet can capture latent properties of speech. It is a promising way for us to explore.\\n\\nThird, there is also a major difference between MelNet and this paper. MelNet is applied to piano and speech, while we explore the generation of singing voices. Singing voice generation is somewhere between speech generation and music generation, so whether the techniques in MelNet can apply to our tasks still require investigations. \\n\\n\\n$\\\\textbf{6.}$ My main criticism is in relation to the evaluation. \\n\\n$\\\\textbf{Ans:}$\\nTo address this issue, we have added two baseline synthesis methods (Sinsy and Synthesizer V) in a new MOS study. In addition, we have also expanded Table 1 to include the objective metrics of the training data.\\n\\n\\n$\\\\textbf{7.}$ \\nFor Table 1, .... explains the differences in scores for the different model settings \\n\\n$\\\\textbf{Ans:}$\\nWe have included the objective metrics computed from the training data and the Sinsy singing as well as the discussion about them.\\n\\nFurthermore, we have updated the method of computing vocalness so that now the vocalness takes into account both the vocal activation and the pitch range of singing voices. The details are in Section 4.3.\\n\\n\\n$\\\\textbf{8.}$ Tables 1 and 2, and the provided audio samples have no context, so I cant make a conclusion. If this issue and motivation was addressed I would likely vote to accept the paper.\\n\\n$\\\\textbf{Ans:}$\\nWe have updated Table 1 and added Table 3 so that our methods can be compared with other methods.\", \"audio_samples_of_other_synthesis_methods_are_also_included_in_the_paper\": \"https://bit.ly/2qNrekv\"}",
"{\"title\": \"Response to the Review #4\", \"comment\": \"Thank you for the valuable comments. We address the issues and questions raised by the reviewer in the following comments.\\n\\n\\n$\\\\textbf{1.}$ Although the paper is fairly well-structured and written, especially in the early sections, I am giving a weak reject due to its weak evaluation. \\n\\n$\\\\textbf{Ans:}$\\nThank you for the comment. That is also the concern of all the reviewers. We have largely expanded the evaluation to include two types of baselines. \\na. We have included two baseline systems of singing voice synthesis: Sinsy and Synthesizer V.\\nb. We have also computed the metrics on the training data, so that there are more references for comparison.\\n\\n\\n$\\\\textbf{2.}$ The literature review can be, although it is nicely done overall, improved, especially on the neural network based singing voice synthesis.\\n\\n$\\\\textbf{Ans:}$\\nWe have expanded literature review to include more neural network based methods in Section 5. \\n\\n\\n$\\\\textbf{3.}$ Our conjecture is that, as the output of G(\\u00b7) is a sequence of vectors of variable length (rather than a fixed-size image), compressing the output of G(\\u00b7) may have lost information important to the task.\\n\\nI am rather not convinced. The difficulty to discriminate them doesn't seem to be (strongly) related to their variable length for me because a lot of papers have, at least indirectly, dealt with such a case successfully. \\n\\n$\\\\textbf{Ans:}$\\nYou are right, and we did not mean that the failure of training with the vanilla GAN loss is due to the variable-length sequences in our training set. In fact, in the training phase, we use fixed-length sequences. What we want to express is that the compression of a sequence (whether it is variable-length or not) into a single true/false value could be a cause for the failure. \\n\\nWe have revised the whole paragraph to clarify the description. We have also added a comparison of training with GAN, LSGAN, and BEGAN in Appendix D.\\n \\n\\n$\\\\textbf{4.}$ Regarding the transposing of Wikifonia chord progressions\\n\\n$\\\\textbf{Ans:}$\\nYes, there are only 12 keys. Thank you for spotting the typo. We have revised it. \\n\\n\\n$\\\\textbf{5.}$ the paper (Lee et al. 2018) suggested that vocal activity detection in the spectrum domain is easily affected by some features such as frequency modulation. I am not sure if this feature is suitable as a measure of the proposed vocalness. \\n\\n$\\\\textbf{Ans:}$ \\nThank you very much for this comment. Indeed, (Lee et al. 2018) shows that the frequency modulation is an important factor causing high false positive rates when some types of instruments are present in the songs. However, we compute the vocalness measures on the non-silence part of the generated singing voices only, without the accompaniment, so the effect of frequency modulation might not be as serious as the scenario investigated in (Lee et al. 2018).\\n\\nOn the other hand, we also agree that there are better ways to compute the vocalness, so we have devised a way to compute the vocalness taking into account both the vocal activation and the singing pitch range. \\n\\nFor the new vocalness, we use the JDC model (https://github.com/keums/melodyExtraction_JDC) for it represents the state-of-the-art. In this model, the pitch contour is also predicted in addition to the vocal activation. 
If the pitch at a frame is outside a reasonable human pitch range (73~988 Hz defined by JDC), the pitch is set to 0 at that frame. We consider a frame as being vocal if it has a vocal activation >= 0.5 AND has a pitch >0. Moreover, we define the vocalness of an audio clip as the proportion of its frames that are vocal. The tool is applied to the non-silence part of an audio.\\n\\nWe have revised the paper for this modification.\\n\\n\\n$\\\\textbf{6.}$ Average pitch: We use the state-of-the-art monophonic pitch tracker CREPE (Kim et al., 2018) ...\\n\\nI am pretty sure that this is not the right way to evaluate as a metric of generated vocal track. CREPE is a neural network based pitch tracker, which means it is probably biased by the training data, .... This means, when the F0 of input is not really in the right range, CREPE might incorrectly predict somewhat random F0 within THAT range anyway. I'd say the distribution of pitch values can be interesting metric to show and discuss, but not as a metric of vocal track generation.\\n\\n$\\\\textbf{Ans:}$\\nFirst of all, we have corrected our description of the \\u201cAverage pitch\\u201d to clarify that the \\u201cAverage pitch\\u201d is computed over all the frames with confidence value >= 0.5, not over all the frames. Second, we agree that there still could be incorrectly predicted F0 due to the bias of the data used to train CREPE. In our evaluation in Section 4.3, we use them to show that the two models trained with female vocals and male vocals do exhibit different characterizations in pitch, not as an absolute metric, so we think it is still a valuable metric. \\n\\nFurthermore, we add another way of computing average pitch by using JDC, where vocalness is also taken into account to filter out non-vocal frames.\"}",
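The rebuttal above defines the revised vocalness metric operationally: a frame is vocal iff its JDC activation is >= 0.5 and its range-gated pitch is nonzero, and vocalness is the proportion of such frames. Below is a minimal sketch of that rule; it is our re-implementation of the stated definition (names and the toy inputs are ours), not the authors' code.

```python
# Illustrative sketch of the vocalness metric described in the rebuttal.
# Inputs are per-frame outputs of a melody extractor such as JDC: a vocal
# activation in [0, 1] and an F0 estimate already zeroed outside 73-988 Hz.
# Per the rebuttal, this would be applied to the non-silence frames only.
import numpy as np

def vocalness(activation: np.ndarray, f0_hz: np.ndarray) -> float:
    """Proportion of frames that are 'vocal': activation >= 0.5 AND f0 > 0."""
    vocal = (activation >= 0.5) & (f0_hz > 0)
    return float(vocal.mean()) if len(vocal) else 0.0

# Toy example: 3 of 5 frames pass both tests -> vocalness = 0.6
act = np.array([0.9, 0.8, 0.2, 0.7, 0.6])
f0  = np.array([220., 330., 440., 0., 262.])
print(vocalness(act, f0))  # 0.6
```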
"{\"title\": \"Response to the Review #5\", \"comment\": \"Thank you for the valuable comments. We address the issues and questions raised by the reviewer in the following comments.\\n\\n$\\\\textbf{A.1.}$ \\u201cWe adapt the objective of BEGAN, which is originally for generating images, for generating sequences.\\u201d: The original BEGAN paper(Berthelot et al., 2017) did not address sequence modeling. \\n\\n$\\\\textbf{Ans:}$ \\nYes, with that sentence we did mean that \\\"BEGAN is originally proposed for generating images, not sequences. Therefore, to make it generate sequences, we have modified/adapted the objective of the BEGAN model\\\". We realize that the sentence (in the original version of our submission) may be confusing. To avoid misunderstanding, we have revised the whole paragraph for clarification. \\n\\n\\n$\\\\textbf{A.2.}$ \\u201cAs for the loss function of D(\\u00b7), our pilot experiments show that \\u2026\\u201d: This hand-wavy argument is unacceptable. The authors should be able to support all of the claims they\\u2019ve made, which sometimes require experimental results. \\n\\n$\\\\textbf{Ans:}$\\nWe have added a comparison of BEGAN, GAN, and LSGAN in Appendix D. It shows that BEGAN achieves the best convergence. Furthermore, we also show the objective metrics of the three Gan losses, and see that BEGAN achieves the best vocalness among the three GAN losses in Table 6.\\n\\n\\n$\\\\textbf{A.3.}$ \\u201cOur conjecture is that, \\u2026 compressing the output of G(\\u00b7) may have lost information important to the task.\\u201d: PatchGAN (Isola et al., 2017) had already addressed this issue. The authors may want to cite PatchGAN to support their conjecture or compare against PatchGAN to show their own architecture\\u2019s strength.\\n\\n$\\\\textbf{Ans:}$\\nWe have revised the paragraph of \\u201cOur conjecture is that...\\u201d for clarification and the sentence has been removed. However, after reading the paper, we do think that the approach of PatchGAN does share some similar rationale with BEGAN, so we have referred to it in the conclusion as a possible direction for the future work.\\n\\n\\n$\\\\textbf{A.4.}$ \\u201cWe like to investigate whether such blocks can help generating plausible singing voices in an unsupervised way.\\u201d: No ablation studies on GRUs and dilated convolutions are found. If the authors mean that they\\u2019re willing to do such studies in the future, \\u201cwhat\\u2019s done here\\u201d and \\u201cwhat will be done in the future\\u201d should be easily distinguished within the text.\\n\\n$\\\\textbf{Ans:}$\", \"we_have_rephrased_it_as_follows\": \"For it has demonstrated its capability in source separation, we adopt it as a building block of the singer models. \\n\\n\\n$\\\\textbf{B.1.}$ The readers won\\u2019t be able to estimate the strength of the proposed method by looking at table 1 and 2. I suggest doing one of the following: include results from other baselines to compare against or give a brief description of the metrics with typical values. (e.g. 
values shown in appendix A.3)\\n\\n$\\\\textbf{Ans:}$\\nWe have included both of them in the revised version.\\nIn Section 4, two synthesis baselines with Sinsy and Synthesizer V are included, and the typical values computed on the training data are provided.\\n\\n\\n$\\\\textbf{B.2.}$ Are the neural network architecture components described in section 3.1 used for both source separation and the synthesis network?\\n\\n$\\\\textbf{Ans:}$\\nFor the source separation, we use the original design by (Liu & Yang, 2019) with weight normalization for convolution layers. For the generation network, we replace the weight normalization with group normalization for convolution layers.\\n\\n\\n$\\\\textbf{B.3.}$ To make readers easily understand the contribution of this paper, there should be a detailed description of the limitation of this work. I suggest to move the details of experiments in section 4 to the appendix, but it may depend on the authors\\u2019 writing style.\\n\\n$\\\\textbf{Ans:}$\\nWe have revised Introduction so that the motivation and the scope of this work should be clearer in the revised version.\\n\\n\\n$\\\\textbf{B.4.}$ The \\u2018inner idea\\u2019 concept in the \\u201csolo singer\\u201d setting looks vague and contradicts with the main topic since it uses chord sequences to synthesize singing voice.\\n\\n$\\\\textbf{Ans:}$\\nBy score-free we mean that the model is not asked to strictly follow a pre-assigned score that contains pitch information. In the solo singer, the generated singing voices are not asked to follow the pitch notes in the chord sequence, so we still consider it to be a score-free approach.\\n\\nThe general \\u201cinner idea\\u201d concept is indeed vague in the paper, but it is vague because there could be many options of it. In the paper, we instantiate the inner idea by setting it to be a chord sequence, which provides the readers an example of what it could be and hopefully make it less vague.\\n\\n\\n$\\\\textbf{B.5.}$ Things to improve the paper that did not impact the score:\\n1. \\u201cimprov\\u201d >> \\u201cimprovement.\\u201d\\n\\n\\n$\\\\textbf{Ans:}$\\nThank you for pointing this out. We have corrected it.\"}",
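Since the exchange above repeatedly contrasts the BEGAN loss with the GAN and LSGAN losses, a compact sketch of the standard BEGAN objective (Berthelot et al., 2017) may be a useful reference. This is our schematic PyTorch illustration, not the paper's implementation; D, G, and the hyperparameters are placeholders.

```python
# Illustrative sketch of the BEGAN objective: the discriminator D is an
# autoencoder, and training balances real/fake reconstruction errors via
# the proportional-control variable k. Optimizer steps are omitted.
import torch

def ae_loss(D, x):
    """BEGAN uses an autoencoder discriminator; L(x) = |x - D(x)|_1."""
    return (x - D(x)).abs().mean()

def began_step(D, G, x_real, z, k, gamma=0.75, lambda_k=1e-3):
    x_fake = G(z)
    loss_real = ae_loss(D, x_real)
    loss_fake = ae_loss(D, x_fake.detach())
    loss_D = loss_real - k * loss_fake   # discriminator objective
    loss_G = ae_loss(D, x_fake)          # generator objective
    # Proportional control keeps E[L(G(z))] / E[L(x)] near gamma.
    k = min(max(k + lambda_k * (gamma * loss_real.item()
                                - loss_fake.item()), 0.0), 1.0)
    return loss_D, loss_G, k
```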
"{\"title\": \"Response to the Review # 1\", \"comment\": \"Thank you for the valuable comments. We address the issues and questions raised by the reviewer in the following comments.\\n\\n$\\\\textbf{1.}$ There is a good amount of literature on generating MIDI representations. One can simply generate MIDI (conditioned or unconditioned), and then give the result to a vocaloid like software. I am voting for a weak rejection as there is no comparison with any baseline. If you can provide a comparison with a MIDI based generation baseline, I can reconsider my decision. \\n\\n$\\\\textbf{Ans:}$\\nOne of the goals in this paper is to generate singing voice given an accompaniment, but generating a singing melody MIDI given an accompaniment is not a trivial task. To the best of our knowledge, there are many researches on generating melodies, piano solos, and scores of several instruments, but very few researches, if any, work on generating singing melodies given an accompaniment. Our model is one way to achieve the goal of generating singing given accompaniment without the intermediate MIDI file. \\n\\nAs suggested by the Reviewer 1, we have added two synthesis baselines to the paper, and have revised Section 4 accordingly. The two baselines are based on the well-known singing voice synthesis tools, Sinsy and Synthesizer V, that are publicly accessible. \\n\\n\\n$\\\\textbf{2.}$ Or, explain to me why training on raw waveforms like you do is more preferable. I think in the waveform domain may even be undesirable to work with, as you said you needed to do source separation, before you can even use the training data. This problem does not exist in MIDI for instance. \\n\\n$\\\\textbf{Ans:}$\\nWe believe that our score-lyrics-free approach and the score-lyrics-based (with both score and lyrics) approach are for different situations, so one approach is not preferable than the other in general. \\n\\nThe differences between these two approaches is in the types of the input conditions. The score-lyrics-based approach takes scores and lyrics as the condition. In contrast, our approach can take less strict and more diverse conditions. In this paper, we have demonstrated models that take no conditions, accompaniment conditions, or chord conditions. \\n\\nWhat we propose in this paper is that the conditions used in the singing voice generation systems do not need to be as strong as the existing systems. Score-lyrics-based systems are good at synthesizing singing voices when you want the system to synthesize exactly what you want, while our approach would do better when you want the machine to add some singing voices to your composition without designating the strict scores and lyrics.\\n\\nWe have also revised the Introduction to further discuss the motivations of using our approaches.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #6\", \"review\": \"This paper claims to be the first to tackle unconditional singing voice generation. It is noted that previous singing voice generation approaches leverage explicit pitch information (either of an accompaniment via a score or for the voice itself), and/or specified lyrics the voice should sing. The authors first create their own dataset of singing voice data with accompaniments, then use a GAN to generate singing voice waveforms in three different settings:\\n1) Free singer - only noise as input, completely unconditional singing sampling\\n2) Accompanied singer - Providing the accompaniment *waveform* (not symbolic data like a score - the model needs to learn how to transcribe to use this information) as a condition for the singing voice\\n3) Solo singer - The same setting as 1 but the model first generates an accompaniment then, from that, generates singing voice\\n\\nFirstly, the authors have done a lot of work - first making their own data, then designing their tasks and evaluating them. The motivation is slightly lacking - it is not clear why we are interested in these three task settings i.e. what we will learn from a difference in their performance, and there is a lack of discussion about which setting makes for better singing voice generation. Also, there is no comparison with other methods: whilst score data is not available it could be estimated, then used for existing models, providing a nice baseline e.g. first a score is extracted with a state of the art AMT method, then a state of the art score to singing voice generation method could be used.\\n\\nThere are existing datasets of clean singing voice and accompaniment, for example MIR-1k (unfortunately I think iKala, another dataset, is now unavailable). It is true that this dataset is small in comparison to the training data the authors generate, but it will certainly be cleaner. I would have liked to see an evaluation performed on this data as opposed to another dataset which was the result of source separation (the authors generate a held out test set on Jazz from Jamendo, on which they perform singing voice separation).\\n\\nI also had questions about the training data - there is very little information about it included other than it is in-house and covers diverse musical genres (page 6 under 4.1), a second set of 4.5 hours of solo piano, and a third set (?) of jazz singers. This was a bit confusing and could do with clarification. At minimum, I would like to know what genre we are restricting ourselves to - is everything Jazz? Are the accompaniments exclusively piano (it's alluded that the answer to this is no, but it's not clear to me)? 
Is there any difference between the training and test domain?\\n\\nOn page 6, second to last paragraph when discussing the validation set, I would like the sampling method to be specified - it makes a difference whether the same piece of music will be contained within both the training and validation split, or whether the source pieces (from which the 10 second clips are extracted) are in separate splits <- I'd recommend that setting.\\n\\nThe data used to train the model will greatly affect my qualitative assessment of the provided audio samples so, without a clear statement on the training data used, I can't really assess this.\\n\\nHowever, with respect to the provided audio samples, I'd first note that these are explicitly specified as randomly sampled, and not cherry picked, which is great, thank you. However, whilst I would admit that the domain is different, when the singing samples are compared with the piano generation unconditional samples of MelNet (https://audio-samples.github.io/#section-3), which I would argue is just as hard to make harmonically valid, they are not as harmonically consistent, even when an accompaniment has been provided. However, samples do sound like a human voice, and the pitch is relatively good. The words are unintelligible, but this is explicitly specified as out of scope for this paper, and I agree that this is much harder to achieve.\\n\\nAs an aside, MelNet is not cited in this paper and, given the similarity and relevance, I think it probably should be - https://arxiv.org/abs/1906.01083. It was published this year however so it would be a little harsh to expect it to be there. I would invite the authors to rebut this claim if they think the methods are not comparable.\\n\\nMy main criticism is in relation to the evaluation. For Table 2, without a baseline or the raw data (which would have required no further effort) included in the MOS study, it's very difficult to judge success. If the authors think that comparison with raw data is unfair (as it is an embryonic task) they could include a model which has an unfair advantage from the literature - e.g. uses extracted score information. \\n\\nFor Table 1, I appreciate the effort that went into the design of 'Vocalness' and 'Matchness' which are 'Inception Score' type metrics leaning on other learned models to return scores. I would like to see discussion which explains the differences in scores for the different model settings (there is a short sentence at the bottom of page 7, but nothing on vocalness).\\n\\nIn summary - this is a hard problem and the authors are the first to tackle it. The different approaches to solve the problem are not well motivated. However, the models are detailed, and well explained. Code is even provided, but data for training is not. If the authors were able to compare with a baseline (like that I describe above), it would go a long way to convincing me that this was good work. As it stands, Tables 1 and 2, and the provided audio samples have no context, so I can't make a conclusion. If this issue and the motivation were addressed I would likely vote to accept the paper.\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"1. p2 \\\"we hardly provide any labelled data\\\" specify whether you do or not (I think it's entirely unsupervised since you extract chord progressions and pitch curves using learned models...)\\n2. p2 \\\"...may suffer from the artifact\\\" -> the artefacts\\n3. 
p2 \\\"for the scenario addressed by the accompanied singer\\\" a bit clumsy, may be worth naming your tasks 1, 2 and 3 such that you can easily refer to them\\n4. p2 \\\"We investigate using conditional GAN ... to address this issue\\\" - which issue do you mean? If it is the issue specified at the top of the paragraph, i.e. that there are many valid melodies for a given harmony (no single ground truth), I don't think using a GAN is a *solution* to this per se. It is a valid model to use, and the solution would be enough varied data (and evaluation to show you're covering your data space and haven't collapsed to a few modes)\\n5. p2 \\\"the is no established ways\\\" -> there are no established ways\\n6. p3 \\\"Discriminators in GAN\\\" -> in the GAN\\n7. p6 \\\"piano playing audio on our own...\\\" -> piano playing on its own (or even just rephrase the sentence - collect 4.5 hours of audio of solo piano)\\n8. p7 \\\"We apply source separation to the audios divide them into ...\\\" -> we apply source separation to the audio data then divide each track into 20 second...\\n9. p7 If your piano transcription model was worse than Hawthorne, why didn't you use it? It would have been fine to say you can't reproduce their model if it is not available, but instead you say that 'according to out observation [it] is strong enough' which comes across quite weakly.\\n10. p8 \\\"in a quiet environment with proper microphone volume\\\" -> headphone volume?\\n11. p8 \\\"improv\\\" - I think this sentence trailed off prematurely!\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper tries to addresses an interesting problem of generating singing voice track under three different circumstances. Some of the problems that this paper deals with is a new problem and introduced first in this paper, which could be a contribution as well.\\n\\nAlthough the paper is fairly well-structured and written, especially in the early sections, I am giving a weak reject due to its weak evaluation. Evaluation is almost always difficult when it comes to generative art, but I am slightly more concerned than that. The literature review can be, although it is nicely done overall, improved, especially on the neural network based singing voice synthesis.\\n\\nI appreciate the authors tried to find a great problem and provided a good summary of the literature. Successfully training this kind of network itself is already tricky. It is also nice to see some interesting approaches towards objective evaluation. \\n\\nBelow are my comments. \\n\\n> Our conjecture is that, as the output of G(\\u00b7) is a sequence of vectors of variable length (rather than a fixed-size image), compressing the output of G(\\u00b7) may have lost information important to the task.\\n\\nI am rather not convinced. The difficulty to discriminate them doesn't seem to be (strongly) related to their variable length for me because a lot of papers have, at least indirectly, dealt with such a case successfully. \\n\\n> For data augmentation, we transpose the chord progressions found in Wikifonia to 24 possible keys\\n\\nWhat do you mean by 24 keys? I think there should be only 12 keys. \\n\\n> Vocalness measures whether a generated singing voice audio sounds like vocals. We use the singing voice detection tool proposed by Leglaive et al. (2015) and made available by Lee et al.(2018).\\n\\nActually, the paper (Lee et al. 2018) suggested that vocal activity detection in the spectrum domain is easily affected by some features such as frequency modulation. I am not sure if this feature is suitable as a measure of the proposed vocalness. The computed vocalness may provide more information if they are computed on other tracks (e.g., guitar, cello, drums, etc).\\n\\n> Average pitch: We use the state-of-the-art monophonic pitch tracker CREPE (Kim et al., 2018)8 to compute the pitch (in Hz) for each frame. The average pitch is computed by averaging the pitches of all the frames.\\n\\nI am pretty sure that this is not the right way to evaluate as a metric of generated vocal track. CREPE is a neural network based pitch tracker, which means it is probably biased by the training data, where the pitch values mostly range in that of common musical instruments/voices. This means, when the F0 of input is not really in the right range, CREPE might incorrectly predict somewhat random F0 within THAT range anyway. I'd say the distribution of pitch values can be interesting metric to show and discuss, but not as a metric of vocal track generation.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper has set a new problem: singing voice synthesis without any score/lyrics supervision. The authors provide a significance of such a problem in section 1. Also, the authors successfully design and implement a novel neural network architecture to solve the problem. It\\u2019s also notable that the authors kindly open-source their code to mitigate the reproducibility issue. This paper may serve as baseline results for the proposed problem in the future.\\n\\nDespite the significance of the problem and the novelty of the solution, this paper aims to solve too many problems at once. Unfortunately, some main ideas were not supported by experimental results or logical arguments with appropriate citations.\\n\\nThe authors seem to overly focus on the task itself, and thus haven\\u2019t pay much attention on supporting their choice of neural network architecture. Here are some points regarding this:\\n\\n1. \\u201cWe adapt the objective of BEGAN, which is originally for generating images, for generating sequences.\\u201d: The original BEGAN paper(Berthelot et al., 2017) did not address sequence modeling. \\n2. \\u201cAs for the loss function of D(\\u00b7), our pilot experiments show that \\u2026\\u201d: This hand-wavy argument is unacceptable. The authors should be able to support all of the claims they\\u2019ve made, which sometimes require experimental results. \\u201cG(\\u00b7)\\u201d of the following sentence should be D(\\u00b7).\\n3. \\u201cOur conjecture is that, \\u2026 compressing the output of G(\\u00b7) may have lost information important to the task.\\u201d: PatchGAN (Isola et al., 2017) had already addressed this issue. The authors may want to cite PatchGAN to support their conjecture or compare against PatchGAN to show their own architecture\\u2019s strength.\\n4. \\u201cWe like to investigate whether such blocks can help generating plausible singing voices in an unsupervised way.\\u201d: No ablation studies on GRUs and dilated convolutions are found. If the authors mean that they\\u2019re willing to do such studies in the future, \\u201cwhat\\u2019s done here\\u201d and \\u201cwhat will be done in the future\\u201d should be easily distinguished within the text.\", \"some_miscellaneous_points_worth_noting\": \"1. The readers won\\u2019t be able to estimate the strength of the proposed method by looking at table 1 and 2. I suggest doing one of the following: include results from other baselines to compare against or give a brief description of the metrics with typical values. (e.g. values shown in appendix A.3)\\n2. Are the neural network architecture components described in section 3.1 used for both source separation and the synthesis network?\\n3. To make readers easily understand the contribution of this paper, there should be a detailed description of the limitation of this work. I suggest to move the details of experiments in section 4 to the appendix, but it may depend on the authors\\u2019 writing style.\\n4. The \\u2018inner idea\\u2019 concept in the \\u201csolo singer\\u201d setting looks vague and contradicts with the main topic since it uses chord sequences to synthesize singing voice.\", \"things_to_improve_the_paper_that_did_not_impact_the_score\": \"1. 
\\u201cimprov\\u201d >> \\u201cimprovement.\\u201d\\n\\nThis paper should be rejected because (1) the paper failed to justify the main idea and results, (2) the amount of literature research was not enough, (3) too many problems were addressed at once, and (4) the writing is not clear enough.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, authors explore the problem of generating singing voice, in the waveform domain. There exists commercial products which can generate high fidelity sounds when conditioned on a score and or lyrics. This paper proposes three different pipelines which can generate singing voices without necessitating to condition on lyrics or score.\\n\\nOverall, I think that they do a good job in generating vocal like sounds, but to me it's not entirely clear whether the proposed way of generating melody waveforms is an overkill or not. There is a good amount of literature on generating MIDI representations. One can simply generate MIDI (conditioned or unconditioned), and then give the result to a vocaloid like software. I am voting for a weak rejection as there is no comparison with any baseline. If you can provide a comparison with a MIDI based generation baseline, I can reconsider my decision. Or, explain to me why training on raw waveforms like you do is more preferable. I think in the waveform domain may even be undesirable to work with, as you said you needed to do source separation, before you can even use the training data. This problem does not exist in MIDI for instance.\"}"
]
} |
Byeq_xHtwS | Neural Video Encoding | [
"Abel Brown",
"Robert DiPietro"
] | Deep neural networks have had unprecedented success in computer vision, natural language processing, and speech largely due to the ability to search for suitable task algorithms via differentiable programming. In this paper, we borrow ideas from Kolmogorov complexity theory and normalizing flows to explore the possibilities of finding arbitrary algorithms that represent data. In particular, algorithms which encode sequences of video image frames. Ultimately, we demonstrate neural video encoding using convolutional neural networks to transform autoregressive noise processes and show that this method has surprising cryptographic analogs for information security. | [
"Kolmogorov complexity",
"differentiable programming",
"convolutional neural networks"
] | Reject | https://openreview.net/pdf?id=Byeq_xHtwS | https://openreview.net/forum?id=Byeq_xHtwS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"QEh3-eRxB",
"SJl7REkMcr",
"ryg6ZC3jYB",
"SJgISm7mYS"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748393,
1572103370727,
1571700228543,
1571136318158
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2408/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2408/AnonReviewer3"
],
[
"ICLR.cc/2020/Conference/Paper2408/AnonReviewer1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper has several clarity and novelty issues.\", \"title\": \"Paper Decision\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper introduces convolutional neural networks to encode video sequences. In particular, it exploits an autoencoder style convolutional network to compress videos with certain ratios by modulating the number of convolution layers. Furthermore, it studies an autoregressive model to learn the sequential information over videos.\", \"strength\": \"The idea of using convolutional networks to encode the videos is interesting. \\nThe used autoencoder and autoregressive techniques are promising for video encoding.\", \"weakness\": \"The paper gives me an impression that there are very few works to apply convolutional networks (including autoencoder with autoregressive processing) to encode videos. I cannot see any existing works sharing the same motivation, and the paper does not evaluate any related works for comparison. But it seems the use of autoencoder and autoregressive techniques look very straightforward to me. Please clarify the concern. \\n\\nAbout technical contribution, it is not clear to see if there are any improvement over the conventional autoencoder and autoregressive models. It would be better to make some necessary discussions in the paper. In addition, it seems no descriptions showing how to learn the bijective mappings f_1, ..f_j.\\n\\nRegarding the evaluation, the paper uses visual results and multi-scale SSIM (mssim) to study the effectiveness of the proposed method on the KTH and DAVIS\\u201917 datasets. While a small number of video frames are used, it would be necessary to evaluate the overall (average) mssim on the whole dataset. \\n\\nThere are quite a few typos or grammatical problems on sentences such as \\u201c...algorithms which encode sequences of video image frames.\\u201d, \\u201cf is some convolutional neural network\\u2026\\u201d, \\u201canalogous to the use of key-frames in standard compression algorithms.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"I'm not sure what the contribution of this paper is. It seems to contain a variety of weakly related motivational examples. The paper begins by stating the computer vision advanced due to \\\"ability to search for suitable task algorithms via differentiable programming.\\\" But I don't think this claim is reasonable, i.e., I don't think CNNs are searching for task algorithms.\\n\\nThen the paper explains Kolmogorov complexity, but this is completely unrelated to the rest of the paper. At no point in this paper is this concept used and this motivation is not useful.\\n\\nThe paper then introduces the method, which is a standard, simple autoencoder. Some image evaluations of this are shown, but no contribution in terms of the model is made.\\n\\nFinally, the paper briefly mentions some connections to cryptography, but it is unclear what or why this connection matters.\\n\\nThe paper has no real experimental analysis, it doesn't seem to propose anything new or make any clear contribution, and overall, is quite unclear on what it is trying to do. The current paper is unsuitable for publication.\"}",
"{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors show that CNNs are somewhat able to compress entire videos within their parameters that can be reconstructed by an autoregressive process. This is an interesting idea that has been explored in a different context before (e.g., Deep Image Prior, Ulyanov et al. 2017). There is also plenty of work in the area of exploiting NNs for video compression/encoding (see [1] for instance). However, a bit unusual is the choice of compressing a video into network parameters, which is quite an expensive process using backprop. I could not find any motivation for why this would be a good idea, potentially because the paper does not state any explicit goal/contribution. In any case the authors show merely some mediocre qualitative results. There are no comparisons to prior work and no empirical results. The combination of methods also seems a bit arbitrary.\\n\\nTherefore this paper has no valuable contribution in its current form and I vote for rejection.\\n\\nMajor issues\\n- No empirical results\\n- No comparison to any baseline or prior work\\n- Confusing: the paper has almost no structure, e.g., there are almost no sections at all. Symbols are not used consitently (e.g., functions f and g).\\n\\nQuestions\\n- What's the goal of employing normalizing flows in this paper?\\n\\n[1] Image and Video Compression with Neural Networks: A Review. Ma et al. 2019.\"}"
]
} |
H1lK_lBtvS | Classification-Based Anomaly Detection for General Data | [
"Liron Bergman",
"Yedid Hoshen"
] | Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method, GOAD, to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types. The strong performance of our method is extensively validated on multiple datasets from different domains. | [
"anomaly detection"
] | Accept (Poster) | https://openreview.net/pdf?id=H1lK_lBtvS | https://openreview.net/forum?id=H1lK_lBtvS | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"_MHicfd1ezs",
"GNvMyUyfbXL",
"PgFDXBA3cl",
"HJeld8X2iB",
"BJgsGImnsr",
"SJgKg873oS",
"HJgOSSS6Fr",
"HkgZFGqjKH",
"SJx_brSjKH"
],
"note_type": [
"official_comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1587882772670,
1587443012602,
1576798748364,
1573824104003,
1573824018934,
1573823984679,
1571800383964,
1571689081051,
1571669248024
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Paper2406/Authors"
],
[
"~Thomas_G_Dietterich1"
],
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2406/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2406/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2406/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2406/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2406/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2406/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Response\", \"comment\": \"Thank you for your interest in our work!\\n\\nThe hypothesis that the affine transformations do something similar to LODA is very interesting, however we have some evidence to the contrary. Our method is not only successful when W is a projection matrix (and b=0). It is typically successful when W is a random permutation matrix. It also works in most cases when W is replaced by a random diagonal matrix (so again not projection). Additionally, it works when W=I and a random offset vector b (one can think of it as an additive source separation identification task) - the addition task (W=I, non-zero b) works well with numeric data however it has issues with categorical data (when x is mostly zeros, identifying which transformation b becomes too easy). The trends that we see are typical of RotNet-like methods, the auxiliary task should be not too easy nor too hard. So although averaging over nuisance factors might be contributing a bit, we believe predicting transformations provides the main contribution. We do agree that our method is only as good as the relations present between the variables - so if indeed a bag of words, linear model or a small number of variables are sufficient, classical methods would perform well, whereas if more complex relations need to be learned then deep methods are a good option (cifar10 treated as a tabular dataset is such an example). We also believe that these approaches can benefit from tabular-specific deep architectures over the fully-connected architecture used here, this is a direction for future research.\", \"more_detailed_response\": \"In followup research, we run Golan & El-Yaniv when the normal data contains multiple (unlabeled) classes and performance was indeed reduced, so we assume our method will have a similar behavior. We are currently working on improving our method for this setting.\", \"other_baselines\": \"On your suggestion, we evaluated Isolation Forest (IF) and LODA. IF performs very well on the smaller datasets (better than our method), LODA performs comparably to the baseline. This is not too surprising as we suffer from overfitting for small datasets. On KDD and KDDRev (which have more data) - our method significantly outperforms both. We expect that our method will achieve better results for more complex, large datasets.\\n\\nProtocol - We compared using the protocol in DAGMM, Zong et al., ICLR'18. We agree the protocol has the limitation that you mentioned, however the F1 score is the metric reported by that paper. The source of randomness in the tabular experiments is the split of normal data used for training and testing (and network initialization). The errors are standard deviations over the mentioned number of runs. The source of randomness in image experiments is the network initialization - in line with Golan and El-Yaniv. \\n\\nMargin - we did not use a principled procedure (or an exhaustive search over the margin values), we tried one or two values. The results did not seem to be very sensitive to them.\\n\\nOpen vs closed set methods - Intuitively, we mean that with closed-set softmax training, we are not able to say in advance that anomalous test data will be classified with high-probability as one of the transformations (correct or incorrect) or have equal probabilities for all transformations - whereas with openset training, anomalous data (with deep representations that are sufficiently different from normal) should have equal probabilities for all transformations.\"}",
"{\"title\": \"Some notes and questions for the authors\", \"comment\": \"We enjoyed reading this paper in my research group. This general paradigm, of training on auxiliary tasks, is an important one, and it is nice to see work building on Golan & El Yaniv's paper. Congratulations!\\n\\nWe have several questions, and we thought asking them publicly would allow you to answer them publicly, which would benefit everyone. \\n\\n1. How well does the method work as the number of \\\"known\\\" classes is increased? Other researchers, including in our group, have found that many AD and open category methods break down as the number of known classes gets large. \\n\\n2. In our experiments [1], we have found that the Isolation Forest [2] is an excellent anomaly detector for feature vector data. It is of course vastly cheaper to train than your method. We encourage you to compare against it. Note that you should vary the hyperparameters (number of trees, subsampling size), because as the dimensionality of the data increases, the number of trees should increase. \\n\\n3. A method that also employs random projections is LODA [3]. It is also extremely efficient. You should compare against it as well. \\n\\n4. The KDD1999 dataset is extremely simple. Virtually all anomalies can be detected via one-dimensional marginal distributions [4]. It should not be used any more. \\n\\n5. What is the meaning of the \\\\pm intervals in Tables 1 and 2? Are these 95% confidence intervals on AUC? If so, how were they computed? What source of variation is being controlled for?\\n\\n6. Why did you report F1 in Table 3 as opposed to AUC? I have never encountered a real anomaly detection application for which F1 is a sensible measure. The usual goal is to detect, say, 99% of anomalies while minimizing false alarms, so Precision @ 99% Detection (Recall) is a good metric. There are some applications where one wants very low false alarms, in which case Recall @ 1% False Alarms is an appropriate metric. You also should provide confidence intervals. You do report $\\\\sigma$: What does this mean and how is it computed? What source of variation is being measured?\\n\\n7. You did not actually answer Blind Reviewer #3's question about how you set s (although Table 4 is helpful)\\n\\n8. On p. 3, we could not understand the paragraph that begins \\\"A significant issue with this methodology\\\". Why would a learned classifier only be valid for samples from the training set? What does it mean for the predicted probabilities to have high variance? You aren't using variance as an anomaly score. We were also confused by the following paragraph that says \\\"training P(m|T(x,m)) = 1/M\\\". We think you mean training on a loss function to encourage the predicted distribution to be uniform across the transformations. In general, the notation is non-standard and very confusing.\", \"general_comment\": \"You don't use the full power of affine transformations (you set b=0), and it isn't clear what benefit the constant offset b would provide. We think it would be more accurate to refer to your technique as using random low-dimensional projections. We suspect that the reason that these work is different from the reason that the geometric transformations of Golan & El Yaniv work. Their transformations require the latent representation to capture some important semantic information about the task, whereas the random projections require the classifier to just preserve random information. 
For image data, a major challenge is \\\"nuisance novelty\\\" caused by unimportant changes in the background. Such changes are unlikely to be useful for predicting the geometric transformations, whereas intrinsic properties of the objects will be useful. \\n\\nIn contrast, in featurized data, the features are already meaningful. The main issue is sometimes that there are irrelevant features. This is why the sparse random projections of LODA and the random splitting of Isolation Forest are useful. By looking at lower-dimensional interactions, they are less likely to be fooled by global nuisance novelty caused by unique combinations of noisy, irrelevant variables. One way to test this hypothesis might be to use sparse random projections in your method (i.e., where you select a subset of the features and then project them to a space of similar dimensionality). Another experiment would be to add irrelevant features.\", \"references\": \"[1] A Meta-Analysis of the Anomaly Detection Problem. Emmott, Das, Dietterich, Fern, Wong. arXiv 1503.01158\\n[2] Liu, F. T., Ting, K. M., & Zhou, Z.-H. (2008). Isolation Forest. 2008 Eighth IEEE International Conference on Data Mining, 413\\u2013422. https://doi.org/10.1109/ICDM.2008.17\\n[3] Pevn\\u00fd, T. (2015). Loda: Lightweight on-line detector of anomalies. Machine Learning, November 2014. https://doi.org/10.1007/s10994-015-5521-0\\n[4] Siddiqui, M. A., Fern, A., Dietterich, T. G., & Wong, W.-K. (2015). Sequential Feature Explanations for Anomaly Detection. Proceedings of ODD 2015. http://arxiv.org/abs/1503.00038 . See Figure 4\"}",
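The sparse-projection experiment suggested above could start from LODA-style projection vectors, which in [3] carry roughly sqrt(d) non-zero N(0,1) entries each; this sketch only generates the projections and leaves the downstream detector open:

```python
import numpy as np

def sparse_projections(d, n_proj, seed=0):
    """LODA-style projections: ~sqrt(d) non-zero N(0,1) entries per vector."""
    rng = np.random.default_rng(seed)
    k = max(1, int(round(np.sqrt(d))))
    W = np.zeros((n_proj, d))
    for i in range(n_proj):
        idx = rng.choice(d, size=k, replace=False)  # pick a feature subset
        W[i, idx] = rng.normal(size=k)
    return W  # project with X @ W.T before histogramming or classification
```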
"{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper presents a method that unifies classification-based approaches for outlier detection and (one-class) anomaly detection. The paper also extends the applicability to non-image data.\\n\\nIn the end, all the reviewers agreed that the paper makes a valuable contribution and I'm happy to recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reponse\", \"comment\": \"We thank the reviewer for the dedicated and mostly positive review. We are pleased the reviewer recognized that our approach is interesting and well motivated and that it convincingly outperforms the state-of-the-art competitors on standard benchmarks.\\n\\nWe sincerely apologize that the editorial quality of the paper was not of the high standard that the reviewer naturally expected. We have significantly revised and improved it, including all the stylistic issues the reviewer highlighted. We believe the quality is now of a high standard.\\n\\n\\u201cconsider at least another image dataset besides CIFAR-10, e.g. Fashion-MNIST\\u201d: The results for FashionMNIST were added to the paper in the appendix. Overall our method achieves the best performance of all methods.\\n\\n\\u201cOn CIFAR-10, do you also consider geometric transformations\\u201d: We are using exactly the same geometric transformations as Golan and El-Yaniv [4]. As noted in the paper, geometric transformations are a special case of the affine transformation class. For CNNs to be maximally effective the transformation needs to be locality preserving. Using random affine matrices for images classified by CNNs did not perform competitively as it removed pixel locality information exploited by CNNs. This is different for tabular data where there is no order between different features, making random matrices a good choice of transformation. We updated this insight in the manuscript.\\n\\n\\u201chow do deep networks perform in contrast to the final linear classifier reported on most datasets?\\u201d: Results of deep classifiers are significantly better than linear methods for KDD, KDDRev which are data rich (linear accuracy forKDDCup99, KDDRev was around 80%). Results for deep classifiers are roughly similar to linear classifiers for data poor tasks (Thyroid, Arrhythmia).\\n\\n\\u201censemble baselines [2] should be considered\\u201d: we compared our method an ensemble baseline similar to [2] (implemented by the PyOD package) in exactly the same setting as our experiments. Results can be seen in the appendix. Our method outperforms the ensemble method on the tested datasets. The results are most remarkable on the larger datasets, on which deep classifiers have a distinct advantage.\\n\\n\\u201cTable 2 also seems incomplete with the variances missing\\u201d: Results are copied from Zong et al. did not contain variance, the variance values for these methods are missing in the table. All methods that we ran report variance results. We revised the paper to clarify this.\\n\\n\\u201cHow many transformations do you consider on the specific datasets?\\u201d: A graph with the accuracies for all datasets is shown in the appendix. Above a certain threshold the number is not critical, the reported experiments used 32 transformations were used for all datasets but Arrhythmia which used 64 (due to its high variance owing to its small size). We present in the appendix results for a larger number of transformations on the smaller datasets (1024 on Arrhythmia and Thyroid) with performance increases on these smaller datasets.\\n\\n\\u201cHow is hyperparameter s chosen?\\u201d: the value is not very sensitive. We found that a value of s=1.0 performed well in all datasets and is the recommended starting point. 
Although originally we ran Cifar10 using s=0.1, we present the same experiment with s=1.0 in the appendix with very similar numbers.\\n\\nWe followed the further ideas for improvement proposed by the reviewer. The style and editorial quality of the paper were much improved. The requested experiments were added. We will add further tabular experiments in the final version of the paper. The clarifications requested by the reviewer were added. We also clarified the \\u201cClassification\\u201d and \\u201cSelf-supervised\\u201d terms.\\n\\nWe are thankful for the reviewer\\u2019s detailed comments, which we believe have improved the paper.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the dedicated and positive review and are pleased the reviewer recognized the novelty of the approach, its state-of-the-art performance and computational scalability.\\n\\nAs requested by the reviewer, pseudo code for the algorithm was added to the paper.\\n\\nThe labels were indeed mislabeled in Fig.1, our approach achieved the better performance. We updated the figure to elucidate all issues brought to our attention by the reviewer.\\n\\nWe run the \\u201cnumber of tasks\\u201d experiment on the other datasets, they are shown in the appendix. For all datasets, increasing the number of tasks increases performance up to a certain point. From this point increasing the number of tasks mainly decreases variance between runs. For the smaller datasets, accuracy improves up to a higher number of transformations. We presented the results with the maximal number of transformations in the appendix. We elucidated the text related to this experiment.\\n\\nWe coined the acronym GOAD and use it for our method in the text. Thank you for this helpful suggestion.\"}",
"{\"title\": \"Reponse\", \"comment\": \"We thank the reviewer for the dedicated and positive review. We are pleased that the reviewer recognized the novelty and strong performance of our method across many data types, the novel adversarial robustness that it brings and its scalability across different computational regimes.\\n\\nWe presented the contamination results on KDDRev as this was the comparison made in the DA-GMM paper. To address the reviewer\\u2019s request for further experiments on contaminated data, we computed the results on the other datasets (Thyroid did not have enough anomalies to perform this experiment). The graphs are presented in the appendix. The trend is similar to that observed in the KDDRev experiment.\\n\\nThe type of transformation affects the results but not very significantly. We present results in the appendix for the affine transformation restricted to (i) permutation and (ii) rotation matrices. The results in line with the full affine transformation (typically a little lower). We have previously experimented with using randomized neural networks as the auxiliary transformations however in our preliminary experiments, the results were not as good as for the linear (affine, rotation, permutation) transformation classes.\\n\\nWe fixed the missing table reference in the revised version of the submission.\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Review: The paper proposes a technique for anomaly detection. It presents a novel method that unifies the current classification-based approaches to overcome generalization issues and outperforms the state of the art. This work also generalizes to non-image data by extending the transformation functions to include random affine transformations. A lot of important applications of anomaly detection are based on tabular data so this is significant. The \\u201cnormal\\u201d data is divided into M subspaces where there are M different transformations, the idea is to then learn a feature space using triplet loss that learns supervised clusters with low intra-class variation and high inter-class variation. A score is computed (using the probabilities based on the learnt feature space) on the test samples to obtain their degree of anomalousness. The intuition behind this self-supervised approach is that learning to discriminate between many types of geometric transformations applied to normal images can help to learn cues useful for detecting novelties.\", \"pros\": [\"There is an exhaustive evaluation and comparison across different types of data with the existing methods along with the SOTA.\", \"It is interesting to see how random transformations indeed helped to achieve adversarial robustness.\", \"The method is generalized to work on any type of data with arbitrary number of random tasks. It can even be used in a linear setting if needed for small datasets.\"], \"cons\": \"- While I liked that an analysis was done to see the robustness of the method on the contaminated data, I would be interested to see a more rigorous comparison in this fully unsupervised setting. \\n\\n\\nComments/Question:\\nDoes the selection of the transformation types affect the method performance at all? \\n\\nIn the Results section on Page 7, there are a couple of \\u201c??\\u201d instead of table numbers.\"}",
"{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel approach to classification-based anomaly detection for general data. Classification-based anomaly detection uses auxiliary tasks (transformations) to train a model to extract useful features from the data. This approach is well-known in image data, where auxiliary tasks such as classification of rotated or flipped images have been demonstrated to work effectively. The paper generalizes to the task by using the affine transformation y = Wx+b. A novel distance-based classification is also devised to learn the model in such as way that it generalizes to unseen data. This is achieved by modeling the each auxiliary task subspace by a sphere and by using the distance to the center for the calculation of the loss function. The anomaly score then becomes the product of the probabilities that the transformed samples are in their respective subspaces. The paper provides comparison to SOT methods for both Cifar10 and 4 non-image datasets. The proposed method substantially outperforms SOT on all datasets. A section is devoted to explore the benefits of this approach on adversarial attacks using PGD. It is shown that random transformations (implemented with the affine transformation and a random matrix) do increase the robustness of the models by 50%. Another section is devoted to studying the effect of contamination (anomaly data in the training set). The approach is shown to degrade more gracefully than DAGMM on KDDCUP99. Finally, a section studies the effect of the number of tasks on the performance, showing that after a certain number of task (which is probably problem-dependent), the accuracy stabilizes.\", \"pros\": [\"A general and novel approach to anomaly detection with SOT results.\", \"The method allows for any type of classifier to be used. The authors note that deep models perform well on the large datasets (KDDCUP) while shallower models are sufficient for smaller datasets.\", \"The paper is relatively well written and easy to follow, the math is clearly laid out.\"], \"cons\": [\"The lack of a pseudo-code algorithm makes it hard to understand and reproduce the method\", \"Figure 1 (left) has inverted colors (DAGMM should be blue - higher error).\", \"Figure 1 (right) - it is unclear what the scale of the x-axis is since there is only 1 label. Also the tick marks seem spaced logarithmically, which, if i understand correctly, is wrong.\", \"The paragraph \\\"Number of operations\\\" should be renamed \\\"Number of tasks\\\" to be consistent. Also the sentence \\\"From 16 ...\\\" should be clarified, as it seems to contrast accuracy and results, which are the same entity. The concept of 'stability of results' is not explained clearly. It would suffice to say: 'From 16 tasks and larger, the accuracy remains stable'.\", \"In section 6, the paragraph \\\"Generating many tasks\\\" should be named \\\"Number of tasks\\\", to be consistent with the corresponding paragraph in section 5.2. Also the first sentence should be: \\\"As illustrated in Figure 1 (right), increasing the number of tasks does result in improved performance but the trend is not linear and beyond a certain threshold, no improvements are made. And again the concept of 'stability' is somewhat misleading here. 
The sentence '...it mainly improves the stability of the results' is wrong. The stability is not improved; it is just that the performance trend is stable.\", \"The study on the number of tasks should be carried out on several datasets. Only one dataset is too few to make any claims on the accuracy trends as the number of tasks is increased.\", \"The authors should coin an acronym to name their methods.\", \"Overall this paper provides a novel approach to classification-based semi-supervised anomaly detection of general data. The results are very encouraging, beating SOT methods by a good margin on standard benchmarks.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"UPDATE:\\nI acknowledge that I\\u2018ve read the author responses as well as the other reviews.\\n\\nI appreciate the clarifications, additional experiments, and overall improvements made to the paper. I updated my score to 6 Weak Accept. \\n\\n\\n####################\\n\\nThis paper proposes a deep method for anomaly detection (AD) that unifies recent deep one-class classification [6] and transformation-based classification [3, 4] approaches. The proposed method transforms the data to $M$ subspaces via $M$ random affine transformations and identifies with each such transformation a cluster centered around some centroid (set as the mean of the respectively transformed samples). The training objective of the method is defined by the triplet loss [5] which learns to separate the subspaces via maximizing the inter-class as well as minimizing the intra-class variation. The anomaly score for a sample is finally given by the sum of log-probabilities, where each transformation-/cluster-probability is derived from the distance to the cluster center. Using random affine transformations, the proposed method is applicable to general data types in contrast to previous works that only consider geometric transformations (rotation, translation, etc.) on image data [3, 4]. The paper conclusively presents experiments on CIFAR-10 and four tabular datasets (Arrhythmia, Thyroid, KDD, KDD-Rev) that indicate a superior detection performance of the proposed method over baselines and deep competitors.\", \"i_think_this_paper_is_not_yet_ready_for_acceptance_due_to_the_following_main_reason\": \"(i) The experimental evaluation needs clarification and should be extended to judge the significance of the empirical results.\\n\\n(i) I think the comparison with state-of-the-art deep competitors [6, 4] should consider at least another image dataset besides CIFAR-10, e.g. Fashion-MNIST or the recently published MVTec [1] for AD. On CIFAR-10, do you also consider geometric transformations however using your triplet loss or are the reported results from random affine transformations? I think reporting both would be insightful to see the difference between image-specific and random affine transformations.\\nOn the tabular datasets, how do deep networks perform in contrast to the final linear classifier reported on most datasets? Especially when only using a final linear classifier, the proposed method is very similar to ensemble learning on random subspace projections. Figure 1 (right) shows an error curve that is also typical for ensemble learning (decrease in mean error and reduction in overall variance). I think this should be discussed and ensemble baselines [2] should be considered for a fair comparison. Table 2 also seems incomplete with the variances missing for some methods?\\nFurther clarifications are needed. How many transformations $M$ do you consider on the specific datasets? How is hyperparameter $s$ chosen?\\nFinally, I think the claim that the approach is robust against training data contamination is too early from only comparing against the DAGMM method on KDDCUP (Is Figure 1 (left) wrong labeled? 
As presented DAGMM shows a lower classification error).\\n\\nOverall, I think the paper proposes an interesting unification and generalization of existing state-of-the-art approaches [6, 4], but I think the experimental evaluation needs to be more extensive and clarified to judge the potential significance of the results. The presentation of the paper also needs some polishing as there are many typos and grammatical errors in the current manuscript (see comments below).\\n\\n\\n####################\\n*Additional Feedback*\\n\\n*Positive Highlights*\\n1. Well motivated anomaly detection approach that unifies existing state-of-the-art deep one-class classification [6] and transformation-based classification [3, 4] approaches that indicates improved detection performance and is applicable to general types of data.\\n2. The work is well placed in the literature. All relevant and recent related work is included in my view.\\n\\n*Ideas for Improvement*\\n3. Extend and clarify the experimental evaluation as discussed in (i) to infer statistical significance of the results.\\n4. I think many details from the experimental section could be moved to the Appendix leaving space for the additional experiments.\\n5. Maybe add some additional tabular datasets as presented in [2, 7].\\n6. Maybe clarify \\u201cClassification-based AD\\u201d vs. \\u201cSelf-Supervised AD\\u201d a bit more since unfamiliar readers might be confused with supervised classification.\\n7. Improve the presentation of the paper (fix typos and grammatical errors, improve legibility of plots)\\n8. Some practical guidance on how to choose hyperparameter $s$ would be good. This may just be a default parameter recommendation and showing that the method is robust to changes in s with a small sensitivity analysis.\\n\\n*Minor comments*\\n9. The set difference is denoted with a backslash not a forward slash, e.g. $R^L \\\\setminus X$.\\n10. citet vs citep typos in the text (e.g. Section 1.1, first paragraph \\u201c ... Sakurada & Yairi (2014); ...\\u201d)\\n11. Section 1.1: \\u201cADGMM introduced by Zong et al. (2018) ...\\u201d \\u00bb \\u201cDAGMM introduced by Zong et al. (2018) ...\\u201d.\\n12. Eq. (1): $T(x, \\\\tilde{m})$ in the first denominator as well.\\n13. Section 2, 4th paragraph: $T(x, \\\\tilde{m}) \\\\in R^L \\\\setminus X_{\\\\tilde{m}}$.\\n14. $m$, $\\\\tilde{m}$, and $m'$ are used somewhat inconsistently in the text.\\n15. Section 3: \\u201cNote, that it is defined everywhere.\\u201d?\\n16. Section 4: \\\"If $T$ is chosen deterministicaly ...\\\" >> \\\"If $T$ is chosen deterministically ...\\\"\\n17. Section 5, first sentence: \\u201c... to validate the effectiveness our distance-based approach ...\\u201d \\u00bb \\u201c... to validate the effectiveness of our distance-based approach ...\\u201d.\\n18. Section 5.1: \\u201cWe use the same same architecture and parameter choices of Golan & El-Yaniv (2018) ...\\u201d \\u00bb \\u201cWe use the same architecture and parameter choices as Golan & El-Yaniv (2018) ...\\u201d\\n19. Section 5.2: \\u201cFollowing the evaluation protocol of Zong et al. Zong et al. (2018) ...\\u201d \\u00bb \\u201cFollowing the evaluation protocol of Zong et al. (2018) ...\\u201d.\\n20. Section 5.2: \\u201cThyroid is a small dataset, with a low anomally to normal ratio ...\\u201d \\u00bb \\u201cThyroid is a small dataset, with a low anomaly to normal ratio ...\\u201d.\\n21. Section 5.2, KDDCUP99 paragraph: \\u201cTab. ??\\u201d reference error.\\n22. Section 5.2, KDD-Rev paragraph: \\u201cTab. 
??\\u201d reference error.\\n\\n\\n####################\\n*References*\\n\\n[1] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger. Mvtec ad\\u2013a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9592\\u20139600, 2019.\\n[2] J. Chen, S. Sathe, C. Aggarwal, and D. Turaga. Outlier detection with autoencoder ensembles. In SDM, pages 90\\u201398, 2017.\\n[3] S. Gidaris, P. Singh, and N. Komodakis. Unsupervised representation learning by predicting image rotations. In ICLR, 2018.\\n[4] I. Golan and R. El-Yaniv. Deep anomaly detection using geometric transformations. In NIPS, 2018.\\n[5] X. He, Y. Zhou, Z. Zhou, S. Bai, and X. Bai. Triplet-center loss for multi-view 3d object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1945\\u20131954, 2018.\\n[6] L. Ruff, R. A. Vandermeulen, N. G\\u00f6rnitz, L. Deecke, S. A. Siddiqui, A. Binder, E. M\\u00fcller, and M. Kloft. Deep one-class classification. In International Conference on Machine Learning, pages 4393\\u20134402, 2018.\\n[7] L. Ruff, R. A. Vandermeulen, N. G\\u00f6rnitz, A. Binder, E. M\\u00fcller, K.-R. M\\u00fcller, and M. Kloft. Deep semi-supervised anomaly detection. arXiv preprint arXiv:1906.02694, 2019.\"}"
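For reference, the triplet-center-style objective this review refers to, following He et al. (2018) [5], with margin $s$, feature map $f$, and per-transformation centers $c_m$; the paper's exact per-sample form may differ:

```latex
\mathcal{L} = \sum_i \max\Bigl( 0,\; s + \| f(T(x_i, m_i)) - c_{m_i} \|^2
      \;-\; \min_{m' \neq m_i} \| f(T(x_i, m_i)) - c_{m'} \|^2 \Bigr)
```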
]
} |
SJeuueSYDH | Distributed Training Across the World | [
"Ligeng Zhu",
"Yao Lu",
"Yujun Lin",
"Song Han"
] | Traditional synchronous distributed training is performed inside a cluster, since it requires a high-bandwidth, low-latency network (e.g. 25Gb Ethernet or InfiniBand). However, in many application scenarios, training data are often distributed across many geographic locations, where physical distance is long and latency is high. Traditional synchronous distributed training cannot scale well under such limited network conditions. In this work, we aim to scale distributed learning under high-latency networks. To achieve this, we propose delayed and temporally sparse (DTS) update, which enables synchronous training to tolerate extreme network conditions without compromising accuracy. We benchmark our algorithms on servers deployed across three continents in the world: London (Europe), Tokyo (Asia), Oregon (North America) and Ohio (North America). Under such challenging settings, DTS achieves a 90× speedup over traditional methods without loss of accuracy on ImageNet. | [
"Distributed Training",
"Bandwidth"
] | Reject | https://openreview.net/pdf?id=SJeuueSYDH | https://openreview.net/forum?id=SJeuueSYDH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"YIcgBqa2U",
"BkgiDncnjr",
"HJgi2xt3sB",
"BJxcVFy2oS",
"BJlVOGFcjS",
"BJxCgGFcjH",
"ryxOXZKcsH",
"r1eT_7l55r",
"HJeMe67H5S",
"HyeMuX_J9H",
"rJlp29pRtS",
"SJxf5f3pDr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1576798748335,
1573854306905,
1573847218610,
1573808433805,
1573716587941,
1573716470129,
1573716256131,
1572631413196,
1572318441831,
1571943273838,
1571900084856,
1569731210436
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2404/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2404/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2404/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2404/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2404/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2404/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2404/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2404/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2404/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2404/AnonReviewer3"
],
[
"~Boris_Ginsburg1"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The paper introduces a distributed algorithm for training deep nets in clusters with high-latency (i.e. very remote) nodes. While the motivation and clarity are the strengths of the paper, the reviewers have some concerns regarding novelty and insufficient theoretical analysis.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Our General Response\", \"comment\": \"We sincerely thank all reviewers for their comments. All reviewers agree that the paper is clearly written and has certain contributions to against latency. R2 & R3 mainly concern about the convergence guarantees, which we have justified through proof in the appendix (guarantee to converge and no slower than SGD). For unclear parts in writing, we have revised accordingly in the updated version.\\n\\nWe have added experiments on NLP tasks and DTS well-preserves the accuracy while tolerating a high latency. Some experiments (e.g., transformer and BERT) takes a longer time than the rebuttal period. We will update these results later.\", \"our_code_is_available_on\": \"https://drive.google.com/drive/folders/1XdneMoRPNooN3-dj6yFwFcQRQO6OhLB1?usp=sharing\\n\\nIf there are any additional comments/suggestions on paper/code, please let us know!\"}",
"{\"title\": \"Re: Thank you for the detailed answer\", \"comment\": \"Thank you for your comment! We would try to address your concern on gradient coherence.\\n\\nFirst, let us consider the standard SGD algorithm (on a single machine). When we sample a mini-batch, there is no guarantee that the specific stochastic gradient on this specific mini-batch will point to the optimum. What we could guarantee is in expectation they are pointing to the optimum. \\n\\nNow let us consider the delay update with delay t. It is true that the iterates on each machine are different (i.e. not synchronized), but our update strategy guarantees that they are not far from each other. This is characterized by Lemma A.1 in the appendix, showing that the difference of local iterates are bounded by a constant proportional to t. This implies that the stale gradient does not deviate too much from the current gradients in expectation when t is reasonable (because of Lipschitz smoothness). \\n\\nWe hope this clarifies your concern. Finally, our method is 1) theoretically sound; 2) effective on tolerating high latency and achieving high scalability, which we believe has a lot of potential in large-scaled applications. We will appreciate if you could reconsider the score based on this contribution. Thank you!\"}",
"{\"title\": \"Thank you for the detailed answer\", \"comment\": \"Dear authors, I acknowledge your detailed answer and believe this paper do have some merits. Glad that the comment on t inspired that warm up strategy. The theoretical analysis answers some of my concerns, but I remain doubtful that it adresses the question of gradient coherence, this is a concern I have not with your paper alone but with a few other on asynchronous SGD: the hypothesis that current and stale gradients will somehow point to the same half-space and decrease the loss needs much more exploration than what the community did so far (assuming it holds almost for free). I would highly suggest you explore this in any future iteration of this (relevant) work.\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"Thanks for your comments! For the theoretical parts, we attach proof of convergence and the connection between our algorithm and variance reduction (please see Appendix updated PDF). We show that our proposed delayed update and temporally sparse update have the same convergence rate as original synchronous distributed SGD.\\n\\n>>> none convergence / convergence analysis / compared with SSGD\\n\\nBoth R2 and you have raised this concern. To address it, we have added a convergence analysis of our algorithm. Under the non-convex smooth optimization setting, we show that our algorithms enjoy the same convergence rate as SSGD. Moreover, we make a new connection of our methods to the well-known variance reduction technique [4]. We believe that this connection provides intuition and insights on the effectiveness of our algorithm. All the details are included in appendix.\\n\\nThe convergence rate of our method is given by\\n\\n$$ O(\\\\frac{1}{\\\\sqrt{NJ}}) + O{\\\\frac{t^2J}{N}} $$ \\n\\nwhere $t$ is the delay parameter, $N$ is the number of iterations, $J$ is the number of workers. When $t$ is reasonable ( $t < O(N^{\\\\frac{1}{4}} J^{-\\\\frac{3}{4}})$ ), the first term dominates and the convergence rate is identical to the convergence rate of SSGD. \\n\\n>>> delaying the updates opens up for the same problems as asynchronous SGD (ASGD)\\n\\nThe biggest known problem for ASGD is the stale gradients, though it is also where the scalability benefits from. The main challenge is to reduce the noise brought by staleness. There are studies on (1) dampening the stale gradients [1] (2) correct the gradients [2] (3) limit the maximum staleness [3]. In order to deal with the stale gradient, our work incorporates the well-known variance reduction technique, which naturally reduces the noise. Our theoretical analysis justifies the soundness of the proposed strategy.\\n\\nMoreover, the latency grows, ASGD suffers from performance degrade because of increasing staleness of gradients. On ImageNet, \\n The strength of our method is to allow high scalability with negligible loss on performance.\\n\\n>>> Using only one dataset/network to evaluate the approach is too little\\n\\nOn one hand, ImageNet is a large scale dataset and ResNet is a widely adapted models in both industry and research. We believe the improvements on ResNet / ImageNet will consistently translate into other models and tasks. \\n\\nOn the other hand, we have performed additional experiments on NLP tasks. We experiment DTS on 2 layer LSTM on the Penn Treebank corpus (PTB) dataset. The results are attached below. DTS well-preserves the accuracy up to 20 delay steps and temporal sparsity of 1/20.\\n\\n\\n Perplexity | Perplexity |\\nOriginal 72.30 | Original 72.30 |\\nDelay(t)=4 72.28 | Temporal Sparse (p)=4 72.27 |\\nDelay(t)=10 72.29 | Temporal Sparse (p)=10 72.31 |\\nDelay(t)=20 72.27 | Temporal Sparse (p)=20 72.28 |\\n\\n>>> Definition of scalability\\n\\nThe idea case in distributed training is that N machines bring N times speedup. However, this is usually not achievable in practice because of communication costs (latency and bandwidth). If a system can achieve M times speedup on N machines, the scalability is M / N, which demonstrates how scalable the system is. 
We will clarify this point in the revision.\\n\\n>>> Related work / Novelty\\n\\nWe agree with the reviewer that sparse communication has been studied previously and our temporal sparsity variant is similar to them as a strategy to amortize latency. We will add more related works including [5, 6, 7]. Please kindly advise if we miss any. However, to tolerate the latency, we have proposed a variant with delayed update, which is completely novel and new, according to the best of our knowledge. \\n\\nExisting methods with sparse communication do not scale well when latency grows since they only amortize it. Conventional ASGD based approaches, though scales well with growth of latency, suffer from performance drop because of increasing staleness. DTS is the first work to both maintain good scalability (0.72) and achieve promising results (no drop on accuracy) on modern models (ResNet-50) and large scale dataset (ImageNet).\\n\\n[1] Communication compression for decentralized training.\\n[2] Staleness-aware async-sgd for distributed deep learning.\\n[3] More effective distributed ml via a stale synchronous parallel parameter server.\\n[4] Accelerating stochastic gradient descent using predictive variance reduction.\\n[5] Communication-efficient learning of deep networks from decentralized data.\\\" \\n[6] Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning.\\n[7] Deep learning with elastic averaging SGD.\"}",
"{\"title\": \"Response to reviewer #2\", \"comment\": \"Thanks for your detailed and constructive comments! For the theoretical parts, we attach proof of convergence and the connection between our algorithm and variance reduction (please see Appendix updated PDF). We show that our proposed delayed update and temporally sparse update have the same convergence rate as original synchronous distributed SGD.\\n\\n>>> Experimental settings\", \"we_are_using_homogeneous_infrastructure_for_training\": \"All servers are communicated through allreduce and there is no central parameter server. We will make it clear in revision.\\n\\n>>> Theoretical soundness \\n\\nBoth R3 and you have raised this concern. To address it, we have added a convergence analysis of our algorithm. Under the non-convex smooth optimization setting, we show that our algorithms enjoy the same convergence rate as SSGD. Moreover, we make a new connection of our methods to the well-known variance reduction technique [4]. We believe that this connection provides intuition and insights on the effectiveness of our algorithm. All the details are included in appendix.\\n\\n>>> Limitation on value of t / Arbitrarily old gradients when close to convergence \\n\\nThe convergence rate of our method is given by\\n\\n$$ O(\\\\frac{1}{\\\\sqrt{NJ}}) + O{\\\\frac{t^2J}{N}} $$ ,\\n\\nwhere $t$ is the delay parameter, $N$ is the number of iterations, $J$ is the number of workers. When $t$ is in a reasonable range ( $t < O(N^{\\\\frac{1}{4}} J^{-\\\\frac{3}{4}})$ ), the first term dominates and the convergence rate is identical to the convergence rate of SSGD. \\n\\nWhen $t=0$, there is no staleness, the update is identical to SSGD. \\nWhen $t=N$, the algorithm becomes identical as local SGD and does not converge.\\n\\nAs shown in the convergence analysis, $t$ has to be a reasonable value and workers cannot take any arbitrary old gradient. However, $t$ can be set to larger when training close to convergence (the first term in convergence rate dominates). Your observation inspires us to design a warm-up strategy: $t$ is initially set to a small value and grows during optimization. With this technique, we can double the maximum staleness on CIFAR (no loss on accuracy). \\n\\n>>> Intuitions / Why no dampening / Difference between ASGD\\n\\nThe biggest known problem for ASGD is the stale gradients, though it is also where the scalability benefits from. The main challenge is to reduce the noise brought by staleness. There are studies on (1) dampening the stale gradients [1] (2) correct the gradients [2] (3) limit the maximum staleness [3]. In order to deal with the stale gradient, our work incorporates the well-known variance reduction technique, which naturally reduces the noise. Our theoretical analysis justifies the soundness of the proposed strategy.\\n\\nOne thing worth noting is that though the scalability of ASGD variants does not vary much with the growth of latency, the accuracy will drop because increasing staleness, which is not we want. That is why we mainly compare with SSGD in our initial submission. The strength of our method is to allow high scalability with a negligible loss of accuracy. \\n\\nWe agree with the reviewer that we should have compared to ASGD and it is an omission from our part. We now have added comparisons to vanilla ASGD and ECD-PSGD (an improved version of ASGD) in Fig 1. 
When training across the world, the performance of ECD-PSGD drops quickly where DTS has no drop on accuracy while maintaining good scalability.\\n\\n>>> Sparse Communication \\n\\nDistributed training has been focusing on training inside a cluster where latency is also promised to be low. Most previous works consider more about bandwidth rather than latency. Though conventional approaches like sparse communication can be also applied to amortize latency, it cannot fully cover the communication by computation especially when the latency is high (e.g., federated learning).\\n\\nWe agree with the reviewer that sparse communication has been studied previously and our temporal sparsity variant is similar to them. However, the variant on the delayed update is completely novel, according to the best of our knowledge.\\n\\nMore importantly, as shown in Fig.4, delayed update is more effective than sparse communication (temporal sparse update) when against latency. It is worth noting that delayed update is NOT a sparse communication: information are sent received at every iteration just as SSGD. The main difference is that the received information is processed in a delayed manner. While sparse communication can only amortize the latency, our delayed technique can fully cover the cost of communication by computation.\\n\\n[1] Tang, Hanlin, et al. \\\"Communication compression for decentralized training.\\\"\\n[2] Zhang, Wei, et al. \\\"Staleness-aware async-sgd for distributed deep learning.\\\" \\n[3] Ho, Qirong, et al. \\\"More effective distributed ml via a stale synchronous parallel parameter server.\\\"\\n[4] Johnson, Rie, and Tong Zhang. \\\"Accelerating stochastic gradient descent using predictive variance reduction.\\\"\"}",
"{\"title\": \"Response to reviewer #1\", \"comment\": \"Thanks for your reviewer. Please see point-to-point response below:\\n\\n>>> The definition of scalability\\n\\nThe ideal case in distributed training is that N machines bring N times speedup. However, this is usually not achievable in practice because of communication costs (latency and bandwidth). If a system can achieve M times speedup on N machines, the scalability is M / N, which demonstrates how scalable the system is. We will make this point clear in the revision.\\n\\n>>> Evaluation only for ResNet-50 on ImageNet\\n\\nWe agree that ResNet-50 is a more compute-intensive model. However, as shown in Fig1, 4.a, and 4.b, even for models with high compute/network ratio, the training efficiency of conventional approaches are still seriously affected by latency. It could be only worse when dealing with communication-intensive models.\\n\\nWe have added more experiments on NLP tasks as suggested by the reviewer. We experimented DTS on 2 layer LSTM on the Penn Treebank corpus (PTB) dataset. The results are attached below. DTS well-preserves the accuracy up to 20 delay steps and temporal sparsity of 1/20.\\n\\n\\n Perplexity | Perplexity |\\nOriginal 72.30 | Original 72.30 |\\nDelay(t)=4 72.28 | Temporal Sparse (p)=4 72.27 |\\nDelay(t)=10 72.29 | Temporal Sparse (p)=10 72.31 |\\nDelay(t)=20 72.27 | Temporal Sparse (p)=20 72.28 |\\n\\n>>> Experiment settings \\n\\nApology for the confusion. We experiment on 16 eight-card GPU servers and obtain scalability of 0.008. We intend to emphasize how poor the scalability of SSGD is, that even with 100 workers (assume the same scalability 0.008), it is still slower than a single machine. We will clarify this point.\\n\\n>>> Further comments\\n\\nWe have added convergence analysis of our algorithm in the appendix. This justifies the theoretical soundness of our algorithm. Moreover, we bridge a new connection between our delayed variant and the well-known variance reduction technique in convex optimization, we believe this provides more intuition and insight on the empirical effectiveness of our method.\"}",
"{\"title\": \"Thanks for your interest!\", \"comment\": \"1. Yes. It is required to store previous gradients for calculating error compensation. Notably it just stores locally and does not bring any extra cost to the communication.\\n2. We use \\u201cw\\u2019 = w + lambda * grad\\u201d as the update formula. We will rewrite the equations to be consistent with the mainstream. Apologize for the confusion.\\n3. Yes, it works for NLP task. We have experimented DTS on 2 layer LSTM on the Penn Treebank corpus (PTB) dataset. The results are attached below. DTS well-preserves the accuracy up to 20 delay steps and temporal sparsity of 1/20.\\n\\n\\n Perplexity | Perplexity |\\nOriginal 72.30 | Original 72.30 |\\nDelay(t)=4 72.28 | Temporal Sparse (p)=4 72.27 |\\nDelay(t)=10 72.29 | Temporal Sparse (p)=10 72.31 |\\nDelay(t)=20 72.27 | Temporal Sparse (p)=20 72.28 |\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors provide a technique to compensate for error introduced by stale or infrequent synchronization updates in distributed deep learning.\\n\\nTheir solution is to use local gradient information to update the model, and once delayed gradient information from other workers arrives, the use it to provide a correction which would give an equivalent result to \\\"no delay\\\" synchronization in the case of linear model training.\", \"the_three_approaches_are\": \"1. Delayed update -- use local gradients for immediate update, apply correction when stale averaged update arrives. \\n2. Sparse update -- only update once every p iterations, the averaged update includes p steps\\n3. Combined -- 1. and 2. combined\\n\\nAuthors evaluate on ImageNet, showing some improvement, and promise to release their implementation for PyTorch/Horovod. Their technique, combined with a reference implementation in a popular framework stands a good chance of having impact. Given the increase in cloud training workloads, even a small improvement in this setting is significant.\", \"comments\": [\"\\\"scalability\\\" is never defined. I would recommend defining it, or referencing a paper which defines it. I assume it refers to training throughput divided by ideal training throughput.\", \"Evaluation is one on resnet-50. Because it's mostly convolutions, such network has high compute/network ratio and is not frequently bottlenecked by network. A more convincing experiment would rely on lower computation intensity architecture such as Transformer/BERT training.\", \"Section 1 states that without their technique, they expect SGD to exhibit 0.008 scalability for 100 servers, compared to 0.72 for their method. However, the number 0.72 was not supported by data, their largest experiment used 16 servers.\"]}",
"{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper tackles the issue of scalability in distributed SGD over a high-latency network.\\nAlong with experiments, this paper contributes two ideas: delayed updates and temporally sparse updates.\\n\\n- strengths: \\n\\n\\u00b7 Clear-looking overall presentation.\\n Good efforts to explain the network problems (latency, congestion) and how they are tackled by delayed updates and temporally sparse updates.\\n\\n- weaknesses:\\n\\n\\u00b7 While the overall presentation looks clean, the paper does not talk about the setting (one parameter server and many workers, only workers, etc).\\n This is arguably basic information, and the readers is left to understand by themselves that the setting consists in many workers without a parameter server.\\n\\n\\u00b7 There is no theoretical analysis of the soundness of the proposed algorithm.\\n For instance in ASGD (that the authors cite), stale gradients are dampened before being used, which in turn is used to guarantee convergence.\\n In the proposed algorithm, there is no dampening nor apparent limit of the maximum value of t; such a difference with prior art should entail a serious (theoretical) analysis.\\n\\n\\u00b7 Finally, using SSGD for comparison is not very \\\"fair\\\", as communication-efficient algorithms have already been published for quite some time [1, 2, and follow-ups (e.g. searching for \\\"local sgd\\\")].\\n At the very least a comparison with ASGD (cited) is necessary, as in a realistic setting latency is indeed a problem but arguably not bandwidth\\n (plus, orthogonal gradient compression techniques do exist, e.g. as in \\\"Federated Learning: Strategies for Improving Communication Efficiency\\\").\", \"questions_to_the_authors\": \"- Can you clarify the setting?\\n\\n- Can you give at least an intuition why accepting stale gradient is correct (i.e. does not impede convergence)?\\n There is no theoretical limit on the value of t; can workers take any arbitrary old gradient?\\n So when the training is close to convergence (if it reaches it), i.e., when the norm of the gradients are close to 0, the algorithm could then use very old gradients (which norms could be orders of magnitude larger) to \\\"correct the mismatch\\\" caused by the local training; is this correct?\\n To solve that issue, ASGD introduces a simple dampening mechanism (which is necessary in the convergence proof), which your algorithm does not have.\\n\\n\\nThe related work section focuses on gradient compression techniques (which tackle low bandwidth, not latency) and asynchronous SGD (which is more prone to congestion, with a single parameter server),\\nbut seems to overlook that sparse communication techniques already exist (this fact should at least be mentioned).\\n\\nThe idea of sparse communication for SGD exists since at least 2012 [1, Algorithm 3].\\nA first mention of the use of such techniques for \\\"communication efficiency\\\" dates from (at least) 2015 [2].\\n\\n[1] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao.\\n Optimal distributed online prediction using mini-batches.\\n J. Mach. Learn. Res., 13(1):165\\u2013202, January 2012.\\n\\n[2] Sixin Zhang, Anna E. 
Choromanska, Yann LeCun.\\n Deep learning with Elastic Averaging SGD.\\n NeurIPS, 2015.\\n\\n\\nI do not see a clear novelty, nor a proof (or even intuitions) that the proposed algorithm is theoretically sound.\\nThe comparison with SSGD is arguably unfair, since SSGD is arguably not at all the state-of-the-art in the proposed setting (hence the claimed speedup of x90 can be very misleading).\"}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents an approach - Delayed and Temporally Sparse Update (DTS) - to do distributed training on nodes that are very distant from each other in terms of latency. The approach relies on delayed updates to tolerate latencies and temporally sparse updates to reduce traffic. The approach is implemented in a synchronous stochastic gradient decent (SGD) scheme.\\n\\nThe paper's approach with delayed updates, i.e., delaying gradient updates to other nodes, sends the updates in a later iteration. In this way, very long latencies can be hidden (covered by computation) since the gradient updates can be postponed to an arbitrary iteration (barrier synchronization) in the future. \\n\\nUnfortunately, delaying the updates opens up for the same problems as asynchronous SGD (ASGD), i.e., slower (none) convergence or staleness problems. These problems are coped with using a compensation factor, that adjusts the momentum and weights accordingly. \\n\\nA critical aspect of ASGD is to provide convergence guarantees, and DTS is similar in that aspect. However, the paper does not provide any convergence analysis or convergence guarantees. \\n\\nUsing only one dataset / network to evaluate the approach is too little. In order to show the generality, more benchmarks need to be evaluated.\\n\\nIn general, I like the paper. It is well written and easy to read. However, the novelty and contribution is relatively low. A lot of work has been done in the HPC and distributed systems communities over decades on how to tolerate latencies, trade-offs between communication and computation, proper synchronization points, etc. The ideas presented in this paper are well-known and extensively applied in those communities. None of that work is referensed or acknowledged in this paper. \\n\\nThe term \\\"scalability\\\" is used as an evaluation metric, but never clearly defined in the paper.\"}",
"{\"comment\": \"Very interesting work!\", \"a_few_questions\": \"1. Does this method require to remember t(delay) previous gradients to compute compensation error?\\n2. It looks like there is typo in equations (1),(2)...: it should be \\\"w - lambda*grad\\\" instead of \\\"w + lambda*grad\\\"?\\n3. Have you tried to apply this method to other tasks (NLP, speech...)?\", \"title\": \"Very interesting work!\"}"
]
} |
Sye_OgHFwH | Unrestricted Adversarial Examples via Semantic Manipulation | [
"Anand Bhattad",
"Min Jin Chong",
"Kaizhao Liang",
"Bo Li",
"D. A. Forsyth"
] | Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable against adversarial examples which are carefully crafted samples with a small magnitude of the perturbation. Such adversarial perturbations are usually restricted by bounding their $\mathcal{L}_p$ norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. In this paper, we instead introduce "unrestricted" perturbations that manipulate semantically meaningful image-based visual descriptors - color and texture - in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing and adversarially trained model. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large magnitude perturbations when compared to other attacks. | [
"Adversarial Examples",
"Semantic Manipulation",
"Image Colorization",
"Texture Transfer"
] | Accept (Poster) | https://openreview.net/pdf?id=Sye_OgHFwH | https://openreview.net/forum?id=Sye_OgHFwH | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"6MynmiUkxQ",
"rkgkAfhijS",
"BJgw06isjH",
"B1xcI6ojsB",
"rklbmtp6YH",
"BJlg1WSTtS",
"HJgAOwI-FB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748306,
1573794502682,
1573793231172,
1573793106505,
1571834137214,
1571799256348,
1571018614238
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2403/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2403/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2403/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2403/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2403/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2403/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Accept (Poster)\", \"comment\": \"In this paper, the authors present adversarial attacks by semantic manipulations, i.e., manipulating specific detectors that result in imperceptible changes in the picture, such as changing texture and color, but without affecting their naturalness. Moreover, these tasks are done on two large scale datasets (ImageNet and MSCOCO) and two visual tasks (classification and captioning). Finally, they also test their adversarial examples against a couple of defense mechanisms and how their transferability. Overall, all reviewers agreed this is an interesting work and well executed, complete with experiments and analyses. I agree with the reviewers in the assessment. I think this is an interesting study that moves us beyond restricted pixel perturbations and overall would be interesting to see what other detectors could be used to generate these type of semantic manipulations. I recommend acceptance of this paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your comments and interest in our work.\", \"q1\": \"Some readers may wonder how the \\\"averaged-case\\\" corruption robustness behaves for both cAdv and sAdv, e.g. considering random colorization. Would it be worse than the robustness of Gaussian noise?\", \"a1\": \"Thanks for the interesting question and it\\u2019s indeed reasonable to wonder if the classifier is robust against random colorization with the same level of corruption. As we know, classifiers are robust against small random corruptions, whereas large random corruptions might be able to change the classification results but not certainly. Furthermore, large/unbounded random corruptions, such as random colorization as mentioned, cannot generate semantically aligned images. Therefore, they can be easily spotted by humans as adversarial/abnormal. Our methods are exactly trying to resolve these problems by hiding unbounded adversarial patterns in plain sight.\\n\\nIn particular, some classifiers are not robust towards large random colorizations as seen from the hue and saturation attack [3]. Given sufficiently large perturbation, it is not surprising to see the labels changing. However, hue and saturation attacks give unrealistic images and are not able to do targeted attacks (1.2% on ImageNet). cAdv, on the other hand, has high attack success rates for both targeted and untargeted attacks while staying perceptually realistic. We will clarify this in our revision.\\n\\n[3] Hosseini, Hossein, and Radha Poovendran. \\\"Semantic adversarial examples.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018.\", \"q2\": \"One of my concerns on tAdv is whether the texture added is indeed effective to reduce the accuracy, or it\\u2019s just from the (yet small) beta term in the objective. Adding an ablation of beta=0 case in the result would much help the understanding of the method.\", \"a2\": \"This is a really interesting and important question w.r.t tAdv. We conducted additional experiments suggested as below. We did an ablation with tAdv varying both beta-term and alpha-term in Table 4 of our Appendix. From our study, it can be inferred that increasing beta does not result in an increase in attack success rate; with a very small beta, the attack is highly successful. However, we did include a row when beta is zero in the submitted version. Here we include the numbers when beta = 0 for two cases when tadv is optimized using LBFGS for one iteration of 14 steps (small texture flow) and 3 iterations of 14 steps (large texture flow) as described in our paper. Note that making beta = 0 does not guarantee that we will be able to reach the target as it is difficult to control texture transfer and stop when the target is reached as small change in texture can make images classify to arbitrary classes. 
It is also to be noted that beta=0 would be a black box attack as the attack does not have any knowledge about the target classifier.\\n\\nOne iteration of LBFGS with 14 steps of texture transfer\\n+--------------------------------------+-----------+-----------+-----------+--------------+\\n| | a = 250 | a = 500 | a = 750 | a = 1000 |\\n+--------------------------------------+-----------+-----------+-----------+--------------+\\n| Untargeted Attack Success | 25% | 24.5% | 25% | 24.5% |\\n+--------------------------------------+-----------+-----------+-----------+--------------+\\n| Target Attack Success | 0.5% | 0.5% | 0.5% | 0.5% |\\n+--------------------------------------+-----------+-----------+-----------+--------------+\\n\\nThree iterations of LBFGS with 14 steps of texture transfer (large texture flow transfer)\\n+--------------------------------------+-----------+-----------+-----------+-------------+\\n| | a = 250 | a = 500 | a = 750 | a = 1000 |\\n+--------------------------------------+-----------+-----------+-----------+-------------+\\n| Untargeted Attack Success | 37.5% | 40.5% | 41.5% | 42.5% |\\n+--------------------------------------+-----------+-----------+-----------+-------------+\\n| Target Attack Success | 0.5% | 0.0% | 0.0% | 0.0% |\\n+--------------------------------------+-----------+-----------+-----------+-------------+\\n\\nBoth our Table 4 and the above analysis guarantees that tAdv is indeed effective in improving the attack success rate. We will include this analysis in our paper\\u2019s supplementary.\", \"q3\": \"I think F should denote the classifier to attack, but the description tells it's the colorization network. As it seems to me that theta is nevertheless for the colorization network, I feel the notation should be refined for better understanding to the readers.\", \"a3\": \"Yes, that\\u2019s a typo and you are correct that F is indeed a classifier to attack. We have fixed this in the revision. Sorry about that and thanks for bringing this to our attention.\"}",
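To make the alpha/beta discussion above concrete, the following is a minimal sketch of one tAdv-style LBFGS round. The interfaces (`gram_loss`, the way `alpha` and `beta` weight the texture and cross-entropy terms, `steps=14`) are assumptions reconstructed from the rebuttal's description, not the authors' code; note how `beta = 0` removes all classifier feedback, matching the black-box remark above.

```python
import torch
import torch.nn.functional as F

def tadv_round(classifier, gram_loss, x_adv, victim_feats, target,
               alpha=250.0, beta=1e-3, steps=14):
    # Sketch of one LBFGS round of a tAdv-style texture attack (assumed
    # interfaces): gram_loss measures a Gram-matrix distance between the
    # adversarial image's features and the victim texture's features.
    x_adv = x_adv.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([x_adv], max_iter=steps)

    def closure():
        opt.zero_grad()
        loss = alpha * gram_loss(x_adv, victim_feats)    # texture transfer term
        if beta > 0:                                     # beta = 0: black-box
            loss = loss + beta * F.cross_entropy(classifier(x_adv), target)
        loss.backward()
        return loss

    opt.step(closure)
    return x_adv.detach()
```

Running this once corresponds to the "one iteration of 14 steps" setting in the tables above; calling it three times on the returned image would correspond to the large-texture-flow setting.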
"{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thanks for the interesting question and suggestions.\", \"q1\": \"A somewhat weakness is that the discriminator - a pre-trained ResNet 50 - is too weak for this scenario. What about a ResNet 50 trained on augmented datasets with color jittering? What about finetuned ResNet 50 with taking color channel as explicit input, since the attack uses this additional info.\", \"a1\": \"We conducted extra two sets of experiments as you described. First, we finetune a ResNet50 model for 35 epochs using:\\n\\na) data augmented with brightness, contrast, hue, and saturation jittering\", \"top1_accuracy\": \"76.206\\n\\nWe then apply cAdv to both models. We randomly select 60 images for 3 different classes and use targeted cAdv to attack the two models to two random different classes. The final accuracies of the two models on the cAdv images is as follows\\n\\nTop1 accuracy for a): 0.000%\\nTop1 accuracy for b): 0.000%\\n\\nEven when trained with data augmentation or taking in LAB colorspace, cAdv is able to easily attack the two models.\", \"q2\": \"As tAdv attack seems to manipulate the high-frequency texture of images, how about applying a Gaussian filter on the images and feed into the discriminator again? Is that attack still effective or not?\", \"a2\": \"Thanks again for the interesting question, and on the suggested experiments regarding tAdv: In Table 2, under Feature Squeezing, we reported scores for two types of Gaussian Filtering -- 2x2, 3x3. tAdv outperforms other baselines significantly and is a stronger attack on Gaussian Filtering. Our intuition for why tAdv is stronger despite having high-frequency perturbations is because of their structured pattern and are not local like most of the recent other attacks and are therefore able to pass through filtering operations. We will make this clear in our Table as well as the caption in our revision and sorry for the confusion.\"}",
"{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your comments and interest in our work.\", \"q1\": \"The paper overall is well-written and easy to follow. But I think the part of attacking for captioning is a bit distracted and there is no comparison with others on this task. I expect existing attacks on pixels can also do this task.\", \"a1\": \"We agree the captioning section was oddly arranged in our current submission. We will move the section towards the end of the paper to show it as another attack application.\\nThere are some pixel-level attacks against image captioning models [1,2] , but these captioning models are different so it is hard to directly compare. We will add corresponding discussion in our related work.\\n\\n\\n[1] Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, and Wei Liu. Exact adversarial\\nattack to image captioning via structured output learning with latent variables. In Proceedings of the IEEE\\nConference on Computer Vision and Pattern Recognition, pp. 4135\\u20134144, 2019.\\n[2] Chen, H., Zhang, H., Chen, P.Y., Yi, J. and Hsieh, C.J., 2017. Attacking visual language grounding with adversarial examples: A case study on neural image captioning. 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces two new adversarial attacks: one is generating adversarial examples by colouring the original images and the other is by changing textures of the original images. Specifically, the former one minimises the cross-entropy between the output of the classifier and the target label with the network weights of a pre-trained colourisation network. While the latter minimises the cross-entropy as well as the loss that defines the texture differences.\\n\\nI think the general idea of going beyond perturbations of pixel values in this paper is interesting and the proposed approaches of attacking on colour and textures are intuitive and reasonable. The results seem to be promising with comprehensive experiments including whitebox attack, blackbox attack by transferring, and attacks on defences.\\n\\nThe paper overall is well-written and easy to follow. But I think the part of attacking for captioning is a bit distracted and there is no comparison with others on this task. I expect existing attacks on pixel can also do this task.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed to generate semantically meaningful adversarial examples in terms of color of texture. In order to make manipulated images photo-realistic, colors to be replaced are chosen by energy values, while textures are replaced with style-transfer technique.\\n\\nThe paper is written clearly and organized well to understand. The graphs and equations are properly shown. The idea of using color replacement and texture transfer is interesting and novel.\\n\\nA somewhat weakness is that the discriminator - a pretrained ResNet 50 - is too weak for this scenario. What about a ResNet 50 trained on augmented datasets with color jittering?\\nWhat about finetuned ResNet 50 with taking color channel as explicit input, since the attack uses this additional info.\\n\\nAs tAdv attack seems to manipulate high frequency texture of images, how about applying a Gaussian filter on the images and feed into the discrimimator again? Is that attack still effective or not?\"}",
"{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The paper proposes cAdv and sAdv, two new unrestricted adversarial attack methods that manipulates either color or texture of an image. To these end, the paper employes another parametrized colorization techniques (and texture transfer method) and proposes optimization objectives for finding adversarial examples with respect to each semantic technique. Experimental results show that the proposed methods are more robust on existing defense methods and more transferrable accross models. The paper also performs a user study to show that the generated examples are fairly imperceptible like the C&W attack.\", \"In overall, I agree that seeking a new way of attack is important, and the methods are clearly presented to claim a new message to the community: adversarial examples can be even found by exploiting semantic features that humans also utilize, since DNNs tend to overly-utilize them, e.g. colors. These claims are supported by the experiments showing that the generated examples are more transferrable across robust classifiers. Personally, I liked the idea of using another colorization method to design cAdv and the use of K-means clustering to control the imperceptibility.\", \"Some readers may wonder how the \\\"averaged-case\\\" corruption robustness behave for both cAdv and sAdv, e.g. considering random colorization. Would it be worse than the robustness on Gaussian noise?\", \"One of my concerns on tAdv is whether the texture added is indeed effective to reduce the accuracy, or its just from the (yet small) beta term in the objective. Adding an ablation of beta=0 case in the result would much help the understanding of the method.\", \"Eq 1: I think F should denote the classifier to attack, but the description tells it's the colorization network. As it seems to me that theta is nevertheless for the colorization network, I feel the notation should be refined for better understanding to the readers.\"]}"
]
} |
HJxDugSFDB | Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model | [
"Alex X. Lee",
"Anusha Nagabandi",
"Pieter Abbeel",
"Sergey Levine"
] | Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these kinds of observation spaces present a number of challenges in practice, since the policy must now solve two problems: a representation learning problem, and a task learning problem. In this paper, we aim to explicitly learn representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC learns a compact latent representation space using a stochastic sequential latent variable model, and then learns a critic model within this latent space. By learning a critic within a compact state space, SLAC can learn much more efficiently than standard RL methods. The proposed model improves performance substantially over alternative representations as well, such as variational autoencoders. In fact, our experimental evaluation demonstrates that the sample efficiency of our resulting method is comparable to that of model-based RL methods that directly use a similar type of model for control. Furthermore, our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website. | [
"slac",
"stochastic latent",
"deep reinforcement",
"model",
"sample efficiency",
"algorithms",
"deep networks"
] | Reject | https://openreview.net/pdf?id=HJxDugSFDB | https://openreview.net/forum?id=HJxDugSFDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"HNF76UX_5a",
"HJe0GqrssH",
"rylNAPAqsH",
"SJg3cwC9sS",
"B1l4nYmjqH",
"SJe5StvycH",
"BJgddhA3YH"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748273,
1573767702373,
1573738444455,
1573738388181,
1572710827644,
1571940673680,
1571773551556
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2402/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2402/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2402/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2402/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2402/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2402/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"An actor-critic method is introduced that explicitly aims to learn a good representation using a stochastic latent variable model. There is disagreement among the reviewers regarding the significance of this paper. Two of the three reviewers argue that several strong claims made in the paper that are not properly backed up by evidence. In particular, it is not sufficiently clear to what degree the shown performance improvement is due to the stochastic nature of the model used, one of the key points of the paper. I recommend that the authors provide more empirical evidence to back up their claims and then resubmit.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Author Reply for Official Blind Review #2\", \"comment\": \"We thank the reviewer for the comments and feedback. We have revised the paper to address the points below.\\n\\n- '\\\"contrary to the conclusions in prior work (Hafner et al., 2019; Buesing et al., 2018), the fully stochastic model performs on par or better.\\\" Why?'\\nWe revised the results from Figure 6 and the text in Section 7.2 to adjust the claims about the importance of various design decisions (the fully and partially stochastic models both perform equally well). We hypothesize that these prior works benefit from the deterministic paths (realized as an LSTM or GRU) because they use multi-step samples from the prior. In contrast, our method uses samples from the posterior, which are conditioned on same-step observations, and thus they are less sensitive to the propagation of the latent states through time. We revised the text in Section 7.2. to include this reasoning.\\n\\n- \\\"explain the differences and understand the tradeoffs\\\" between PlaNet and SLAC.\\nThe similarity between PlaNet (and other model-based methods) and SLAC is that they both learn a latent variable model, while the main difference among them is whether the method is model-based or model-free. We revised the second paragraph of Section 2 to expand on the differences and tradeoffs between this type of methods and ours.\\n\\n- \\\"Figure 6 partially stochastic in figure, mixed in text.\\\"\\nWe revised the paper to consistently refer to this variant as \\\"partially stochastic\\\".\"}",
"{\"title\": \"Author Reply for Official Blind Review #3\", \"comment\": \"We thank the reviewer for the constructive feedback. In this response, we clarify the novelty of our method and the distinction with prior work, and we emphasize that the derivation of our method is novel and sound (we edited the paper to make this more clear). We also revised our paper to include additional references to prior work. Please see the answers below for clarifications.\\n\\n- \\\"method itself is incremental\\\"\\nTo the best of our knowledge, our work is the first model-free RL method for POMDPs that shows that the critic can directly be conditioned on individual latent states sampled from a stochastic model. This realization is non-trivial and novel, and our paper provides justification for it (we revised the paper to include a more detailed derivation in Appendix A of the revised paper). In contrast, prior model-free RL approaches convert the POMDP into an MDP by redefining the state space, and then performing RL in the converted MDP (e.g. in the space of learned belief representations or the history of observations and actions). An interesting finding of our derivation is that we can just sample z_t and z_{t+1} from the posterior and use those samples for the backup, instead of performing probabilistic filtering of the latent belief.\\n\\n- \\\"benefit of the method is rather from such particular latent space design rather than the stochastic vs deterministic\\\".\\nThe deterministic model has the same latent space factorization as our model, thus controlling for the latent space design. We revised the text in Section 7.2 to clarify this. The results from Figure 6 indicate that although the particular factorization provided benefits (fully/partially stochastic outperforms simple filtering), the stochasticity of the model also contributed to the benefits (fully/partially stochastic outperforms deterministic).\\n\\n- \\\"this work can be seen as complementary to many related works such as Igl 18\\\".\\nAs noted above regarding novelty, prior works perform RL on a belief representation, whereas our work shows that it is possible to train a critic directly on latent states. In the case of Igl 18, they use particle filtering to propagate the belief forward, and then encode the particles into a belief representation, which is then used for the actor and the critic. In addition, we focus on tasks with high-dimensional image observations for complex underlying continuous control tasks, in contrast to Igl 18, which evaluates on Atari tasks and low-dimensional continuous control tasks that emphasize partial observability and knowledge-gathering actions.\\n\\n- Related works.\\nWe updated Section 2 to include additional references of prior work that studies stochastic sequential models.\\n\\n- \\\"the experiments may be unfair, because, another partially stochastic method can easily utilize such design and further improve the performance\\\".\\nWe believe the comparison is fair since each algorithm uses a model that was chosen to work well with its particular algorithm, e.g. the PlaNet model was likely chosen for the quality of its future reward predictions, whereas our model was chosen for the quality of its representations.\\n\\n- Motivation of factorization of our latent variable.\\nWe draw motivation from recent success of autoregressive latent variables in VAEs (VQ-VAE2, Razavi et al. 2019; BIVA, Maaloe et al. 2019). 
This factorization results in latent distributions that are more expressive, and it allows for some parts of the prior and posterior distributions to be shared. We added this at the beginning of Section 6 of the revised paper.\"}",
"{\"title\": \"Author Reply for Official Blind Review #4\", \"comment\": \"We thank the reviewer for the detailed comments and feedback. In this response, we clarify the novelty of our method and the distinction with prior work, and we emphasize that the derivation of our method is novel and sound (we edited the paper to make this more clear). We also revised our paper to include additional references to prior work. Please see the answers below for clarifications.\\n\\n- Novelty.\\nTo the best of our knowledge, our work is the first model-free RL method for POMDPs that shows that the critic can directly be conditioned on individual latent states sampled from a stochastic model. This realization is non-trivial and novel, and our paper provides justification for it (see next paragraph). In contrast, prior model-free RL approaches convert the POMDP into an MDP by redefining the state space, and then performing RL in the converted MDP (e.g. in the space of learned belief representations or the history of observations and actions). An interesting finding of our derivation is that we can just sample z_t and z_{t+1} from the posterior and use those samples for the backup, instead of performing probabilistic filtering of the latent belief.\\n\\n- Learning the critic in the latent space \\\"without theoretical justifications\\\", \\\"lack of justifications of why equation 10 is even the right objective\\\".\\nWe revised the paper to emphasize that this choice follows from the ELBO (Section 5, first paragraph) and to include a more detailed derivation of the ELBO and the Bellman backup for POMDPs (Appendix A in the revised paper). The choice of learning the critic in the latent space follows from approximately maximizing the ELBO of the log-likelihood of the past observations and future optimality variables. The model loss (Eq. 9) corresponds to the first part of the ELBO and the policy loss (Eq. 11) corresponds to the second part of the ELBO. The Bellman residual (Eq. 10) is an approximation to the Bellman backups of the Q-function (Appendix A in the revised paper provides justification for the latent-space Bellman backup).\\n\\n- Related works.\\nWe updated Section 2 to include additional references of prior work that studies representation learning in RL.\\n\\n- \\\"include analysis of the proposed model with different latent variable models (including VAE)\\\".\\nWe revised the results from Figure 6, in which we compare our algorithm with various versions of the model (including VAE), and we revised the text in Section 7.2 to adjust the claims about the importance of various design decisions accordingly. We found that our fully stochastic model and the partially stochastic model both perform equally well. The performance was worse when using a model without the 2-level factorization, and significantly worse when using a standard per-time-step VAE with no temporal dependencies. The primary purpose of this ablation experiment is to analyze which components of SLAC\\u2019s latent variable model were important for its performance.\\n\\n- \\\"Are these all image based benchmarks too\\\" (for OpenAI gym tasks).\\nAll of these results are on image-based benchmarks unless it's labeled as \\\"state\\\" (the only one being \\\"SAC (state)\\\" in Figure 4). We revised Figure 4 and 5 with longer learning curves and with captions that specify that the experiments are from images (unless otherwise noted in Figure 4). 
We also now include the asymptotic performance of MPO, which is competitive to our approach for some tasks, though the sample efficiency is not.\\n\\n- \\\"Why are the baselines performing so poorly in the results?\\\" (for OpenAI gym tasks).\\nPrior papers do not report results for off-policy model-free RL algorithms from images on MuJoCo gym tasks, and very few prior works report image-based results on DeepMind control continuous tasks. We made our best effort to choose good hyperparameters for prior methods on these tasks, and we believe that these results reflect their actual performance. Image-based control on these tasks is quite difficult.\"}",
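The latent-space Bellman residual discussed in the response above - a critic conditioned directly on posterior samples z_t and z_{t+1} rather than on a belief representation - can be sketched in the style of a soft actor-critic update. The interfaces below are hypothetical, and the sketch only mirrors the paper's Eq. 10 in spirit.

```python
import torch
import torch.nn.functional as F

def latent_bellman_loss(q_net, q_target, policy, z_t, a_t, r_t, z_t1, done,
                        alpha=0.2, gamma=0.99):
    # Soft Bellman residual on latent samples (assumed interfaces): z_t and
    # z_t1 are single samples from the learned posterior, and the critic is
    # trained directly on them instead of on a filtered belief state.
    with torch.no_grad():
        a_t1, logp = policy.sample(z_t1)              # next action and log-prob
        soft_v = q_target(z_t1, a_t1) - alpha * logp  # entropy-regularized value
        y = r_t + gamma * (1.0 - done) * soft_v       # soft Bellman backup
    return F.mse_loss(q_net(z_t, a_t), y)
```

The notable point, per the response, is that no probabilistic filtering of the latent belief is needed: single posterior samples suffice for the backup because they are conditioned on same-step observations.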
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper proposes an actor-critic method that tries to aid learning good policies via learning a good representation of the state space (via a latent variable model). In actor-critic methods, the critic is learnt to evaluate policies in the latent space, which further helps with efficient policy optimization. The proposed method is evaluated on image-based control tasks, with baseline evaluations against both model-based and model-free methods in terms of sample efficiency.\", \"The key argument is that learning policies in the latent space is more efficient, as it is possible to learn good representations in the latent space. There are quite a few recent works (e.g DeepMDP, Gelada et al., 2019; Dadashi et al., 2019) that talks about representation learning in RL, and yet the paper makes no relations or references to previous works. I find it surprising that none of the past related works are mentioned in the paper.\", \"I find the arguments on solving a POMDP instead of a MDP a bit vague in this context. I understand that the goal is to solve image based control tasks - for which learning good representations via a latent variable model might be useful, but it does not explicitly require references to a POMDP? In most ALE tasks, we have pixel based observations too, which makes the ALE environments a POMDP in some sense, but we use approximations to it to make it equivalent to a MDP with sufficient history. The arguments on POMDP seems rather an additional mention, with no necessary significance to it?\", \"The paper mentions solving RL in the learned latent space, which is empirically proposed to be a good approach without theoretical justifications. There are several recent works that tries to understand the representation learning in RL problem from a theoretical perspective too - it would be useful to see where this approach stands in light of those theoretical results? Otherwise, the contribution seems rather limited : solving RL in latent space is useful, but there are no justifications to it? Why should this approach even be adapted or what is the significance of it?\", \"The proposed actor-critic method in the latent space is built on top of Soft Actor-Critic (SAC). I understand this is a design/implementation approach building from previous works - but it would have been useful to add more context as to what it means to learn a critic in the latent space. If the critic evaluates a policy in the latent space - then is this a good policy evaluation for actor-critic itself? Why or why not? I do not understand why the critic evaluation in the latent space is even a good approach?\", \"My first impression was that the paper proposes a separate auxilliary objective for learning good representations based on which actor-critic algorithms can be made more efficient. However, this does not seem to be the case directly? Following on previous point - I find the argument of solving a critic in the latent space rather vague.\", \"The sequential latent variable model proposed is based on existing literature. 
This can be any latent variable model (e.g. VAEs), but I understand, as mentioned in the paper, the design choice of using sequential models to capture the temporal aspect.\", \"The proposed algorithm is in fact a combination of SAC and sequential latent variable models, both of which are well known in the literature. The SLAC algorithm combines these to solve image-based control tasks. As per equation 10, which is the regular policy optimization objective with max entropy - the only difference is that the critic is evaluated in the latent space. This appears to me as more of an engineering choice, and experimentally one that perhaps gives good results - but the lack of justification of why equation 10 is even the right objective to solve makes the paper rather less appealing.\", \"I think overall the contribution of the paper is rather limited. It is more of an experimental design and engineering approach that combines previously known techniques. The paper mentions learning good representations for RL, without any references or justifications - and it appears that overall there are bold claims made in the paper, but it lacks significant scientific contribution.\", \"Experimental evaluations are made on image-based control tasks. Experimental results are compared to a few baselines - but it is not clear whether these are even the right baselines. For example, it would have been good to include an analysis of the proposed model with different latent variable models (including a VAE) to perhaps justify the choice of the latent variable model. Results in figure 5 appear a bit concerning to me - these are mostly the standard MuJoCo tasks from the OpenAI suite. Are these all image-based benchmarks too, or the standard baselines? It is not clear from the text. Assuming they are standard baselines, the comparisons made are rather unfair (for example: SAC and MDP perform much better on tasks like HalfCheetah-v2). Why are the baselines performing so poorly in the results?\", \"Overall, I think the paper needs more work in terms of writing and justifying the choice of the approach. There are significant references missing in the paper. Most importantly, there are quite a few claims made in the paper which are not properly justified, which makes the overall contribution and novelty of the paper rather limited. I would tend towards rejection of this paper, as it requires more work - both in terms of theoretical justifications (including references) and experimental ablation studies and simpler benchmarks explaining the choice of the approach.\"]}",
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose SLAC, an important extension of the recently introduced soft actor critic (SAC) algorthm, which operates on a learned latent state, rather than an observed one, and therefore aims to jointly learn to represent high dimensional inputs and execute continuous control based on this representation.\\n\\nSLAC is based on a dynamic, non-linear Markov generative model and incorporates structured variational inference to learn the latent state distribution. The generative model and amortized variational inference support the learning of strong latent expected future reward estimates (Q functions that condition on the latent state), which the policy, which conditions directly on the observations (i.e. image) is distilled against for fast inference. The paper demonstrates solid gains over existing techniques, brings together recent work under a rigorous framework, and is a pleasure to read.\", \"strengths\": \"-Novel formulation, SOTA results, well written.\", \"limitations\": \"-While the most important ablation, the role of making the primary latent variable stochastic, is investigated, a deeper investigation of what makes the model more effective than existing techniques would be insightful, and further strengthen the paper.\\n-Related, the approach seems closest to PlaNet in structure, but rather than being used for planning, is executed directly as an off-policy actor-critic algorithm, generalizing SAC. A discussion, and possibly some additional experiments to explain the differences and understand the tradeoffs would strengthen the paper. The authors mention \\\", contrary to the conclusions in prior work (Hafner et al., 2019; Buesing et al., 2018), the fully stochastic model performs on par or better.\\\" Why?\", \"minor\": \"-Figure 6 partially stochastic in figure, mixed in text.\", \"overall\": \"A strong paper, that brings together and generalizes existing work, with strong experimentation and SOTA results. Definite accept.\"}",
"{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work proposed a fully stochastic RL method and demonstrated significantly improved performance on multiple tasks.\", \"pros\": \"1. The presentation is very clear and easy to read. \\n2. The proposed method is sensible \\n3, The experimental evaluation shows great practical gain\", \"cons\": \"1. The method itself is incremental. As discussed in the related work, this work can be seen as complementary to many related works such as Igl 18, but the novelty of the idea is rather limited. \\n2. The claims and the real-benefit of the method may not be consistent. (My biggest concern)\\nThe paper claims that full stochasticity contributed to the practical gain but in the experiment Figure 6, we can see the simple filtering does not perform well. \\nIt seems that the benefit of the method is rather from such particular latent space design rather than the stochastic vs deterministic. \\n3. Continue with the previous point, Figure 2 is not very well motivated and I believe that from Figure 1 to figure 2 design was the most important part of the performance gain. Such important designed was very briefly described without any motivation. \\n4. With the previous point, the experiments may be unfair, because, another partially stochastic method can easily utilize such design and further improve the performance. \\n5. The related work should add a discussion about stochastic sequential models such as Kalman VAE etc. paragraph 3 motivates your contribution as VAE does not model sequential information. But there are many works such as the KVAE that are stochastic and models sequential information.\"}"
]
} |
BJlPOlBKDB | Closed loop deep Bayesian inversion: Uncertainty driven acquisition for fast MRI | [
"Thomas Sanchez",
"Igor Krawczuk",
"Zhaodong Sun",
"Volkan Cevher"
] | This work proposes a closed-loop, uncertainty-driven adaptive sampling framework (CLUDAS) for accelerating magnetic resonance imaging (MRI) via deep Bayesian inversion. By closed-loop, we mean that our samples adapt in real-time to the incoming data. To our knowledge, we demonstrate the first generative adversarial network (GAN) based framework for posterior estimation over a continuum of sampling rates of an inverse problem. We use this estimator to drive the sampling for accelerated MRI. Our numerical evidence demonstrates that the variance estimate strongly correlates with the expected MSE improvement for different acceleration rates even with few posterior samples. Moreover, the resulting masks bring improvements over the state-of-the-art fixed and active mask design approaches across MSE, posterior variance and SSIM on real undersampled MRI scans. | [
"Deep Bayesian Inversion",
"accelerated MRI",
"uncertainty quantification",
"sampling mask design"
] | Reject | https://openreview.net/pdf?id=BJlPOlBKDB | https://openreview.net/forum?id=BJlPOlBKDB | ICLR.cc/2020/Conference | 2020 | {
"note_id": [
"fUcThWQGyg",
"ryxU31W_sr",
"SygQN1buir",
"HkxxWCluoH",
"Hyx1naedjH",
"HklOragdjr",
"SJxUphe_oS",
"SJxsl2aa9r",
"BylJ7Uoh9r",
"HkxcSGnntB",
"rJx9Am1OYS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1576798748242,
1573552046008,
1573551915347,
1573551608459,
1573551526561,
1573551424189,
1573551293674,
1572883443312,
1572808214653,
1571762754270,
1571447761650
],
"note_signatures": [
[
"ICLR.cc/2020/Conference/Program_Chairs"
],
[
"ICLR.cc/2020/Conference/Paper2401/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2401/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2401/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2401/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2401/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2401/Authors"
],
[
"ICLR.cc/2020/Conference/Paper2401/AnonReviewer4"
],
[
"ICLR.cc/2020/Conference/Paper2401/AnonReviewer2"
],
[
"ICLR.cc/2020/Conference/Paper2401/AnonReviewer1"
],
[
"ICLR.cc/2020/Conference/Paper2401/AnonReviewer3"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"comment\": \"The author responses and notes to the AC are acknowledged. A fourth review was requested because this seemed like a tricky paper to review, given both the technical contribution and the application area. Overall, the reviewers were all in agreement in terms of score that the paper was just below borderline for acceptance. They found that the methodology seemed sensible and the application potentially impactful. However, a common thread was that the paper was hard to follow for non-experts on MRI and the reviewers weren't entirely convinced by the experiments (asking for additional experiments and comparison to Zhang et al.). The authors comment on the challenge of implementing Zhang is acknowledged and it's unfortunate that cluster issues prevented additional experimental results. While ICLR certainly accepts application papers and particularly ones with interesting technical contribution in machine learning, given that the reviewers struggled to follow the paper through the application specific language it does seem like this isn't the right venue for the paper as written. Thus the recommendation is to reject. Perhaps a more application specific venue would be a better fit for this work. Otherwise, making the paper more accessible to the ML audience and providing experiments to justify the methodology beyond the application would make the paper much stronger.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Our approach does not make any Gaussian distribution assumption.\", \"comment\": \"First of all, thank you very much for the exhaustive review and the many valuable comments. We will answer each major comment separately.\\n\\n*Summary:* \\n- Empirical means and variances via posterior samples do not place assumptions on posterior\\n- We do not think our model has problems with mode collapse and will add experiments to verify this\\n- Since we do not have place assumptions, analytic computations of Fourier mean and std isn\\u2019t possible analytically.\\n- On replication, generalization and dataset concerns, see general comment.\\n- Cartesian and radial sampling both have tradeoffs, we will clear this up together with improving definition of MRI technical terms \\n\\n*Details:*\\n(1) For the first comment, we respectfully disagree that taking the empirical mean and variances implicitly assume a Gaussian distribution. The empirical mean and variance are simply statistics used to summarize the posterior distribution and do not place any distributional assumption on the posterior. This is a critical reason that distinguishes our contribution from the modelling of Zhang et al., where a Gaussian distribution is explicitly with diagonal covariance is assumed.\\nComments 1b) and 1c) are related to this: our mean and variance represent just this, a mean and a variance (i.e. variability) of a posterior distribution, which will represent all possible x which could have led to observation y. If there are multiple modes (meaning source images) in the distribution, this is the model working as intended: it correctly captures the uncertainty and guides acquisition to its sources. Once there is enough information, we actually want the reconstruction variability to collapse to the ground truth, the image that should have generated the observed data.\\nOf course mode collapse can still be problematic for our approach if it occurs in the sense that measurements of different ground truths get reconstructed as the same standard image, or if the reconstructions are always simpler versions without important diagnostic features, or the model becomes overconfident too quickly. We do not think this is a problem in our model. WGANs tend to be more robust to mode collapse than all other types of GANS not designed specifically to avoid it (see e.g. the study in [1], which shows WGANs struggle with generation quality much more than with mode collapse). Both the standard reconstruction and simpler version mode collapse would be detected with the error metrics (MSE, PSNR, SSIM\\u2026.) and by visual inspection. We will perform further experiments to show that our model does not get confident too quickly and does indeed capture all modes of the distribution , see also the response to comment 2c) and the general comment.\\nFinally, we respectfully disagree with 1d), as we have a general distribution at hand, we cannot analytically compute the transformation of its variance in Fourier. If it were Gaussian, we could have an analytical model of the dense covariance in the Fourier space and could sample from it directly. We will provide an experiment where we train our network to output a mean and variance and to minimize a Gaussian negative log-likelihood as suggested by reviewer 1. \\n\\n(2) Regarding the major comment 2), we will retrain our model on the publically available FastMRI dataset, with a procedure as close as possible to the one of Zhang et al. (cf. the general comment on this topic). 
We also refer you to the general comment on additional experiments for your comment 2c). \\n\\n(3) With respect to fast MRI, the title could be changed to accelerated MRI: the acceleration here originates from not acquiring entire lines in Fourier space. For Cartesian sampling, the acceleration obtained is linear: half of the lines not being acquired means that the scanning time is reduced by a factor of 2. Cartesian sampling also has the advantage of being a fairly robust scanning trajectory that is less susceptible to artefacts than, for instance, radial or spiral trajectories; consequently, it is the most widely used in the clinical setting, which is why we focused on it in this work. We will incorporate these explanations into the paper and will also discuss the inference time in more detail, adding a paragraph on the subject to our discussion. \\n\\n(4) Finally, we will try to thoroughly define the important MRI terminology in a notation and definitions section to clarify the presentation, and abstain from using MRI-specific technical terms when possible.\\n\\n[1] Lala, Sayeri, et al. \\\"Evaluation of mode collapse in generative adversarial networks.\\\" High Performance Extreme Computing, IEEE (2018).\"}",
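As a minimal sketch of the statistics defended in this response (illustrative only; `generator` stands in for the trained conditional GAN and is not the authors' code), the empirical mean and variance are computed directly from posterior samples, and the same samples can be transformed to obtain k-space statistics without any distributional assumption:

```python
import numpy as np

def posterior_stats(generator, y, n_s=10):
    # Draw n_s reconstructions x_i ~ p(x | y) from the conditional generator;
    # mean/variance below are summary statistics only -- no Gaussian
    # assumption is placed on the posterior itself.
    samples = np.stack([generator(y) for _ in range(n_s)])      # (n_s, H, W)
    mean, var = samples.mean(axis=0), samples.var(axis=0)
    # Because we hold samples (not just moments), the same statistics can be
    # computed in k-space by transforming each sample first.
    k_samples = np.fft.fft2(samples, axes=(-2, -1))
    k_var = k_samples.var(axis=0)   # per-frequency variance of complex samples
    return mean, var, k_var
```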
"{\"title\": \"Some clarification is needed.\", \"comment\": \"Thank you for your comments and suggestions.\", \"summary\": [\"We have some clarifying questions, GAN concerns and link to classifier calibration might not be applicable?\", \"Our model is a strong contender to be used in a real time fashion\", \"*Details:*\", \"Regarding the mentioned flaw in Adler\\u2019s paper, we are not sure to understand the argument of the reviewer, and we would like to ask you if you could make some points clearer - we note in passing that we are quite familiar with the GAN literature:\", \"We are not sure what the reviewer refers to by \\u201clack of an encoder\\u201d when mentioning that \\u201cMode collapse is an optimization problem, where the training set contains variability but the generator fails to learn it due to the lack of an encoder.\\u201d\", \"It is also unclear why \\u201cy the target empirical posterior distribution is a Dirac.\\u201d For us, the fact that we solve an inverse problem means a) we are interested in estimating x given y, not the inverse. The fact that we observe only a subset of fourier space means that there are inherently several possible x that could have generated the observed y on the manifold of data.\", \"We are also unsure whether we understand the meaning of the statement: \\u201cDue to the known issues with variance estimation, having p(y | x) as a density instead of a Dirac could very well change the behavior of the generator.\\u201d Does the reviewer mean to criticize our noiseless setting, i.e. Y=FX+e, with e=0? If yes, we plan to address this by performing an experiment in which we add gaussian noise to the observation process.\", \"As a final question, we do not quite understand the link the reviewer draws to classification calibration and uncertainty estimation via sampling from a posterior. Calibration refers to a classifier reporting meaningful confidence estimates when making predictions. We do *not* train to predict a specific variance, instead we sample from the posterior p(x|y) and obtain a distribution over images. We merely chose to *report* a variance as a statistic which captures uncertainty. If the generator captures the posterior distribution well (measurable by reconstruction error and absence of mode collapse, then the quality of the variance estimate should follow. As stated in the general comment, we plan to add experiments to the appendix which show a) that no mode collapse occurs and b) the variance behaves as expected. We will also change the language to emphasize the fact we do not estimate an uncertainty or confidence directly.\", \"We appreciate your point on the term \\u201creal time\\u201d, but still think there is value in exploring methods which - if sped up sufficiently - can in principle support real time adaptive sampling. Our method requires roughly 8 ms to yield an estimation for a single data point averaged on two samples (which can be already reduced to 4ms by processing each sample in parallel). This is already close to the readout speed, in the order of milliseconds. Since our model does not rely on any ground truth or distributional assumptions to yield uncertainty estimates, and already requires roughly 4 ms to yield an estimation for a single data point, we consider it as a good adaptive sampling candidate, especially considering the increased industry focus on hardware acceleration of deep neural networks. We will add further timing data and a discussion of real time requirements to the appendix.\"]}",
"{\"title\": \"Reconstruction and bayesian optimization/RL/active learning are different tasks.\", \"comment\": \"Thank you for your comments and suggestions.\\n\\n*Summary: *\\n- Reconstruction and bayesian optimization/RL/active learning are different tasks\\n- For replication, generalization and dataset concerns, see general comments\\n- Contribution remains the same, other generative approaches have not yet been shown and we expect them to struggle with the high dimensionality\\n\\n*Details:*\\nRegarding the common on using uncertainty to drive optimization as in BO, there is a subtle distinction here. Our main contribution is a model which allows sampling from the posterior p(x|y), which is we use to perform reconstruction with pixel granular uncertainty quantification. This function can then be used to to drive adaptive sampling, but it is not a direct optimization task on any metric of reconstruction quality (SSIM, PSNR etc.) as would be required in order to apply BO, RL or active learning. Our CLOMDAS and CLUDAS algorithms represent just a simple approach that one would take in acquiring samples once one has access to a function which quantifies uncertainty of reconstructed regions (always sampling the most uncertain regions next), with CLUDAS being usable on real tasks (this is not the case of CLOMDAS). More elaborate versions built on this model could be imagined and in fact we are pursuing research in this direction.\\n\\nRegarding your first comment, we agree that a comparison with Zhang et al. would be critical, but their code is not available, even upon request. We will reuse the same data and reproduce as closely as possible their data processing pipeline, and we refer you to the general comment on the topic for more details.\\n\\nWe see the suggestions from your second comment as valuable potential research directions, however the main contribution of our method remains the demonstration that using a generative approach is possible to model a closed-loop adaptive sampling mechanism. We expect that Hamiltonian MCMC will struggle with the high dimensionality , as MCMC are already prone to slow convergence. \\n\\nRegarding your final comment, we actually have provided error bars in the results of Figures 6 and 7 in the appendix (they are simply very small), but we will also add them to Table 1 for completeness.\"}",
"{\"title\": \"Our contribution is about a closed-loop mask design system for MRI.\", \"comment\": \"Thank you for your comments, corrections and suggestions.\\n\\n*Summary:*\\n- Adler & \\u00d6ktem have a fundamentally different task: sampling the full image space with low SNR vs. subsampling frequencies directly\\n- MRI technical terms will be clarified/avoided when possible ; presentation will be updated and suggested experiments will be incorporated\\n\\n*Details:*\\nWe would like to highlight that our contribution lies mainly in the demonstration of a closed-loop system that can adaptively design sampling masks for MRI without requiring access to a ground truth (CLUDAS). Adler and \\u00d6ktem did not at all consider the problem of sampling mask design, as the problem of acceleration by choosing few locations where data are acquired is not present in CT. Rather, the challenge in CT is linked to obtain images with the lowest dose of radiation (i.e. data is sampled everywhere, but with low SNR). We also successfully demonstrate that the application of their method to MRI can be efficiently leveraged to provide an alternative to optimizing for MSE when designing sampling masks. \\n\\nThe updated version that we will submit by the end of the week should contained a clarified presentation, and a more rigorous introduction of MRI-specific jargon. We will also try to incorporate your suggestion of using a Unet-like mode with a Gaussian observation model, and will also perform experiments on the DICOM dataset used by Zhang et al. However, as mentioned in the general comment, we cannot compare our results with Zhang et al. directly due to their code not being available, even upon request.\"}",
"{\"title\": \"Proper comparison with Zhang et al. (2019) not possible since their code is not available\", \"comment\": [\"Summary:\", \"The code of Zhang et al. is not publically available, even upon request.\", \"Their setup is also not very realistic since it uses magnitudes only, discarding all phase information\", \"We will nonetheless try to replicate their setting as close as possible\", \"We will also add baselines and an experiment that shows mode collapse and generalization are not an issue\"], \"details\": \"The reviewers have pointed out the necessity of a comparison with the article of Zhang et al. (2019), with which our works bears similarities.\\n\\nWe previously reached out to the authors, and sadly, their code is not publically available. However, Zhang et al. (2019) described that they used the DICOM files of fastMRI as the basis for their training and evaluation. We would like to draw the reviewers\\u2019 attention towards the facts\\n1. That the DICOM images used by Zhang et al. are only magnitude images, and discard all phase information. This introduces a Hermitian symmetry in Fourier space, and consequently, the sampling must be done symmetrically around the center in Fourier space (cf. section 2 in their supplementary material). \\n2. That the resizing of images that they performed also changes the distribution of Fourier space in an unpredictable fashion. \\n\\n\\n**We will nonetheless retrain our model on this larger scale dataset following the methodology proposed in Zhang et al. (2019), resizing the images to 128x128, selecting the close-to-central images from each volume and normalizing the image with respect to the whole volume. We will however keep working with the complex data. **\\n\\nThese two modelling steps taken by Zhang et al. introduces additional unrealistic assumptions, which could limit the applicability of their method in real life applications. In addition to this, as discussed in the paper, their assumption of a Gaussian distribution with diagonal covariance is not a realistic assumption.\"}",
"{\"title\": \"General comment\", \"comment\": \"First of all, thank you for the very detailed feedback provided.\\n\\nWe identified two main axes of improvement for our manuscript, which we believe we can address and include in the paper by the end of the week.\\n\\n1. Improved clarity of presentation: We will reduce the MRI-specific language. We will delineate more clearly the theory section by breaking it into a) Notation and problem setting b) Background c) Methodology. \\n\\n2. Additional experimental validation: We will provide experimental results on the fastMRI dataset [1], investigate potential mode collapse issues, clarify the training procedure, and provide a baseline in the like of a network trained to approximate a Gaussian negative log likelihood. The reviewers also unanimously asked for comparison with Zhang et al., which is not possible, due to their code not being publicly available - but we believe that the results on the fastMRI dataset should address these concerns. We also refer the reviewers to the additional comment on that matter. \\n\\n[1] Zbontar, Jure, et al. \\\"fastmri: An open dataset and benchmarks for accelerated mri.\\\" arXiv preprint arXiv:1811.08839 (2018).\"}",
"{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper describes a method for accelerating MRI scans by proposing lines in k-space to acquire next. The proposals are based on posterior uncertainty estimates obtained from GAN-based reconstructions from parts of the k-space acquired thus far. The authors address an interesting and important problem of speeding up MRI scans and thus improving the subject's experience. The proposed method achieves better posterior uncertainty and SSIM scores than competing methods.\\n\\n\\n\\nWhile the paper considers an important problem and takes a novel approach to solve it (using a GAN generative model to estimate uncertainty), I found that it may be particularly inaccessible to non-experts in the field of MRI image processing. Furthermore, several important methodology-related questions remain unanswered in the paper; and the experiments offered in the paper are insufficient for convincingly arguing the author\\u2019s claims.\\n\\nSpecifically, it is unclear whether GANs with their mode dropping behaviour are the right model choice for proposing reconstructions - they are likely to drop modes and by extension - yield overconfident uncertainty estimates. This can be expected to be particularly problematic for scans of images that differ from the training distribution, and is exacerbated by the fact that authors train the model only on scans from healthy subjects. Furthermore, because of the the GANs propensity to drop modes, it is also unclear whether the posterior variance numbers reported in the paper are directly comparable between the methods.\\n\\nThe method used for obtaining the uncertainty estimates from GAN samples implicitly makes the assumption that reconstructions follow a Guassian distribution with a diagonal covariance. This assumption is also made in a competing method of Zhang et. al (2019) that the authors do not compare against, and claim to improve upon methodologically (i.e. the authors state that the method of Zhang et. al (2019) cannot be used to produce uncertainty estimates in Fourier space). I am not convinced that the authors claims about the method differences are sufficiently substantiated (see more under major comments). And because the methods bear significant similarity to each other, an experimental comparison - which is currently missing - should be carried out (on open datasets that ideally include non-healthy subjects).\\n\\nFinally, the paper teases fast(er) MRI in the title, but doesn\\u2019t touch on this topic in the text. This aspect should of the authors contribution be discussed at length, in particular comparing the Cartesian sampling strategy adopted by the authors to other strategies, as well as evaluating the feasibility of implementing the adaptive sampling strategy in an actual scanner (e.g. can the network be ran fast enough)?\\n\\n\\n==================\", \"major_comments\": \"==================\\n\\n1) In the proposed method the authors employ a procedure in which the currently sampled parts y of the k-space are fed to a generator network to obtain n_s reconstructions. These n_s sampled reconstructions are then averaged to obtain the empirical mean and variance, with the latter being used for estimating uncertainty. 
This procedure is potentially problematic for several reasons:\\n\\n a) First, taking the empirical mean and variance of the samples is in fact equivalent to assuming that the reconstructed image follows a Gaussian distribution with a diagonal covariance. This is the same assumption the authors argue is not realistic when discussing the work of Zhang et al. (2019) at the end of Section 1. \\n\\n b) In the case of GANs, which can model multi-modal distributions, this uncertainty estimation is even more problematic in cases when the samples originate from different modes. What do the mean and variance represent then?\\n\\n c) As the authors highlight in the discussion section of the paper, GANs are prone to mode collapse. This is also potentially problematic for their estimator - in case of mode collapse their method would underestimate uncertainty. The fact that the authors are able to use only two sampled reconstructions to estimate the mean and variance with acceptable accuracy is consistent with the occurrence of mode collapse in their generator. Furthermore, because mode collapse may occur in the authors’ model, it is unsurprising that their method yields the smallest posterior variances in Table 1. The authors should provide evidence that mode collapse either does not occur or would not affect these numbers.\\n\\n d) Finally, the authors use the sampled reconstructions to obtain the empirical mean and variance in Fourier (k-) space. Specifically, they argue that “This feature is specific to generative models, as getting samples from P_{X|y_\\omega} allows to transform these to a different domain [...] this is not possible with methods that only provide point-wise estimates of the mean and the variance in image space, such as the one used by Zhang et al. (2019)”. I don’t think this is true - the Fourier transform is a linear transformation, thus given a mean and a variance in image space it is possible to deduce analytically what the mean and covariance in Fourier space would be. This should be elaborated in the text, and the comparison to Zhang et al. (2019) should thus be extended further.\\n\\n\\n2) The proposed method, CLUDAS, was evaluated against existing methods on a single proprietary dataset consisting of only 100 images from healthy individuals. This is potentially problematic, for several reasons:\\n\\n a) Using a proprietary dataset doesn’t allow follow-up works to compare against the authors’ method, or for comparing CLUDAS to existing methods not considered in the paper; the methods should additionally be compared on a public dataset and\\n\\n b) Applying the method only to a single (small) dataset does not allow for reasoning on how the method behaves in different data regimes. I strongly encourage the authors to apply their method on public datasets, for example on data used in Zhang et al. (2019) - this would then allow for comparing the two methods despite not having access to an implementation of Zhang et al. (2019).\\n\\n c) Since the data used to train the GAN model is obtained from healthy individuals, it’s unclear whether it can be used to acquire data from subjects that may potentially have aberrations in their scans - the GAN model would be expected to produce low uncertainty estimates for regions where these aberrations would lie and not propose acquiring parts of the k-space that could be used to resolve these aberrations. For similar reasons using a GAN that potentially drops modes (e.g.
scans of unhealthy individuals, for example because they are less common in the training data) is also problematic. The authors should consider evaluating their method on a dataset that contains non-healthy subjects, and investigate the performance of the method when it is trained on healthy subjects but tested on unhealthy ones.\\n\\n3) The title of the paper (“[...] for fast MRI”) suggests that the authors aim to accelerate the MRI data acquisition process.\\n\\n a) Yet they chose to work with Cartesian sampling (i.e. sampling lines in the k-space parallel to the x-axis), which arguably requires larger sections of the k-space to be sampled before a high-quality reconstruction can be obtained (e.g. see http://mriquestions.com/k-space-trajectories.html). Providing information on how this choice influences the speed of data acquisition (and thus the subject’s comfort) is important in order to assess the applicability of the authors’ method to real-world scenarios. This information should be provided.\\n\\n b) The paper does not actually describe how MRI is sped up. Given that the acquisition budget (number of lines acquired in k-space) is fixed, and assuming that time per line is constant, it is unclear where the speed-up comes in. More generally, the speed claims / aspect of the proposed method should be discussed in more detail - how is speed measured and evaluated? How does it compare to non-Cartesian sampling approaches?\\n\\n c) Finally, it’s unclear whether neural network inference can be made fast enough to allow for a real-world application of the adaptive CLUDAS sampling - can the data be transferred fast enough from the scanner to do inference and propose the next line to scan without causing delays in the scanning process? This should be discussed in the paper.\\n\\n4) Currently, the paper places a strong expectation of knowing about MRI and being familiar with MRI-specific terminology on the reader. This makes the work substantially less accessible to a wide audience with machine learning expertise as the common denominator. The authors should take steps towards making the text more accessible to a non-(MRI-)expert audience, for example by introducing some of the basic knowledge (e.g. the data acquisition and reconstruction processes in MRI) early on, departing from MRI-specific jargon (e.g. k-space, lines in k-space) in favour of ML terminology whenever possible, and taking care to define and possibly illustrate (strongly encouraged) the MRI-specific concepts (e.g. the k-space, lines in k-space, sampling masks). Some specific examples the authors should address follow.\\n\\n a) The k-space is defined only in passing as being the frequency domain. It’s unclear whether these lines (which correspond to sampling masks) in this domain are parallel to the x-axis. The math (e.g. Equation 1) and Figure 2 suggest that, but it’s not obvious.\\nx is referred to both as \"model parameter\" in Section 2 (and Equation 1) as well as the \"ground truth image\" later in the same section. This is confusing, especially because in the case of a GAN, model parameters would typically refer to the parameters of the generator and discriminator.\\n\\n b) SSIM is not defined anywhere, but already used in the abstract.\\n\\n c) It could be made more clear why undersampling is required in the case of MRI. E.g.
how/why does it correlate with patient comfort.\\n\\n d) “K-space sampling” is used already in the introduction, but not really defined.\\n\\n e) It’s unclear whether sampling, subsampling and undersampling all refer to the same concept or not.\\n\\n f) The term “sampling mask” is already used in the introduction, but it is not clear what it refers to.\\n\\n g) Use of the term “innovation” to refer to v_t, which appears to be a one-hot vector marking the newly added line in k-space.\\n\\n h) The use of the term “sampling decision” to refer to v_i.\\n\\n i) Unclear what eta in “Z ~ eta“ in Equation 5 refers to - it is not defined. Later in Section 2.2 it is stated that “z_i are independent samples form Z”. Is this the same as “z_i ~ eta”? If so, why the second layer of notation?\\n\\n k) In Introduction “[...] which is not feasible on a real problem without the ground truth available”. Unclear what the ground truth refers to; I assume it’s the ground truth image.\\n\\n l) In Section 2.1 data refers to x_i, with i=1,...,m, which I assume are images and thus contain real numbers. Yet Section 3 states “As our data are complex [...]”. This is confusing - is that different data?\\n\\n m) Sampling is used ambiguously in the paper - to refer to sampling in the k-space (e.g. sampling masks, sampling decisions v_1,...,v_n) and to refer to sampling reconstructions from the generator. This should be resolved to improve readability of the paper.\\n\\n n) Not entirely clear what k-space pixel-wise variances are. And what the difference between spatial and pixel-wise variances is (Section 2.2).\\n\\n\\n\\n==================\", \"minor_comments\": \"==================\\n\\n1. In the Introduction the authors argue that metrics such as MSE and SSIM “[..] do not align with what clinicians see as valuable.”, yet use these throughout the paper for evaluating and comparing methods. This decision should be explained.\\n\\n2. In Section 2 (and beyond) images x are described as belonging to subspace C^p of complex numbers. While this is technically true, they seem to actually belong to the space R^p. If so, this should be reflected in the text for the benefit of the readers. Furthermore, I don’t think dimensionality p is actually defined anywhere.\\n\\n3. The compressive/compressed sensing abbreviation CS is defined twice in Section 1.\\n\\n4. It’s not entirely clear why “The CS-inspired methods shift the burden from acquisition to reconstruction [...]”\\n\\n5. Incorrect double quotes are used throughout the text (both are right quotes).\\n\\n6. In Introduction “[...] yielding an estimator which can be used to drive back the whole sampling process in a closed-loop fashion.” \\nIt is unclear what it means to “drive back a sampling process”. Perhaps “back” should not be there?\\n\\n7. In some cases it is unclear what the use of double quotes conveys, e.g.\\n\\n a) In Section 2.1 “[...] being the “full” mask [...]”\\n\\n b) In Section 2 “[...] ground truth “complete” images [...]”\\n\\n c) In Section 5 “[...] these inverse problems “depend” from each other [...]”\\n\\n d) Multiple places in Appendix B.\\n\\n8. The convention of using small letters x, y for data samples / instances and capital letters X, Y for random variables could be made explicit.\\n\\n9.
Section 2 mostly provides background (rather than theory) and could be named accordingly.\\n\\n10. In Section 2.1, t refers to “time”. It may be clearer if it were instead referred to as the step of the sampling process or something similar - the use of “time” to refer to some discrete set of actions can be a little confusing.\\n\\n11. Section 2.1 refers to “[...] the online reconstruction speed of DL [...]”. This should be explained further - why are deep learning based approaches to reconstruction faster? Does this depend only on having the right hardware accelerators? Also, I don’t think the abbreviation “DL” was introduced.\\n\\n12. Equation 4 is referred to both as an “Equation” and as a “Problem”.\\n\\n13. In Equation 6 there is a summation over j from v_i. It is my understanding that v_i is a one-hot vector and it’s unclear what this summation means. Presumably it is the summation over pixels covered by line v_i, but the notation doesn’t convey this. It could be nice to also explain the “1D” superscript in this equation.\\n\\n14. In Section 2.2 “[...] this is why the approach of (Adler & Oktem, 2018) minimizes over distance for observation in Equation 4 [...]”. I couldn’t follow the part about minimizing over the distance for an observation. Please consider making this more clear.\\n\\n15. In Equation 8, what does index i run over?\\n\\n16. Minor typos / textual issues:\\n\\n a) In Introduction “[...] of the our estimator [...]”\\n\\n b) In Introduction “[...] and show that even using a few samples [...]” -> even when using?\\n\\n c) In Section 2.2, after Equation 5 “[...] where t After finding the optimal”.\\n\\n d) Inconsistent use of “closed-loop” and “closed loop”.\\n\\n e) In multiple places throughout the paper a double space appears to be used instead of a single one.\\n\\n f) Section 4.1 “as can be in Figure 1”\\n\\n g) Section 4.3 “samples art random”\\n\\n h) In Section 5 “[...] “depend” from each other” should be depend on each other?\\n\\n i) In Section 5 “[...] we we found that [...]”\\n\\n17. Unclear what’s meant by “[...] using the aggregated variance as a loss function” in Section 2.2\\n\\n18. Section 2.2 refers to i* (integer scalar) as a line. Previously it was v_i (vector).\\n\\n19. In Section 2.2 the authors state “Once the generator has been trained until convergence [...]”. The authors optimize a generator in an adversarial fashion. To my knowledge, this training procedure is not guaranteed to converge and would typically oscillate around a stable solution. Could the authors please comment on what they mean by convergence in this case and how they guarantee that the generators they train converge.\\n\\n20. For the posterior variance results in Table 1 it should be discussed whether all the methods obtain / compute the variances the same way.\\n\\n21. The “data consistency layer” (Section 3) should be explained briefly. How does it enforce perfect consistency and what’s meant by consistency here?\\n\\n22. It should be made clear what MSE is calculated between in Figure 1 and Table 1. I assume it’s between ground truth (all frequencies sampled) and reconstructed (only some frequencies sampled) images.\\n\\n23.
It\\u2019s not entirely clear what\\u2019s meant by consistency in Section 4.1\\n\\n24. What do yellow arrows signify in Figure 2?\\n\\n25. I don\\u2019t think the abbreviation UQ was defined in Figure 2.\\n\\n26. The masks in Figure 2 are such that is a certain line was chosen at lower sampling rate (left on the x-axis), it would also be chosen at higher sampling rate (right on the x-axis). This is somewhat unexpected since the budget of lines v_i to be acquired differs between sampling rates. Why is there such consistency?\\n\\n27. In Table 1, in the leftmost column, the number in brackets (number of posterior samples?) should be defined.\\n\\n28. In Table 1 and Section 4.3: LBC-M and LBC-U - the -M and -U suffixes should be explained. What are the differences between the methods?\\n\\n29. In Section 4.3: what are the FE methods?\\n\\n30. Appendix A.3 is mostly a copy-paste from the main text. Unnecessary duplication?\\n\\n31. Axes in Figure 4 (Appendix A.2) are not labeled or described.\\n\\n32. Loss in A.4 (Equation) does not match Equation 5. In the former the discriminator takes three arguments instead of two, and the arguments z1 and z2 are not described.\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper is quite well written and the idea is novel. However, the results are rather weak. The authors present a method to perform adaptive MR compressed sensing, i.e. decide online which readout to sample next. They compare it to an offline learning method where one sampling pattern is optimized for a whole training set, then applied to test data. The offline method performs better in terms of MSE, which is the loss it was trained for, meaning that the authors have not demonstrated a gain in adapting the sampling pattern to individual scans.\\n\\nThe primary concern with the paper is not with the author\\u2019s contribution, but with serious flaws in [Adler 2018] that unfortunately snowball into this one. While the authors in [Adler 2018] do acknowledge issues with learning a variance, they misdiagnose the problem as mode collapse. Mode collapse is an optimization problem, where the training set contains variability but the generator fails to learn it due to the lack of an encoder. That is not the case here: all the variability of the training set is encapsulated in y, and for each y the target empirical posterior distribution is a Dirac. This is very similar to the calibration problem in classification [1], where classifiers become overconfident because they are trained to always output 0 or 1. If the generator does not learn a Dirac, it can only be because of regularization (either explicit or implicit in the model architecture) or optimization failure (either involuntary or voluntary with early stopping.) Tweaking the loss as advocated in [Adler 2018] does not fundamentally change the problem as long as the loss is minimal at the target empirical posterior. It may change the dynamic behavior and result in posteriors with more variance when combined with early stopping, but those variances are not calibrated, i.e. they have not been trained to match the variances of the true continuous posteriors. In order to learn the variances, one would have to either provide multiple posterior samples for each y during training (not practical in this case,) or perform some kind of calibration on the validation dataset as in [1], i.e. learn the mean and variance from different data, which effectively uses the network\\u2019s interpolation properties as a proxy for true random sampling.\\n\\nHowever this flaw does not invalidate the practical approach developed in the paper, but it seriously undermines its qualifications as a rigorous, principled, Bayesian approach. It also makes the reporting of posterior variances as final quality metrics pretty much useless since they are not interpretable: does lower variance mean that the generator got better at estimating the missing information, or that it got worse at estimating the true posterior variance? I would suggest to at least remove the variances highlights from Table 1 and Table 4, and maybe scrap the data altogether. The paragraph on posterior estimation should also be updated to represent the whole scope of the problem.\", \"section_2\": \"Theory: suggest to remove \\u201cWithout loss of generality\\u201d. 
Due to the known issues with variance estimation, having p(y | x) as a density instead of a Dirac could very well change the behavior of the generator.\\n\\nSection 2.1 Adaptive masks. The whole first paragraph is somewhat misleading and should be revised. Real-time reconstruction is indeed possible without deep learning, see [2] for example. Furthermore, real-time reconstruction is not nearly fast enough for adaptive sampling. Real-time reconstruction means that reconstructing an image is at least as fast as scanning the whole image, i.e. on the order of 0.1 to 1 s, but for adaptive sampling one must reconstruct at least as fast as the time between two successive readouts, i.e. on the order of 1 to 10 ms. Both [Jin 2019] and [Zhang 2019] only showed single-coil offline simulations with no indication of the reconstruction time, and so do the authors.\", \"figure_1\": \"I must be missing something here. How can the image-domain and Fourier-domain figures be different? The Fourier transform being orthogonal, norms and variances should be the same in both domains.\\n\\n[1] C. Guo, G. Pleiss, Y. Sun and K. Q. Weinberger, “On Calibration of Modern Neural Networks”, ICML 2017 70:1321-1330.\\n[2] M. Uecker, S. Zhang, and J. Frahm, “Nonlinear Inverse Reconstruction for Real-Time MRI of the Human Heart Using Undersampled Radial FLASH”, MRM 63:1456-1462 (2010).\"}",
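The Figure 1 question above can be checked numerically: with an orthonormal FFT the *total* variance is indeed preserved (Parseval), but the *per-pixel* and *per-frequency* variance maps generally differ, so the two panels need not look alike. A small self-contained check on synthetic heteroscedastic samples (assumed setup, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 "posterior samples" with column-dependent (heteroscedastic) variance.
samples = rng.normal(size=(100, 32, 32)) * np.linspace(0.1, 2.0, 32)

var_img = samples.var(axis=0)                                          # image space
var_k = np.fft.fft2(samples, axes=(-2, -1), norm="ortho").var(axis=0)  # k-space

print(np.allclose(var_img.sum(), var_k.sum()))  # True: total variance preserved
print(np.allclose(var_img, var_k))              # False: pointwise maps differ
```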
"{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an uncertainty driven acquisition for MRI reconstruction. Contrary to most previous approaches (which try to get best reconstruction for a fixed sampling pattern) the method incorporates an adaptive, on-the-fly masking building (which is similar in spirit to Zhang at al. 2019). The measurements to acquire are selected based on variance/uncertainty estimates coming from a conditional GAN model. This is mostly an \\\"application\\\" paper that is evaluated on one dataset.\", \"strengths\": [\"The paper studies an interesting problem of adaptive MRI reconstruction\", \"The review of MRI reconstruction techniques is well scoped\"], \"weaknesses\": \"- The evaluation is rather limited and performed on one, proprietary, relatively small sized dataset\\n- Some simple baselines might be missing\\n\\n\\nI like the idea of adaptive sampling in MRI. However, I'd slightly lean towards rejection of the paper. My main concerns are as follows:\\n\\nThe presentation of the paper could be improved. At the moment, the Theory section describes background information, related work and problem definition as well as the contribution of the paper. Maybe braking the section into related work, background and methodology (where the main contribution is presented) sections would improve the paper readability. \\n\\nThe paper uses a conditional GAN model (with a discriminator from Adler & Oktem, 2018 and a generator that is based on Schlemper et al. 2018). Making the methodological contribution to be rather limited. The main difference w.r.t. the previous papers seem to be the last paragraph of section 2.2 - the empirical variance estimation is performed in Fourier space. \\n\\nA simple baseline to compare might be to train a Unet-like model (e. g. Schlemper et al 2018) with a Gaussian observation model (outputting a mean and a variance per each pixel) and train it to minimize Gaussian NLL. At the test time, one could simply sample from the Gaussian model instead of taking just the argmax of the output. It might be the case that the assumption of gaussian image might be too simplistic, however, it would be interesting to show it experimentally. Note that when sampling from such model the empirical variance estimation could be performed is the Fourier space too.\\n\\nThe experimental evaluation is rather limited and the dataset used in the experimental section is small. Adding another dataset would make the paper stronger.\", \"other_comments\": \"There is a mention on training dataset and testing dataset -- there is no mention on validation set. How were the hyperparamenters of the conditional GAN selected?\\n\\nAs acknowledged by the authors, this paper bears several similarities with the work of Zhang at al. 2019. However, the approach is not compared to Zhang et al. Including this comparison would make the paper stronger.\\n\\nIt is interesting to see that CLUDAS outperforms CLOMDAS in terms of SSIM. If I understand this part properly, CLOMDAS uses ground truth image to estimate MSE. Is it expected that CLUDAS would outperform CLOMDAS? \\n\\nSection 5, Adaptive vs. 
fixed mask: \\\"We also have a simple generalization bound of the obtained mask, relaying on a simple application of Hoeffding's inequality.\\\" Could the authors add a citation or explain this part in more detail?\", \"some_typos\": \"\\\"...we aim make a series...\\\"\\n\\\".. define an closed-loop...\\\"\\n\\\"We choose adopt a greedy\\\"\\n\\\"... we we found that...\\\"\"}",
"{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes an active data acquisition framework for magnectic resonance imaging. A generative adversarial network is used to estimate the posterior distribution of the latent MRI image in a closed-loop and greedy manner where the uncertainty of the posterior image is used to guide the process.\\n\\nThe paper is well written and easy to follow. The main contribution seems to be the combination of deep Bayesian inversion in Adler & Oktem, 2018 with an uncertainty driven sampling framework. Using uncertainty to drive data acquisition and exploration is not a new idea; the concept has been applied to reinforcement learning, active learning, Bayesian optimisation, as instantiations of a broad class of methods in experimental design. The experimental results suggest that the technique can reduce the amount of time required to obtain good quality images from MRI scans which can potentially have a big financial impact. The technique is compared to several variants of compressed sensing approaches demonstrating superior performance.\", \"my_main_concerns_with_the_paper_are\": \"1. The key idea of using uncertainty to guide sampling was also the main concept in Zhang et al. 2019. This submitted paper highlights differences in the models but does not provide an experimental comparison. Since both papers share the same concepts, this reviewer considers that a comparison is critical.\\n\\n2. Deep Bayesian inversion approximates the posterior distribution by minimising the Wasserstein distance between the posterior and a parametrised generator. I find the idea potentially powerful, with the advantage of learning a generative model as well, but wonder how this compares in theory and in practice to simpler stochastic variational inference and modern Hamiltonian MCMC. The min-max formulation is notoriously difficult to optmise and might lead to many local optima and instabilities.\\n\\n3. Given the complexity of learning GANs and the sensitivity to initialization, results should contain more information such as the std of the MSE for several runs of the algorithm.\"}"
]
} |